WorldWideScience

Sample records for filtering techniques applied

  1. Advanced Filtering Techniques Applied to Spaceflight, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — IST-Rolla developed two nonlinear filters for spacecraft orbit determination during the Phase I contract. The theta-D filter and the cost based filter, CBF, were...

  2. A Kalman filter technique applied for medical image reconstruction

    International Nuclear Information System (INIS)

    Goliaei, S.; Ghorshi, S.; Manzuri, M. T.; Mortazavi, M.

    2011-01-01

    Medical images contain information about vital organic tissues inside the human body and are widely used for diagnosis of disease or for surgical purposes. Image reconstruction is essential for some applications, such as suppression of noise or de-blurring, in order to provide images with better quality and contrast. Given the vital role of image reconstruction in the medical sciences, algorithms with better efficiency and higher speed are desirable. Most image reconstruction algorithms operate in the frequency domain, the most popular being filtered back projection. In this paper we introduce a Kalman filter technique which operates in the time domain for medical image reconstruction. Results indicated that as the number of projections increases, for both the normal collected ray sums and the ray sums corrupted by noise, the quality of the reconstructed image improves in terms of contrast and transparency. It is also seen that as the number of projections increases, the error index decreases.
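
The abstract names the technique but not its equations; as a generic illustration, a scalar (one-state) Kalman recursion of the textbook kind, with illustrative noise values rather than the authors' parameters, can be sketched as:

```python
def kalman_1d(measurements, q=1e-4, r=0.25):
    """Scalar Kalman filter: estimate a slowly varying value from noisy samples.

    q: process-noise variance, r: measurement-noise variance (assumed values).
    """
    x, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q               # predict: variance grows by process noise
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update state with the innovation
        p *= (1 - k)         # update variance
        estimates.append(x)
    return estimates

noisy = [1.2, 0.9, 1.1, 1.0, 0.95, 1.05]
print(kalman_1d(noisy)[-1])  # converges toward the underlying value, ~1.0
```

Run over a full projection set, such a recursion trades off the prediction against each new noisy ray sum, which is the general idea behind time-domain reconstruction.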

  3. Study of different filtering techniques applied to spectra from airborne gamma spectrometry.

    Science.gov (United States)

    Wilhelm, Emilien; Gutierrez, Sébastien; Arbor, Nicolas; Ménard, Stéphanie; Nourreddine, Abdel-Mjid

    2016-11-01

    One of the features of the spectra obtained by airborne gamma spectrometry is the low counting statistics due to a short acquisition time (1 s) and a large source-detector distance (40 m), which leads to large statistical fluctuations. These fluctuations introduce large uncertainty into radionuclide identification and the determination of their respective activities with the window method recommended by the IAEA, especially for low-level radioactivity. Different types of filter can be applied to the spectra in order to remove these statistical fluctuations. The present work compares the results obtained with these filters, in terms of errors over the whole gamma energy range of the filtered spectra, against the window method. These results are used to determine which filtering technique is the most suitable in combination with a method for total stripping of the spectrum.
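
As a minimal illustration of the kind of spectrum smoothing such comparisons cover (a generic sketch, not one of the filters evaluated in the paper), a centered moving average over channel counts:

```python
def moving_average(spectrum, window=5):
    """Smooth a gamma spectrum with a centered moving average.

    window should be odd; edges use a shrunken window so the
    output has the same length as the input.
    """
    half = window // 2
    smoothed = []
    for i in range(len(spectrum)):
        lo, hi = max(0, i - half), min(len(spectrum), i + half + 1)
        smoothed.append(sum(spectrum[lo:hi]) / (hi - lo))
    return smoothed

counts = [10, 12, 55, 11, 9, 10, 13, 8]   # a spike of statistical noise
print(moving_average(counts, 3))
```

More elaborate filters (e.g. polynomial least-squares smoothing) follow the same pattern of replacing each channel by a local combination of its neighbours, trading peak resolution against noise suppression.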

  4. Neutron Filter Technique and its use for Fundamental and applied Investigations

    International Nuclear Information System (INIS)

    Gritzay, V.; Kolotyi, V.

    2008-01-01

    At the Kyiv Research Reactor (KRR) the neutron filtered beam technique has been in use for more than 30 years and its development continues; new and updated facilities for neutron cross section measurements yield neutron cross sections with rather high accuracy: total neutron cross sections to 1% or better, and neutron scattering cross sections to 3-6%. The main purpose of this paper is to present the neutron measurement techniques developed at KRR and to demonstrate some experimental results obtained using these techniques.

  5. Applying a particle filtering technique for canola crop growth stage estimation in Canada

    Science.gov (United States)

    Sinha, Abhijit; Tan, Weikai; Li, Yifeng; McNairn, Heather; Jiao, Xianfeng; Hosseini, Mehdi

    2017-10-01

    Accurate crop growth stage estimation is important in precision agriculture as it facilitates improved crop management, pest and disease mitigation and resource planning. Earth observation imagery, specifically Synthetic Aperture Radar (SAR) data, can provide field level growth estimates while covering regional scales. In this paper, RADARSAT-2 quad polarization and TerraSAR-X dual polarization SAR data and ground truth growth stage data are used to model the influence of canola growth stages on SAR imagery extracted parameters. The details of the growth stage modeling work are provided, including a) the development of a new crop growth stage indicator that is continuous and suitable as the state variable in the dynamic estimation procedure; b) a selection procedure for SAR polarimetric parameters that is sensitive to both linear and nonlinear dependency between variables; and c) procedures for compensation of SAR polarimetric parameters for different beam modes. The data were collected over three crop growth seasons in Manitoba, Canada, and the growth model provides the foundation of a novel dynamic filtering framework for real-time estimation of canola growth stages using the multi-sensor and multi-mode SAR data. A description of the dynamic filtering framework that uses a particle filter as the estimator is also provided in this paper.
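
The paper's dynamic model is not reproduced here; a generic bootstrap particle filter cycle for a continuous growth-stage state, with purely illustrative drift and noise values, might be sketched as:

```python
import random, math

def particle_filter_step(particles, weights, observation, drift=0.05,
                         spread=0.02, obs_sigma=0.1):
    """One predict-weight-resample cycle of a bootstrap particle filter.

    The state is a continuous growth-stage indicator, in the spirit of the
    paper; drift/spread/obs_sigma are illustrative values, not the
    authors' model.
    """
    # Predict: growth advances with some process noise
    particles = [p + drift + random.gauss(0, spread) for p in particles]
    # Weight: likelihood of the observation given each particle
    weights = [math.exp(-0.5 * ((observation - p) / obs_sigma) ** 2)
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles proportionally to their weights
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights
```

Repeating this cycle for each SAR acquisition and taking the particle mean gives a running estimate of the growth stage; in practice the observation likelihood would come from the SAR-derived polarimetric parameters rather than a direct reading of the state.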

  6. Digital filtering techniques applied to electric power systems protection; Tecnicas de filtragem digital aplicadas a protecao de sistemas eletricos de potencia

    Energy Technology Data Exchange (ETDEWEB)

    Brito, Helio Glauco Ferreira

    1996-12-31

    This work introduces an analysis and a comparative study of some of the techniques for digital filtering of the voltage and current waveforms from faulted transmission lines. This study is of fundamental importance for the development of algorithms applied to the digital protection of electric power systems. The techniques studied are based on Discrete Fourier Transform theory, the Walsh functions, and Kalman filter theory. Two aspects were emphasized: first, the non-recursive techniques were analyzed, with the implementation of filters based on Fourier theory and the Walsh functions; second, the recursive techniques were analyzed, with the implementation of filters based on Kalman theory and once more on Fourier theory. (author) 56 refs., 25 figs., 16 tabs.
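
As a hedged illustration of the non-recursive Fourier family discussed above, a full-cycle DFT phasor estimator of the kind used in digital relaying (the sample count and amplitude below are made-up values):

```python
import cmath, math

def fullcycle_dft_phasor(samples):
    """Estimate the fundamental phasor from one cycle of N samples
    via the full-cycle DFT, a standard non-recursive relaying filter.
    """
    n = len(samples)
    acc = sum(x * cmath.exp(-2j * math.pi * k / n)
              for k, x in enumerate(samples))
    return 2 * acc / n  # complex phasor of the fundamental component

# One cycle of a 60 Hz cosine of amplitude 100, sampled 16 times per cycle
wave = [100 * math.cos(2 * math.pi * k / 16) for k in range(16)]
phasor = fullcycle_dft_phasor(wave)
print(abs(phasor))  # ≈ 100
```

A relay applies this window sample by sample to the fault waveform; the resulting magnitude and angle feed the protection logic, while Walsh-function and Kalman variants replace the complex exponentials with square-wave bases or a recursive state estimate.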

  7. Inverse Filtering Techniques in Speech Analysis | Nwachuku ...

    African Journals Online (AJOL)

    inverse filtering' has been applied. The unifying features of these techniques are presented, namely: 1. a basis in the source-filter theory of speech production, 2. the use of a network whose transfer function is the inverse of the transfer function of ...

  8. Applying contemporary statistical techniques

    CERN Document Server

    Wilcox, Rand R

    2003-01-01

    Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible.* Assumes no previous training in statistics * Explains how and why modern statistical methods provide more accurate results than conventional methods* Covers the latest developments on multiple comparisons * Includes recent advanc

  9. Applied ALARA techniques

    International Nuclear Information System (INIS)

    Waggoner, L.O.

    1998-01-01

    The presentation focuses on some of the time-proven and new technologies being used to accomplish radiological work. These techniques can be applied at nuclear facilities to reduce radiation doses and protect the environment. The last reactor plants and processing facilities were shut down and Hanford was given a new mission to put the facilities in a safe condition, decontaminate them, and prepare them for decommissioning. The skills that were necessary to operate these facilities were different from the skills needed today to clean up Hanford. Workers were not familiar with many of the tools, equipment, and materials needed to accomplish the new mission, which includes clean up of contaminated areas in and around all the facilities, recovery of reactor fuel from spent fuel pools, and the removal of millions of gallons of highly radioactive waste from 177 underground tanks. In addition, this work has to be done with a reduced number of workers and a smaller budget. At Hanford, facilities contain a myriad of radioactive isotopes located inside plant systems, underground tanks, and the soil. As cleanup work at Hanford began, it became obvious early on that in order to get workers to apply ALARA and use new tools and equipment to accomplish the radiological work, it was necessary to plan the work in advance and get radiological control and/or ALARA committee personnel involved early in the planning process. Emphasis was placed on applying ALARA techniques to reduce dose, limit contamination spread, and minimize the amount of radioactive waste generated. Progress on the cleanup has been steady and Hanford workers have learned to use different types of engineered controls and ALARA techniques to perform radiological work. The purpose of this presentation is to share lessons learned on how Hanford is accomplishing radiological work

  10. Magnetic filtered plasma deposition and implantation technique

    CERN Document Server

    Zhang Hui Xing; Wu Xian Ying

    2002-01-01

    A highly dense metal plasma can be produced using the cathodic vacuum arc discharge technique. The microparticles emitted from the cathode into the metal plasma can be removed when the plasma passes through a magnetic filter. This is a new technique for making high-quality, fine, dense thin films with very widespread applications. The authors describe the applications of the cathodic vacuum arc technique, and then a filtered plasma deposition and ion implantation system as well as its applications

  11. Specific filters applied in nuclear medicine services

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, Vitor S.; Crispim, Verginia R., E-mail: verginia@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear; Brandao, Luis E.B. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ) Rio de Janeiro, RJ (Brazil)

    2011-07-01

    In Nuclear Medicine, radioiodine, in various chemical forms, is a key tracer used in diagnostic practices and/or therapy. Due to its high volatility, medical professionals may incorporate radioactive iodine during the preparation of the dose to be administered to the patient. In radioactive iodine therapy, doses ranging from 3.7 to 7.4 GBq per patient are employed. Thus, aiming at reducing the risk of occupational contamination, we developed a low cost filter, using domestic technology, to be installed at the exit of the exhaust system where doses of radioactive iodine are fractionated. The effectiveness of radioactive iodine retention by silver impregnated silica [10%] crystals and by natural activated carbon was verified using radiotracer techniques. The results showed that natural activated carbon is effective for I2 capture with large or small amounts of substrate, but its use is restricted due to its low flash point (150 °C). Besides, when poisoned by organic solvents, this flash point may become lower, and the carbon may explode if it absorbs large amounts of nitrates. To hold the CH3I gas it was necessary to increase the volume of natural activated carbon, since this gas was not absorbed by the SiO2+Ag crystals. We concluded that, for an exhaust flow range of (306 ± 4) m³/h, a double stage filter using SiO2+Ag in the first stage and natural activated carbon in the second is sufficient to meet radiological safety requirements. (author)

  12. New techniques for far-infrared filters.

    Science.gov (United States)

    Armstrong, K. R.; Low, F. J.

    1973-01-01

    The techniques considered make it possible to construct high performance low-pass wide-band, and medium-band filters at wavelengths in the range from 25 to 300 micrometers. Short wavelength rejection without appreciable loss at long wavelengths is achieved by means of small particle scattering. Spectral definition in the far infrared is obtained by cooling one or more crystalline materials to liquid-He or liquid-nitrogen temperatures. The problem of reflection losses at the various surfaces is solved by a new antireflection coating technique.

  13. Correlation-based nonlinear composite filters applied to image recognition

    Science.gov (United States)

    Martínez-Díaz, Saúl

    2010-08-01

    Correlation-based pattern recognition has been an area of extensive research in the past few decades. Recently, composite nonlinear correlation filters invariant to translation, rotation, and scale were proposed. The design of the filters is based on logical operations and nonlinear correlation. In this work, nonlinear filters are designed and applied to non-homogeneously illuminated images acquired with an optical microscope. The images are embedded in cluttered backgrounds, non-homogeneously illuminated, and corrupted by random noise, which makes the recognition task difficult. The performance of the nonlinear composite filters is compared with that of other composite correlation filters in terms of discrimination capability.

  14. Particle Filtering Applied to Musical Tempo Tracking

    Directory of Open Access Journals (Sweden)

    Macleod Malcolm D

    2004-01-01

    This paper explores the use of particle filters for beat tracking in musical audio examples. The aim is to estimate the time-varying tempo process and to find the time locations of beats, as defined by human perception. Two alternative algorithms are presented, one which performs Rao-Blackwellisation to produce an almost deterministic formulation while the second is a formulation which models tempo as a Brownian motion process. The algorithms have been tested on a large and varied database of examples and results are comparable with the current state of the art. The deterministic algorithm gives the better performance of the two algorithms.

  15. Trend Filtering Techniques for Time Series Analysis

    OpenAIRE

    López Arias, Daniel

    2016-01-01

    Time series can be found almost everywhere in our lives, and because of this, being capable of analysing them is an important task. Most of the time series we can think of are quite noisy, this being one of the main problems in extracting information from them. In this work we use Trend Filtering techniques to try to remove this noise from a series and understand the underlying trend of the series, which gives us information about the behaviour of the series aside from the particular...
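
A common trend filtering formulation minimizes a least-squares fit plus a penalty on the second differences of the trend (the Hodrick-Prescott form; this sketch is generic, not necessarily the thesis' exact method):

```python
import numpy as np

def hp_trend(y, lam=100.0):
    """l2 trend filter (Hodrick-Prescott): find x minimizing
    ||y - x||^2 + lam * ||D2 x||^2, where D2 is the second-difference
    operator, by solving the normal equations (I + lam * D2'D2) x = y.
    """
    n = len(y)
    d2 = np.diff(np.eye(n), n=2, axis=0)      # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * d2.T @ d2, np.asarray(y, float))

noisy = np.linspace(0.0, 1.0, 50) + np.random.default_rng(0).normal(0, 0.1, 50)
trend = hp_trend(noisy)
```

Larger `lam` yields a smoother trend; replacing the squared penalty with an l1 norm gives the piecewise-linear "l1 trend filtering" variant.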

  16. Advanced Filtering Techniques Applied to Spaceflight Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Spacecraft need accurate position and velocity estimates in order to control their orbits. Some missions require more accurate estimates than others, but nearly all...

  17. Adaptive Control Using Residual Mode Filters Applied to Wind Turbines

    Science.gov (United States)

    Frost, Susan A.; Balas, Mark J.

    2011-01-01

    Many dynamic systems containing a large number of modes can benefit from adaptive control techniques, which are well suited to applications that have unknown parameters and poorly known operating conditions. In this paper, we focus on a model reference direct adaptive control approach that has been extended to handle adaptive rejection of persistent disturbances. We extend this adaptive control theory to accommodate problematic modal subsystems of a plant that inhibit the adaptive controller by causing the open-loop plant to be non-minimum phase. We will augment the adaptive controller using a Residual Mode Filter (RMF) to compensate for problematic modal subsystems, thereby allowing the system to satisfy the requirements for the adaptive controller to have guaranteed convergence and bounded gains. We apply these theoretical results to design an adaptive collective pitch controller for a high-fidelity simulation of a utility-scale, variable-speed wind turbine that has minimum phase zeros.

  18. Kalman filter techniques for accelerated Cartesian dynamic cardiac imaging.

    Science.gov (United States)

    Feng, Xue; Salerno, Michael; Kramer, Christopher M; Meyer, Craig H

    2013-05-01

    In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome, and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and signal-to-noise ratio. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view-sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction.

  19. Spectral analysis and filter theory in applied geophysics

    CERN Document Server

    Buttkus, Burkhard

    2000-01-01

    This book is intended to be an introduction to the fundamentals and methods of spectral analysis and filter theory and their applications in geophysics. The principles and theoretical basis of the various methods are described, their efficiency and effectiveness evaluated, and instructions provided for their practical application. Besides the conventional methods, newer methods are discussed, such as the spectral analysis of random processes by fitting models to the observed data, maximum-entropy spectral analysis and maximum-likelihood spectral analysis, the Wiener and Kalman filtering methods, homomorphic deconvolution, and adaptive methods for nonstationary processes. Multidimensional spectral analysis and filtering, as well as multichannel filters, are given extensive treatment. The book provides a survey of the state-of-the-art of spectral analysis and filter theory. The importance and possibilities of spectral analysis and filter theory in geophysics for data acquisition, processing an...

  20. Condition Monitoring of a Process Filter Applying Wireless Vibration Analysis

    Directory of Open Access Journals (Sweden)

    Pekka KOSKELA

    2011-05-01

    This paper presents a novel wireless vibration-based method for monitoring the degree of feed filter clogging. In process industry, these filters are applied to prevent impurities entering the process. During operation, the filters gradually become clogged, decreasing the feed flow and, in the worst case, preventing it. The cleaning of the filter should therefore be carried out predictively in order to avoid equipment damage and unnecessary process downtime. The degree of clogging is estimated by first calculating the time domain indices from low frequency accelerometer samples and then taking the median of the processed values. Nine different statistical quantities are compared based on the estimation accuracy and criteria for operating in resource-constrained environments with particular focus on energy efficiency. The initial results show that the method is able to detect the degree of clogging, and the approach may be applicable to filter clogging monitoring.

  1. A Novel Technique for Inferior Vena Cava Filter Extraction

    International Nuclear Information System (INIS)

    Johnston, Edward William; Rowe, Luke Michael Morgan; Brookes, Jocelyn; Raja, Jowad; Hague, Julian

    2014-01-01

    Inferior vena cava (IVC) filters are used to protect against pulmonary embolism in high-risk patients. Whilst the insertion of retrievable IVC filters is gaining popularity, a proportion of such devices cannot be removed using standard techniques. We describe a novel approach for IVC filter removal that involves snaring the filter superiorly along with the use of flexible forceps or laser devices to dissect the filter struts from the caval wall. This technique has been used to successfully treat, without complications, three patients in whom standard techniques had failed

  2. Processing techniques applying laser technology

    International Nuclear Information System (INIS)

    Yamada, Yuji; Makino, Yoshinobu

    2000-01-01

    The requirements for the processing of nuclear energy equipment include high precision, low distortion, and low heat input. Toshiba has developed laser processing techniques for cutting, welding, and surface heat treatment of nuclear energy equipment because the zone affected by distortion and heat in laser processing is very small. Laser processing contributes to the manufacturing of high-quality and high-reliability equipment and reduces the manufacturing period. (author)

  3. Carbon filter property detection with thermal neutron technique

    International Nuclear Information System (INIS)

    Deng Zhongbo; Han Jun; Li Wenjie

    2003-01-01

    The paper discusses the mechanism by which the antigas property of a carbon filter decreases as its carbon bed absorbs water from the air while the filter is in storage, and introduces the principle and method of detecting the amount of absorbed water with a thermal neutron technique. Because a definite relation exists between the antigas property of the carbon filter and the amount of absorbed water, the degree of degradation of the antigas property can be estimated from the amount of absorbed water, offering a practicable technical pathway for quick, non-destructive detection of the carbon filter's antigas property

  4. Applying polynomial filtering to mass preconditioned Hybrid Monte Carlo

    Science.gov (United States)

    Haar, Taylor; Kamleh, Waseem; Zanotti, James; Nakamura, Yoshifumi

    2017-06-01

    The use of mass preconditioning or Hasenbusch filtering in modern Hybrid Monte Carlo simulations is common. At light quark masses, multiple filters (three or more) are typically used to reduce the cost of generating dynamical gauge fields; however, the task of tuning a large number of Hasenbusch mass terms is non-trivial. The use of short polynomial approximations to the inverse has been shown to provide an effective UV filter for HMC simulations. In this work we investigate the application of polynomial filtering to the mass preconditioned Hybrid Monte Carlo algorithm as a means of introducing many time scales into the molecular dynamics integration with a simplified parameter tuning process. A generalized multi-scale integration scheme that permits arbitrary step-sizes and can be applied to Omelyan-style integrators is also introduced. We find that polynomial-filtered mass-preconditioning (PF-MP) performs as well as or better than standard mass preconditioning, with significantly less fine tuning required.

  5. One-dimensional rainbow technique using Fourier domain filtering.

    Science.gov (United States)

    Wu, Yingchun; Promvongsa, Jantarat; Wu, Xuecheng; Cen, Kefa; Grehan, Gerard; Saengkaew, Sawitree

    2015-11-16

    Rainbow refractometry can measure the refractive index and the size of a droplet simultaneously. The refractive index measurement is extracted from the absolute rainbow scattering angle. Accordingly, the angular calibration is vital for accurate measurements. A new optical design of the one-dimensional rainbow technique is proposed by using a one-dimensional spatial filter in the Fourier domain. The relationship between the scattering angle and the CCD pixel of a recorded rainbow image can be accurately determined by a simple calibration. Moreover, only the light perpendicularly incident on the lens in the angle (φ) direction is selected, which exactly matches the classical inversion algorithm used in rainbow refractometry. Both standard and global one-dimensional rainbow techniques are implemented with the proposed optical design, and are successfully applied to measure the refractive index and the size of a line of n-heptane droplets.

  6. Plasma filtering techniques for nuclear waste remediation.

    Science.gov (United States)

    Gueroult, Renaud; Hobbs, David T; Fisch, Nathaniel J

    2015-10-30

    Nuclear waste cleanup is challenged by the handling of feed stocks that are both unknown and complex. Plasma filtering, operating on dissociated elements, offers advantages over chemical methods in processing such wastes. The costs incurred by plasma mass filtering for nuclear waste pretreatment, before ultimate disposal, are similar to those for chemical pretreatment. However, significant savings might be achieved in minimizing the waste mass. This advantage may be realized over a large range of chemical waste compositions, thereby addressing the heterogeneity of legacy nuclear waste.

  7. Recent Techniques in Design and Implementation of Microwave Planar Filters

    Directory of Open Access Journals (Sweden)

    P. K. Singhal

    2008-12-01

    This paper details the techniques and initiatives made recently for improved response and simultaneous development of microwave planar filters. Although the objective of all the techniques is to design low cost filters of reduced dimensions and compact size with better frequency response, the methodological approaches are quite varied. The paper presents an extensive analysis of these techniques, their concepts, and their design procedures.

  8. Synthesis of Band Filters and Equalizers Using Microwave FIR Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Deibele, C.; /Fermilab

    2000-01-01

    It is desired to design a passive bandpass filter that has both a linear phase and a flat magnitude response within the band, as well as steep skirts. Using the properties of coupled lines together with elementary FIR (Finite Impulse Response) signal processing techniques, a filter with adequate phase response and magnitude control can be produced. The design procedure is first described, and a sample filter is then synthesized and the results shown.

  9. Systematic design of output filters for audio class-D amplifiers via Simplified Real Frequency Technique

    Science.gov (United States)

    Hintzen, E.; Vennemann, T.; Mathis, W.

    2014-11-01

    In this paper a new filter design concept is proposed and implemented which takes into account the complex loudspeaker impedance. By means of broadband matching techniques, which have been successfully applied in radio technology, we are able to optimize the reconstruction filter to achieve an overall linear frequency response. Here, a passive filter network is inserted between source and load that matches the complex load impedance to the complex source impedance within a desired frequency range. The design and calculation of the filter is usually done using numerical approximation methods known as Real Frequency Techniques (RFT). A first approach to the systematic design of reconstruction filters for class-D amplifiers is proposed, using the Simplified Real Frequency Technique (SRFT). Some fundamental considerations are introduced, as well as the benefits and challenges of impedance matching between class-D amplifiers and loudspeakers. Current simulation data using MATLAB is presented and supports some first conclusions.

  10. Dimensionality Reduction Applied to Spam Filtering using Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Tiago A. Almeida

    2011-04-01

    In recent years, e-mail spam has become an increasingly important problem with a big economic impact on society. Fortunately, there are different approaches able to automatically detect and remove most of these messages, and the best-known ones are based on Bayesian decision theory. However, most of these probabilistic approaches share the same difficulty: the high dimensionality of the feature space. Many term selection methods have been proposed in the literature. In this paper, we revise the most popular methods used as term selection techniques with seven different versions of Naive Bayes spam filters.
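
A textbook multinomial Naive Bayes filter with Laplace smoothing, sketched here as a generic illustration (the toy corpus and helper names are invented, and the term selection step the paper studies is omitted):

```python
import math
from collections import Counter

def train_nb(spam_docs, ham_docs):
    """Multinomial Naive Bayes with Laplace smoothing (a textbook sketch,
    not the paper's exact filter variants)."""
    spam_counts = Counter(w for d in spam_docs for w in d.split())
    ham_counts = Counter(w for d in ham_docs for w in d.split())
    vocab = set(spam_counts) | set(ham_counts)
    def log_prob(counts, total):
        # Laplace (add-one) smoothed log-likelihood for every vocabulary word
        return {w: math.log((counts[w] + 1) / (total + len(vocab))) for w in vocab}
    n_total = len(spam_docs) + len(ham_docs)
    return {
        "spam": log_prob(spam_counts, sum(spam_counts.values())),
        "ham": log_prob(ham_counts, sum(ham_counts.values())),
        "prior_spam": math.log(len(spam_docs) / n_total),
        "prior_ham": math.log(len(ham_docs) / n_total),
    }

def classify(model, doc):
    # Compare log-posteriors; unknown words are simply ignored
    s = model["prior_spam"] + sum(model["spam"].get(w, 0) for w in doc.split())
    h = model["prior_ham"] + sum(model["ham"].get(w, 0) for w in doc.split())
    return "spam" if s > h else "ham"

m = train_nb(["win money now", "free money"], ["meeting at noon", "lunch at noon"])
print(classify(m, "free money now"))  # spam
```

Term selection would shrink the vocabulary before training, which is exactly the dimensionality reduction the paper evaluates across its seven Naive Bayes versions.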

  11. Optical supervised filtering technique based on Hopfield neural network

    Science.gov (United States)

    Bal, Abdullah

    2004-11-01

    The Hopfield neural network is commonly preferred for optimization problems. In image segmentation, conventional Hopfield neural networks (HNN) are formulated as a cost-function-minimization problem to perform gray level thresholding on the image histogram or on the pixels' gray levels arranged in a one-dimensional array [R. Sammouda, N. Niki, H. Nishitani, Pattern Rec. 30 (1997) 921-927; K.S. Cheng, J.S. Lin, C.W. Mao, IEEE Trans. Med. Imag. 15 (1996) 560-567; C. Chang, P. Chung, Image and Vision Comp. 19 (2001) 669-678]. In this paper, a new high speed supervised filtering technique is proposed for image feature extraction and enhancement problems by modifying the conventional HNN. The essential improvement in this technique is the use of a 2D convolution operation instead of weight-matrix multiplication. Thereby, a new neural-network-based filtering technique has been obtained that requires just a 3 × 3 filter mask matrix instead of a large weight-coefficient matrix. Optical implementation of the proposed filtering technique is easily executed using the joint transform correlator. The requirement of non-negative data for optical implementation is met by a bias technique that converts the bipolar data to non-negative data. Simulation results of the proposed optical supervised filtering technique are reported for various feature extraction problems such as edge detection, corner detection, horizontal and vertical line extraction, and fingerprint enhancement.
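
The 3 × 3 mask filtering that replaces the weight-matrix multiplication can be sketched as plain 2D spatial filtering (a generic illustration; the Laplacian mask is a standard edge-detection choice, not necessarily the paper's trained mask):

```python
def convolve3x3(image, mask):
    """Filter an image with a 3x3 mask, the operation that replaces
    weight-matrix multiplication in the modified HNN.
    Border pixels are left at zero for simplicity."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(image[i + di][j + dj] * mask[1 + di][1 + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
    return out

laplacian = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]  # classic edge-detection mask
flat = [[5] * 5 for _ in range(5)]
print(convolve3x3(flat, laplacian)[2][2])  # 0 on a flat region: no edges
```

Because the mask is only 3 × 3, the per-pixel cost is constant, which is the speed advantage the paper claims over a full weight-coefficient matrix.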

  12. Preparation of Porcelanite Ceramic Filter by Slip Casting Technique

    Directory of Open Access Journals (Sweden)

    Majid Muhi Shukur

    2016-09-01

    This work studies the production of a solid block porcelanite filter from Iraqi porcelanite rocks and kaolin clay (as binder material) by the slip casting technique, and investigates its ability to remove a contaminant (pentachlorophenol) from water via the adsorption mechanism. Four particle sizes (74, 88, 105 and 125 µm) of porcelanite powder were used. Each batch of particle size was mixed with 30 wt.% kaolin as a binding material to improve the mechanical properties. The mixtures were then formed by slip casting into disk and cylindrical filter samples and fired at 500 and 700 °C to determine the effects of porcelanite particle size, temperature and formation technique on the filter properties. Physical, mechanical and chemical tests were carried out on the filter samples, and multiple experiments evaluated the ability of porcelanite to form the filter. Porosity, permeability and maximum pore diameter increased with increasing porcelanite particle size and decreased with increasing temperature, whereas the density showed the reverse behavior. In addition, the bending, compressive and tensile strength of the samples increased with increasing temperature and decreased with increasing porcelanite particle size. The efficiency of the disk filter sample in removing pentachlorophenol was 95.41% at a temperature of 700 °C using the 74 µm particle size of porcelanite, while the efficiency of the cylindrical filter sample was 97.57% under the same conditions.

  13. New filter for iodine applied in nuclear medicine services.

    Science.gov (United States)

    Ramos, V S; Crispim, V R; Brandão, L E B

    2013-12-01

    In Nuclear Medicine, radioiodine, in various chemical forms, is a key tracer used in diagnostic practices and/or therapy. Medical professionals may incorporate radioactive iodine during preparation of the dose to be administered to the patient. In radioactive iodine therapy, doses ranging from 3.7 to 7.4 GBq per patient are employed. Thus, aiming at reducing the risk of occupational contamination, we developed a low-cost filter, using domestic technology, to be installed at the exit of the exhaust system of the fume hoods where doses of radioiodine are handled. The effectiveness of radioactive iodine retention by silver-impregnated silica [10%] crystals and natural activated carbon was verified using radiotracer techniques. The results showed that natural activated carbon and silver-impregnated silica are effective for I2 capture with large or small amounts of substrate, but the use of activated carbon is restricted by its low flash point (423 K). Moreover, when poisoned by organic solvents this flash point may become lower, and absorption of large amounts of nitrates can cause explosions. To retain CH3I gas it was necessary to use natural activated carbon, since CH3I is not adsorbed by the SiO2+Ag crystals. We concluded that, for an exhaust flow range of (145 ± 2) m³/h, a double-stage filter using SiO2+Ag in the first stage and natural activated carbon in the second stage is sufficient to meet radiological safety requirements. © 2013 Elsevier Ltd. All rights reserved.

  14. A Study about Kalman Filters Applied to Embedded Sensors

    Science.gov (United States)

    Valade, Aurélien; Acco, Pascal; Grabolosa, Pierre; Fourniols, Jean-Yves

    2017-01-01

    Over the last decade, smart sensors have grown in complexity and can now handle multiple measurement sources. This work establishes a methodology to achieve better estimates of physical values by processing raw measurements within a sensor using multi-physical models and Kalman filters for data fusion. With production cost and power consumption as driving constraints, this methodology focuses on algorithmic complexity while meeting real-time constraints and improving both precision and reliability despite the limitations of low-power processors. Consequently, processing time available for other tasks is maximized. The known problem of estimating a 2D orientation using an inertial measurement unit with automatic gyroscope bias compensation is used to illustrate the proposed methodology applied to a low-power STM32L053 microcontroller. This application shows promising results, with a processing time of 1.18 ms at 32 MHz and a 3.8% CPU usage due to computation at a 26 Hz measurement and estimation rate. PMID:29206187
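    The gyroscope-bias-compensation problem described above can be sketched with a minimal single-axis Kalman filter (state = [angle, gyro bias]; all noise parameters and the synthetic data are illustrative, not those of the paper):

```python
import numpy as np

def kalman_orientation(gyro, accel_angle, dt=1/26, q_angle=1e-3, q_bias=1e-5, r=0.03):
    """Minimal 1-axis Kalman filter fusing gyro rate and accelerometer angle,
    with automatic gyroscope bias estimation (state = [angle, gyro_bias])."""
    x = np.zeros(2)                            # [angle, bias]
    P = np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])     # angle integrates (rate - bias)
    Q = np.diag([q_angle, q_bias])
    H = np.array([[1.0, 0.0]])                 # only the angle is measured
    est = []
    for w, z in zip(gyro, accel_angle):
        # predict: integrate the bias-corrected gyro rate
        x = F @ x + np.array([dt * w, 0.0])
        P = F @ P @ F.T + Q
        # update with the accelerometer-derived angle
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        est.append(x.copy())
    return np.array(est)

# Synthetic test: constant true angle of 1.0 rad, gyro with a 0.5 rad/s bias.
rng = np.random.default_rng(1)
n = 500
gyro = 0.5 + 0.05 * rng.standard_normal(n)     # true rate 0, bias 0.5
accel = 1.0 + 0.2 * rng.standard_normal(n)     # noisy angle observations
est = kalman_orientation(gyro, accel)
print(est[-1])  # angle estimate near 1.0, bias estimate near 0.5
```

    The bias becomes observable because an uncorrected bias makes the integrated gyro angle drift away from the accelerometer angle; the filter's cross-covariance channels that discrepancy into the bias state.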

  15. Early counterpulse technique applied to vacuum interrupters

    International Nuclear Information System (INIS)

    Warren, R.W.

    1979-11-01

    Interruption of dc currents using counterpulse techniques is investigated with vacuum interrupters and a novel approach in which the counterpulse is applied before contact separation. In this way, important increases have been achieved in the maximum interruptible current, as well as large reductions in contact erosion. The factors establishing these new limits are presented, and ways to make further improvements are discussed

  16. Statistical analysis and Kalman filtering applied to nuclear materials accountancy

    International Nuclear Information System (INIS)

    Annibal, P.S.

    1990-08-01

    Much theoretical research has been carried out on the development of statistical methods for nuclear material accountancy. In practice, physical, financial and time constraints mean that the techniques must be adapted to give an optimal performance in plant conditions. This thesis aims to bridge the gap between theory and practice, to show the benefits to be gained from a knowledge of the facility operation. Four different aspects are considered; firstly, the use of redundant measurements to reduce the error on the estimate of the mass of heavy metal in an 'accountancy tank' is investigated. Secondly, an analysis of the calibration data for the same tank is presented, establishing bounds for the error and suggesting a means of reducing them. Thirdly, a plant-specific method of producing an optimal statistic from the input, output and inventory data, to help decide between 'material loss' and 'no loss' hypotheses, is developed and compared with existing general techniques. Finally, an application of the Kalman Filter to materials accountancy is developed, to demonstrate the advantages of state-estimation techniques. The results of the analyses and comparisons illustrate the importance of taking into account a complete and accurate knowledge of the plant operation, measurement system, and calibration methods, to derive meaningful results from statistical tests on materials accountancy data, and to give a better understanding of critical random and systematic error sources. The analyses were carried out on the head-end of the Fast Reactor Reprocessing Plant, where fuel from the prototype fast reactor is cut up and dissolved. However, the techniques described are general in their application. (author)

  17. Variance-to-mean method generalized by linear difference filter technique

    International Nuclear Information System (INIS)

    Hashimoto, Kengo; Ohsaki, Hiroshi; Horiguchi, Tetsuo; Yamane, Yoshihiro; Shiroya, Seiji

    1998-01-01

    The conventional variance-to-mean method (Feynman-α method) suffers seriously from divergence of the variance under transient conditions such as a reactor power drift. Strictly speaking, then, the use of the Feynman-α method is restricted to a steady state. To apply the method to more practical uses, it is desirable to overcome this kind of difficulty. For this purpose, we propose the use of a higher-order difference filter technique to reduce the effect of the reactor power drift, and derive several new formulae taking account of the filtering. The capability of the proposed formulae was demonstrated through experiments in the Kyoto University Critical Assembly. The experimental results indicate that the divergence of the variance can be effectively suppressed by the filtering technique, and that a higher-order filter becomes necessary with increasing variation rate in power
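    The effect of a difference filter on drifting count data can be illustrated with a quick simulation (a first-order filter on synthetic Poisson gate counts; the paper derives the general higher-order formulae, which this sketch does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
drift = np.linspace(100, 300, n)          # linearly drifting reactor power
counts = rng.poisson(drift)               # gate counts under the power drift

def variance_to_mean(c):
    return c.var() / c.mean()

# Raw Feynman-Y diverges under drift: the trend inflates the variance.
y_raw = variance_to_mean(counts)

# First-order difference filter d_k = c_{k+1} - c_k removes the linear trend;
# for pure Poisson noise Var(d) = 2 * mean(c), so normalise accordingly.
d = np.diff(counts)
y_filtered = d.var() / (2 * counts.mean())

print(y_raw, y_filtered)  # y_raw >> 1, y_filtered close to 1 (pure Poisson)
```

    For uncorrelated Poisson counts the variance-to-mean ratio should be 1; the raw statistic is far above 1 because of the drift, while the differenced statistic recovers the Poisson value, which is the behaviour the filtering technique exploits.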

  18. Applying AI techniques to improve alarm display effectiveness

    International Nuclear Information System (INIS)

    Gross, J.M.; Birrer, S.A.; Crosberg, D.R.

    1987-01-01

    The Alarm Filtering System (AFS) addresses the problem of information overload in a control room during abnormal operations. Since operators can miss vital information during these periods, systems which emphasize important messages are beneficial. AFS uses the artificial intelligence (AI) technique of object-oriented programming to filter and dynamically prioritize alarm messages. When an alarm's status changes, AFS determines the relative importance of that change according to the current process state. AFS bases that relative importance on the relationships the newly changed alarm has with other activated alarms. Evaluation of an alarm's importance takes place without regard to the activation sequence of alarm signals. The United States Department of Energy has applied for a patent on the approach used in this software. The approach was originally developed by EG and G Idaho for a nuclear reactor control room

  19. Scaling function based on Chinese remainder theorem applied to a recursive filter design

    Directory of Open Access Journals (Sweden)

    Stamenković Negovan

    2014-01-01

    Full Text Available Implementation of IIR filters in residue number system (RNS) architecture is more complex than that of FIR filters, due to the introduction of the scaling function. This function performs division by a constant factor, usually a power of two, followed by rounding; in this way, dynamic range reduction in digital systems is achieved. Different methods for implementing the scaling operation have been presented in the literature. In this paper, some RNS dynamic range reduction techniques are analyzed, and the application of one selected technique is then presented through an example. In all RNS calculations, the power-of-two moduli set {2^n − 1, 2^n, 2^n + 1} has been applied. [Projekat Ministarstva nauke Republike Srbije, br. 32009TR
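    A minimal sketch of RNS arithmetic with the {2^n − 1, 2^n, 2^n + 1} moduli set, using Chinese-remainder-theorem reconstruction (illustrative only; the paper's scaling hardware is not reproduced here, and n = 4 is an arbitrary choice):

```python
from math import prod

def rns_encode(x, moduli):
    return [x % m for m in moduli]

def rns_decode(residues, moduli):
    """Chinese remainder theorem reconstruction (moduli pairwise coprime)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)      # pow(Mi, -1, m): modular inverse
    return x % M

n = 4
moduli = [2**n - 1, 2**n, 2**n + 1]       # {15, 16, 17}: dynamic range 4080

x = 2500
res = rns_encode(x, moduli)

# RNS addition is digit-parallel: each residue channel works independently,
# which is the source of the architecture's speed.
y = 3
summed = [(a + b) % m for a, b, m in zip(res, rns_encode(y, moduli), moduli)]
print(rns_decode(summed, moduli))  # 2503
```

    The difficulty the paper addresses is visible here by omission: addition and multiplication stay within the residue channels, but division by a constant (scaling) has no such channel-local form, which is why dedicated scaling techniques are needed.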

  20. Optimal Filtering applied to 1998 Test Beam of Module 0

    CERN Document Server

    Camarena, F; Fullana, E

    2002-01-01

    Optimal filtering is an algorithm that allows the reconstruction of energy and time for a multiply sampled photomultiplier signal, minimizing the noise coming from electronics and Minimum Bias events. This is anticipated to be the method used in ATLAS. This note treats the application of the optimal filtering technique to real data from the test beam and compares it with the method used until now.
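    The standard optimal-filtering amplitude estimator — a weighted sum of the samples that is unbiased and minimizes the noise variance — can be sketched as follows (the pulse shape and noise covariance are assumed values for illustration, not Module 0 data):

```python
import numpy as np

# Known normalised pulse shape at the sampling instants (assumed values).
g = np.array([0.05, 0.55, 1.00, 0.72, 0.31])
C = np.eye(5) * 0.04                      # noise covariance (white here)

# Optimal filter weights: minimum-variance unbiased amplitude estimator,
# a_hat = w . s  with  w = C^-1 g / (g^T C^-1 g),  so that w . g = 1.
Cinv = np.linalg.inv(C)
w = Cinv @ g / (g @ Cinv @ g)

rng = np.random.default_rng(3)
A_true = 7.5                              # "deposited energy" amplitude
samples = A_true * g + rng.multivariate_normal(np.zeros(5), C, size=2000)
a_hat = samples @ w
print(a_hat.mean(), a_hat.std())          # mean near 7.5, std below the
                                          # single-sample noise of 0.2
```

    The constraint w·g = 1 makes the estimator unbiased, and weighting by the inverse noise covariance is what suppresses electronics and pile-up noise relative to a naive peak-sample readout.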

  1. Structural DMR: A Technique for Implementation of Soft Error Tolerant FIR Filters

    OpenAIRE

    Reviriego, P.; Bleakley, Chris J.; Maestro, J.A.

    2011-01-01

    In this brief, an efficient technique for implementation of soft-error-tolerant finite impulse response (FIR) filters is presented. The proposed technique uses two implementations of the basic filter with different structures operating in parallel. A soft error occurring in either filter causes the outputs of the filters to differ, or mismatch, for at least one sample. The filters are specifically designed so that, when a soft error occurs, they produce distinct error patterns at the filter o...

  2. Frequency Selective Surface Bandpass Filters Applied To Thermophotovoltaic Generators

    Science.gov (United States)

    Horne, W. E.; Morgan, Mark D.; Horne, W. Paul; Sundaram, Vasan S.

    2004-11-01

    EDTEK, Inc. is developing three TPV applications, a portable diesel fueled generator for military and remote users, a hybrid solar-gas fueled power system intended for light industry and commercial 24-hour use, and a radioisotope fueled generator for deep-space spacecraft. The application of FSS bandpass filters for spectral control in these three different TPV applications has been analyzed. It has been determined that the design of the filter cannot be evaluated solely on the parameters of the filter itself. The interactions between the filter and the emitter and the TPV cells must be taken into account. In addition to the technical analysis of the converter, the overall system losses must be included in the analysis and the design requirements such as fuel efficiency, weight, generator size, cost and other factors must be included in the analysis. The analysis shows that the FSS filters are useful for producing the three systems with good efficiencies; however, different designs are required for the filters for each application.

  3. A nowcasting technique based on application of the particle filter blending algorithm

    Science.gov (United States)

    Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai

    2017-10-01

    To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied in the quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm were used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method is proved to be superior to the traditional forecasting methods and can be used to enhance the ability of nowcasting in operational weather forecasts.

  4. Filters or Holt Winters Technique to Improve the SPF Forecasts for USA Inflation Rate?

    Directory of Open Access Journals (Sweden)

    Mihaela Bratu (Simionescu)

    2013-02-01

    Full Text Available In this study, transformations of SPF inflation forecasts were made in order to get more accurate predictions. The application of filters and the Holt-Winters technique were chosen as possible strategies for improving prediction accuracy. The quarterly inflation rate forecasts (1975 Q1-2012 Q3) for the USA made by SPF were transformed using an exponential smoothing technique (Holt-Winters), and these new predictions are better than the initial ones for all forecasting horizons of up to 4 quarters. Some filters were applied to the SPF forecasts (Hodrick-Prescott, Band-Pass and Christiano-Fitzgerald filters), but the Holt-Winters method was superior. Full-sample asymmetric (Christiano-Fitzgerald) and Band-Pass filter smoothed values are more accurate than the SPF expectations only for some forecast horizons.
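    The trend-tracking core of the Holt-Winters technique (Holt's double exponential smoothing, shown here without the seasonal term; smoothing parameters are illustrative) can be sketched as follows:

```python
import numpy as np

def holt_linear(y, alpha=0.5, beta=0.3, h=4):
    """Holt's double exponential smoothing (the trend core of Holt-Winters);
    returns the h-step-ahead forecasts made from the end of the series."""
    level, trend = y[0], y[1] - y[0]
    for x in y[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return np.array([level + (k + 1) * trend for k in range(h)])

# Sanity check: a noiseless linear series is forecast exactly by the method.
series = np.arange(1.0, 21.0)             # 1, 2, ..., 20
print(holt_linear(series))                # -> [21. 22. 23. 24.]
```

    On real inflation series the level and trend updates damp the noise while following persistent movements, which is the property the study exploits when re-smoothing the SPF forecasts.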

  5. Applying well flow adapted filtering to transient pumping tests

    Science.gov (United States)

    Zech, Alraune; Attinger, Sabine

    2014-05-01

    Transient pumping tests are often used to estimate porous medium characteristics like hydraulic conductivity and storativity. The interpretation of pumping test drawdowns is based on methods which are normally developed under the assumption of homogeneous porous media. However, aquifer heterogeneity strongly impacts the well flow pattern, in particular in the vicinity of the pumping well. The purpose of this work is to present a method to interpret drawdowns of transient pumping tests in heterogeneous porous media. With this method we are able to describe the effects that statistical quantities like variance and correlation length have on pumping test drawdowns. Furthermore, it allows inferring the statistical parameters of aquifer heterogeneity from drawdown data by inverse estimation, which is not possible using methods for homogeneous media like Theis' solution. The method is based on a representative description of hydraulic conductivity for radial flow regimes. It is derived from a well-flow-adapted filtering procedure (Coarse Graining), where the heterogeneity of hydraulic conductivity is assumed to be log-normally distributed with a Gaussian correlation structure. Applying the upscaled hydraulic conductivity to the groundwater flow equation results in a hydraulic head which depends on the statistical parameters of the porous medium; it describes the drawdown of a transient pumping test in heterogeneous media. We used an ensemble of transient pumping test simulations to verify the upscaled drawdown solution. We generated transient pumping tests in heterogeneous media for various values of the statistical parameters variance and correlation length and evaluated their impact on the drawdown behavior as well as on its temporal evolution. We further examined the impact of several aspects, like the location of an observation well or the local conductivity at the pumping well, on the drawdown behavior.
This work can be understood as an expansion of the work of Zech et

  6. Repartition of oil miscible and water soluble UV filters in an applied sunscreen film determined by confocal Raman microspectroscopy.

    Science.gov (United States)

    Sohn, Myriam; Buehler, Theodor; Imanidis, Georgios

    2016-07-06

    Photoprotection provided by topical sunscreens is expressed by the sun protection factor (SPF) which depends primarily on the UV filters contained in the product and the applied sunscreen amount. Recently, the vehicle was shown to significantly impact film thickness distribution of an applied sunscreen and sunscreen efficacy. In the present work, repartition of the UV filters within the sunscreen film upon application is investigated for its role to affect sun protection efficacy. The spatial repartition of an oil-miscible and a water-soluble UV filter within the sunscreen film was studied using confocal Raman microspectroscopy. Epidermis of pig ear skin was used as substrate for application of three different sunscreen formulations, an oil-in-water emulsion, a water-in-oil emulsion, and a clear lipo-alcoholic spray (CAS) and SPF in vitro was measured. Considerable differences in the repartition of the UV filters upon application and evaporation of volatile ingredients were found between the tested formulations. A nearly continuous phase of lipid-miscible UV filter was formed only for the WO formulation with dispersed aggregates of water-soluble UV filter. OW emulsion and CAS exhibited interspersed patches of the two UV filters, whereas the segregated UV filter domains of the latter formulation were by comparison of a much larger scale and spanned the entire thickness of the sunscreen film. CAS therefore differed markedly from the other two formulations with respect to filter repartition. This difference should be reflected in SPF when the absorption spectra of the employed UV filters are not the same. Confocal Raman microspectroscopy was shown to be a powerful technique for studying this mechanism of sun protection performance of sunscreens.

  7. Applying Brainstorming Techniques to EFL Classroom

    OpenAIRE

    Toshiya, Oishi; 湘北短期大学; Part-time Lecturer at Shohoku College

    2015-01-01

    This paper focuses on brainstorming techniques for English language learners. From the author's teaching experiences at Shohoku College during the academic year 2014-2015, the importance of brainstorming techniques was made evident. The author explored three elements of brainstorming techniques for writing using literature reviews: lack of awareness, connecting to prior knowledge, and creativity. The literature reviews showed the advantage of using brainstorming techniques in an English compos...

  8. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising.

    Science.gov (United States)

    Khan, Khan Bahadar; Khaliq, Amir A; Jalil, Abdul; Shahid, Muhammad

    2018-01-01

    The exploration of retinal vessel structure is colossally important on account of numerous diseases, including stroke, Diabetic Retinopathy (DR) and coronary heart disease, which can damage the retinal vessel structure. The retinal vascular network is very hard to extract due to its spreading and diminishing geometry and contrast variation in an image. The proposed technique consists of unique parallel processes for denoising and extraction of blood vessels in retinal images. In the preprocessing section, an adaptive histogram equalization enhances dissimilarity between the vessels and the background, and morphological top-hat filters are employed to eliminate the macula and optic disc, etc. To remove local noise, the difference of images is computed from the top-hat filtered image and the high-boost filtered image. The Frangi filter is applied at multiple scales for the enhancement of vessels possessing diverse widths. Segmentation is performed by using improved Otsu thresholding on the high-boost filtered image and Frangi's enhanced image, separately. In the postprocessing steps, a Vessel Location Map (VLM) is extracted by using raster-to-vector transformation. Postprocessing steps are employed in a novel way to reject misclassified vessel pixels. The final segmented image is obtained by using a pixel-by-pixel AND operation between the VLM and the Frangi output image. The method has been rigorously analyzed on the STARE, DRIVE and HRF datasets.

  9. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising.

    Directory of Open Access Journals (Sweden)

    Khan Bahadar Khan

    Full Text Available The exploration of retinal vessel structure is colossally important on account of numerous diseases, including stroke, Diabetic Retinopathy (DR) and coronary heart disease, which can damage the retinal vessel structure. The retinal vascular network is very hard to extract due to its spreading and diminishing geometry and contrast variation in an image. The proposed technique consists of unique parallel processes for denoising and extraction of blood vessels in retinal images. In the preprocessing section, an adaptive histogram equalization enhances dissimilarity between the vessels and the background, and morphological top-hat filters are employed to eliminate the macula and optic disc, etc. To remove local noise, the difference of images is computed from the top-hat filtered image and the high-boost filtered image. The Frangi filter is applied at multiple scales for the enhancement of vessels possessing diverse widths. Segmentation is performed by using improved Otsu thresholding on the high-boost filtered image and Frangi's enhanced image, separately. In the postprocessing steps, a Vessel Location Map (VLM) is extracted by using raster-to-vector transformation. Postprocessing steps are employed in a novel way to reject misclassified vessel pixels. The final segmented image is obtained by using a pixel-by-pixel AND operation between the VLM and the Frangi output image. The method has been rigorously analyzed on the STARE, DRIVE and HRF datasets.

  10. Fourier-filtering techniques for the analysis of high-resolution pulsed neutron powder diffraction data

    International Nuclear Information System (INIS)

    Richardson, J.W. Jr.; Faber, J. Jr.

    1985-01-01

    Rietveld profile refinements using high-resolution pulsed neutron powder diffraction data, collected at IPNS, often reveal broad intensity contributions from sources other than the crystalline materials being studied. Such non-crystalline intensity hampers standard Rietveld refinement, and its removal and/or identification is imperative for successful refinement of the crystalline structure. A Fourier-filtering technique allows removal of the non-crystalline scattering contributions to the overall scattering pattern and yields information about the noncrystalline material. In particular, Fourier transformation of residual intensities not accounted for by the Rietveld procedure results in a real-space correlation function similar to a radial distribution function (RDF). From the inverse Fourier transform of the correlation function a Fourier-filtered fit to the diffuse scattering is obtained. This mathematical technique was applied to data for crystalline quartz, amorphous silica, and to a simulated diffraction pattern for a mixture of the two phases. 7 refs., 4 figs., 1 tab
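    The principle behind Fourier filtering of diffuse scattering — broad features concentrate in the low-frequency part of the transform, sharp Bragg-like peaks do not — can be sketched on a synthetic pattern (illustrative only; peak positions, widths and the cutoff are invented, and this is not the IPNS code):

```python
import numpy as np

rng = np.random.default_rng(4)
q = np.linspace(0, 10, 1024)

# Synthetic pattern: sharp "crystalline" peaks on a broad "amorphous" hump.
sharp = sum(np.exp(-(q - c)**2 / (2 * 0.02**2)) for c in (2.0, 4.5, 7.0))
diffuse = 0.6 * np.exp(-(q - 5.0)**2 / (2 * 2.0**2))
pattern = sharp + diffuse + 0.01 * rng.standard_normal(q.size)

# Fourier filter: keep only the lowest-frequency coefficients, which carry
# the broad (non-crystalline) component, and transform back.
F = np.fft.rfft(pattern)
cutoff = 15
F_low = np.zeros_like(F)
F_low[:cutoff] = F[:cutoff]
diffuse_est = np.fft.irfft(F_low, n=q.size)

err = np.abs(diffuse_est - diffuse).mean()
print(err)   # small compared to the diffuse amplitude of 0.6
```

    Subtracting `diffuse_est` from the pattern would leave the sharp peaks for standard Rietveld refinement, while `diffuse_est` itself is the estimate of the non-crystalline scattering that the real-space correlation-function analysis then interprets.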

  11. Whitelists Based Multiple Filtering Techniques in SCADA Sensor Networks

    Directory of Open Access Journals (Sweden)

    DongHo Kang

    2014-01-01

    Full Text Available The Internet of Things (IoT) consists of several tiny devices connected together to form a collaborative computing environment. Recently, IoT technologies have begun to merge with supervisory control and data acquisition (SCADA) sensor networks to more efficiently gather and analyze real-time data from sensors in industrial environments. But SCADA sensor networks are becoming more and more vulnerable to cyber-attacks due to increased connectivity. To safely adopt IoT technologies in SCADA environments, it is important to improve the security of SCADA sensor networks. In this paper we propose a multiple filtering technique based on whitelists to detect illegitimate packets. Our proposed system detects the traffic of network and application protocol attacks with a set of whitelists collected from normal traffic.
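    A two-stage whitelist filter of the kind described above can be sketched as follows (the addresses, port and command names are invented for illustration and are not from the paper):

```python
# Stage 1: network-level whitelist of allowed (src, dst, port) flows.
# Stage 2: application-level whitelist of allowed protocol commands.
NETWORK_WHITELIST = {
    ("10.0.0.5", "10.0.0.9", 502),    # e.g. HMI -> PLC, Modbus/TCP
    ("10.0.0.6", "10.0.0.9", 502),
}
APP_WHITELIST = {"READ_COILS", "READ_HOLDING_REGISTERS"}

def filter_packet(src, dst, port, command):
    """Return True only if the packet passes both whitelist stages."""
    if (src, dst, port) not in NETWORK_WHITELIST:
        return False                   # stage 1: unknown flow
    return command in APP_WHITELIST    # stage 2: unknown command

print(filter_packet("10.0.0.5", "10.0.0.9", 502, "READ_COILS"))    # True
print(filter_packet("10.0.0.7", "10.0.0.9", 502, "READ_COILS"))    # False
print(filter_packet("10.0.0.5", "10.0.0.9", 502, "WRITE_COILS"))   # False
```

    The design choice mirrors the paper's premise: because SCADA traffic is highly regular, whitelists learned from normal traffic can reject both unknown flows and known flows carrying unexpected commands.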

  12. Assessment and evaluation of ceramic filter cleaning techniques: Task Order 19

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H.; Zaharchuk, R.; Harbaugh, L.B.; Klett, M.

    1994-10-01

    The objective of this study was to assess and evaluate the effectiveness, appropriateness and economics of ceramic barrier filter cleaning techniques used for high-temperature and high-pressure particulate filtration. Three potential filter cleaning techniques were evaluated: conventional on-line pulse-driven reverse gas filter cleaning, off-line reverse gas filter cleaning, and a novel rapid pulse-driven filter cleaning. These three ceramic filter cleaning techniques are either presently employed, or being considered for use, in the filtration of coal-derived gas streams (combustion or gasification) under high-temperature, high-pressure conditions. This study was divided into six subtasks: first-principle analysis of ceramic barrier filter cleaning mechanisms; operational values for parameters identified with the filter cleaning mechanisms; evaluation and identification of potential ceramic filter cleaning techniques; development of conceptual designs for ceramic barrier filter systems and ceramic barrier filter cleaning systems for two DOE-specified power plants; evaluation of ceramic barrier filter system cleaning techniques; and final report and presentation. Within individual sections of this report, critical design and operational issues were evaluated and key findings were identified.

  13. Nuclear radioactive techniques applied to materials research

    CERN Document Server

    Correia, João Guilherme; Wahl, Ulrich

    2011-01-01

    In this paper we review materials characterization techniques using radioactive isotopes at the ISOLDE/CERN facility. At ISOLDE intense beams of chemically clean radioactive isotopes are provided by selective ion-sources and high-resolution isotope separators, which are coupled on-line with particle accelerators. There, new experiments are performed by an increasing number of materials researchers, which use nuclear spectroscopic techniques such as Mössbauer, Perturbed Angular Correlations (PAC), beta-NMR and Emission Channeling with short-lived isotopes not available elsewhere. Additionally, diffusion studies and traditionally non-radioactive techniques as Deep Level Transient Spectroscopy, Hall effect and Photoluminescence measurements are performed on radioactive doped samples, providing in this way the element signature upon correlation of the time dependence of the signal with the isotope transmutation half-life. Current developments, applications and perspectives of using radioactive ion beams and tech...

  14. FIR Filter Sharpening by Frequency Masking and Pipelining-Interleaving Technique

    Directory of Open Access Journals (Sweden)

    CIRIC, M. P.

    2014-11-01

    Full Text Available This paper focuses on the improvement of digital filters with a highly sharp transition zone on Xilinx FPGA chips by combining a sharpening method based on the amplitude change function with frequency masking and pipelining-interleaving (PI) techniques. A linear phase requires digital filter realizations with Finite Impulse Response (FIR) filters. On the other hand, a drawback of FIR filters in applications is low computational efficiency, especially in applications such as filter sharpening, because this technique processes the data by repeated passes through the same filter. The computational efficiency of FIR filters can be significantly improved by using some of the multirate techniques, and such a degree of computational savings cannot be achieved in multirate implementations of IIR (Infinite Impulse Response) filters. This paper shows the realization of a filter sharpening method with FIR filters combined with the frequency masking and PI techniques in order to effectively realize a filter with an improved characteristic, while keeping the good features of FIR filters such as the linear phase characteristic.
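    Sharpening by an amplitude change function — here the classic polynomial 3H² − 2H³ — can be sketched in the time domain as cascaded convolutions of the prototype impulse response (the prototype, a short moving average, is chosen for illustration; the paper's masked multirate structure is not reproduced):

```python
import numpy as np

def sharpen(h):
    """Sharpening via the amplitude change function 3H^2 - 2H^3, realised in
    the time domain as cascaded convolutions of the FIR prototype h."""
    h2 = np.convolve(h, h)
    h3 = np.convolve(h2, h)
    # align the group delays: pad h2 to the length of h3 before subtracting
    pad = (len(h3) - len(h2)) // 2
    return 3 * np.pad(h2, (pad, pad)) - 2 * h3

# Prototype: 9-tap moving-average lowpass (linear phase is preserved).
h = np.ones(9) / 9
hs = sharpen(h)

# The sharpened filter has a flatter passband: compare deviations near DC.
w = np.fft.rfftfreq(1024)
H = np.abs(np.fft.rfft(h, 1024))
Hs = np.abs(np.fft.rfft(hs, 1024))
band = w < 0.02
print(np.abs(1 - Hs[band]).max(), np.abs(1 - H[band]).max())
```

    Because 3A² − 2A³ maps amplitudes near 1 closer to 1 and amplitudes near 0 closer to 0, each data pass through the same prototype steepens the transition band, which is exactly why repeated passes (and hence the PI hardware reuse) are attractive.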

  15. Günther Tulip inferior vena cava filter retrieval using a bidirectional loop-snare technique.

    Science.gov (United States)

    Ross, Jordan; Allison, Stephen; Vaidya, Sandeep; Monroe, Eric

    2016-01-01

    Many advanced techniques have been reported in the literature for difficult Günther Tulip filter removal. This report describes a bidirectional loop-snare technique in the setting of fibrin scar formation around the filter leg anchors. The bidirectional loop-snare technique allows for maximal axial tension and alignment for stripping fibrin scar from the filter legs, a commonly encountered complication of prolonged dwell times.

  16. Denoising of electron beam Monte Carlo dose distributions using digital filtering techniques

    International Nuclear Information System (INIS)

    Deasy, Joseph O.

    2000-01-01

    The Monte Carlo (MC) method has long been viewed as the ultimate dose distribution computational technique. The inherent stochastic dose fluctuations (i.e. noise), however, have several important disadvantages: noise will affect estimates of all the relevant dosimetric and radiobiological indices, and noise will degrade the resulting dose contour visualizations. We suggest the use of a post-processing denoising step to reduce statistical fluctuations and also improve dose contour visualization. We report the results of applying four different two-dimensional digital smoothing filters to two-dimensional dose images. The Integrated Tiger Series MC code was used to generate 10 MeV electron beam dose distributions at various depths in two different phantoms. The observed qualitative effects of filtering include: (a) the suppression of voxel-to-voxel (high-frequency) noise and (b) the resulting contour plots are visually more comprehensible. Drawbacks include, in some cases, slight blurring of penumbra near the surface and slight blurring of other very sharp real dosimetric features. Of the four digital filters considered here, one, a filter based on a local least-squares principle, appears to suppress noise with negligible degradation of real dosimetric features. We conclude that denoising of electron beam MC dose distributions is feasible and will yield improved dosimetric reliability and improved visualization of dose distributions. (author)
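    A denoising filter based on a local least-squares principle, similar in spirit to one of the four filters compared above, can be sketched with a separable 2D Savitzky-Golay smoother (synthetic dose image and window size chosen for illustration; this is not the paper's filter or data):

```python
import numpy as np

# Savitzky-Golay weights: quadratic local least-squares fit on a 5-sample
# window (the classic [-3, 12, 17, 12, -3]/35 kernel), applied separably
# along both axes of the 2D dose image.
SG5 = np.array([-3.0, 12.0, 17.0, 12.0, -3.0]) / 35.0

def smooth2d(img):
    out = np.apply_along_axis(lambda r: np.convolve(r, SG5, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, SG5, mode="same"), 0, out)

# Synthetic "dose image": smooth dose blob plus MC-like statistical noise.
rng = np.random.default_rng(7)
y, x = np.mgrid[0:64, 0:64]
dose = np.exp(-((x - 32)**2 + (y - 32)**2) / 400.0)
noisy = dose + 0.05 * rng.standard_normal(dose.shape)

denoised = smooth2d(noisy)
interior = (slice(4, -4), slice(4, -4))   # ignore window edge effects
rms_before = np.sqrt(np.mean((noisy[interior] - dose[interior])**2))
rms_after = np.sqrt(np.mean((denoised[interior] - dose[interior])**2))
print(rms_before, rms_after)              # noise RMS drops after filtering
```

    Because the kernel reproduces local quadratics exactly, slowly varying dose gradients pass through with little bias while the voxel-to-voxel statistical noise is averaged down, matching the qualitative behaviour the paper reports for its least-squares filter.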

  17. Basic principles of applied nuclear techniques

    International Nuclear Information System (INIS)

    Basson, J.K.

    1976-01-01

    The technological applications of radioactive isotopes and radiation in South Africa have grown steadily since the first consignment of man-made radioisotopes reached this country in 1948. By the end of 1975 there were 412 authorised non-medical organisations (327 industries) using hundreds of sealed sources as well as their fair share of the thousands of radioisotope consignments, annually either imported or produced locally (mainly for medical purposes). Consequently, it is necessary for South African technologists to understand the principles of radioactivity in order to appreciate the industrial applications of nuclear techniques

  18. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-10-01

    Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process with the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow, and streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows, due to preservation of sample diversity from the kernel, even when particle impoverishment takes place.
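    A minimal sequential importance resampling (SIR) particle filter — the baseline against which the paper's LRPF is compared — can be sketched for a scalar state (an illustrative random-walk model, not the WEP hydrologic model; all noise values are invented):

```python
import numpy as np

def sir_particle_filter(obs, n_particles=2000, proc_std=0.2, obs_std=0.5):
    """Minimal SIR particle filter for a scalar random-walk state observed
    in Gaussian noise: propagate, weight, estimate, resample."""
    rng = np.random.default_rng(5)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in obs:
        # propagate each particle with process noise
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # weight by the observation likelihood
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))   # posterior mean
        # multinomial resampling to combat weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)

# Track a slowly drifting true state from noisy observations.
rng = np.random.default_rng(6)
truth = np.cumsum(rng.normal(0, 0.2, 300))
obs = truth + rng.normal(0, 0.5, 300)
est = sir_particle_filter(obs)
print(np.mean((est - truth)**2))  # well below the raw observation error
```

    The resampling step is where impoverishment arises (many identical copies of a few particles), which is the weakness the paper's regularization and MCMC move step are designed to mitigate.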

  19. Dosimetry techniques applied to thermoluminescent age estimation

    International Nuclear Information System (INIS)

    Erramli, H.

    1986-12-01

    The reliability and ease of field application of natural-radioactivity dosimetry techniques are studied. The natural radioactivity in minerals is composed of the internal dose deposited by alpha and beta radiation issued from the sample itself, and the external dose deposited by gamma and cosmic radiation issued from the surroundings of the sample. Two techniques for external dosimetry are examined in detail: TL dosimetry and field gamma dosimetry. Calibration and experimental conditions are presented. A new integrated dosimetric method for internal and external dose measurement is proposed: the TL dosimeter is placed in the soil under exactly the same conditions as the sample, for a time long enough for the total dose evaluation [fr

  20. Nuclear analytical techniques applied to forensic chemistry

    International Nuclear Information System (INIS)

    Nicolau, Veronica; Montoro, Silvia; Pratta, Nora; Giandomenico, Angel Di

    1999-01-01

    Gun shot residues produced by firing guns are mainly composed of visible particles. The individual characterization of these particles makes it possible to distinguish those containing heavy metals, originating from gun shot residues, from those having a different origin or history. In this work, the results obtained from the study of gun shot residue particles collected from hands are presented. The aim of the analysis is to establish whether a person has shot a firearm or has been in contact with one after the shot was produced. As reference samples, particles collected from the hands of persons engaged in different activities were studied for comparison. The complete study was based on the application of nuclear analytical techniques such as Scanning Electron Microscopy, Energy Dispersive X-Ray Electron Probe Microanalysis and Graphite Furnace Atomic Absorption Spectrometry. The assays can be completed within times compatible with forensic requirements. (author)

  1. Motion Capture Technique Applied Research in Sports Technique Diagnosis

    Directory of Open Access Journals (Sweden)

    Zhiwu LIU

    2014-09-01

    Full Text Available The motion capture technology system is defined and its components are described in this paper. Key parameters are obtained from the motion technique, quantitative analyses are made of technical movements, and a motion capture method is proposed for sport technique diagnosis. The motion capture steps include calibrating the system, attaching landmarks to the tester, capturing trajectories, and analyzing the collected data.

  2. Spectral element filtering techniques for large eddy simulation with dynamic estimation

    CERN Document Server

    Blackburn, H M

    2003-01-01

    Spectral element methods have previously been successfully applied to direct numerical simulation of turbulent flows with moderate geometrical complexity and low to moderate Reynolds numbers. A natural extension of application is to large eddy simulation of turbulent flows, although there has been little published work in this area. One of the obstacles to such application is the ability to deal successfully with turbulence modelling in the presence of solid walls in arbitrary locations. An appropriate tool with which to tackle the problem is dynamic estimation of turbulence model parameters, but while this has been successfully applied to simulation of turbulent wall-bounded flows, typically in the context of spectral and finite volume methods, there have been no published applications with spectral element methods. Here, we describe approaches based on element-level spectral filtering, couple these with the dynamic procedure, and apply the techniques to large eddy simulation of a prototype wall-bounded turb...

  3. Filter assessment applied to analytical reconstruction for industrial third-generation tomography

    International Nuclear Information System (INIS)

    Velo, Alexandre F.; Martins, Joao F.T.; Oliveira, Adriano S.; Carvalho, Diego V.S.; Faria, Fernando S.; Hamada, Margarida M.; Mesquita, Carlos H.

    2015-01-01

    Multiphase systems are structures that contain a mixture of solids, liquids and gases inside a chemical reactor or pipes in a dynamic process. These systems are found in the chemical, food, pharmaceutical and petrochemical industries. The gamma-ray computed tomography (CT) system has been applied to visualize the distribution of multiphase systems without interrupting production. CT systems have been used to improve design, operation and troubleshooting of industrial processes, and computed tomography for multiphase processes is being developed at several laboratories. It is well known that scanning systems demand high processing time and offer a limited set of data projections and views to obtain an image. As a result, the image quality depends on the number of projections, number of detectors, acquisition time and reconstruction time. A phantom containing air, iron and aluminum was used on the third-generation industrial tomograph with a 662 keV (137Cs) radioactive source, and the Filtered Back Projection algorithm was applied to reconstruct the images. An efficient tomograph depends on image quality, so the objective of this research was to apply different types of filters in the analytical algorithm and compare them using the figure of merit denominated root mean squared error (RMSE); the filter that presents the lowest RMSE has the best quality. In this research, five types of filters were used: the Ram-Lak, Shepp-Logan, Cosine, Hamming and Hann filters. As a result, all filters presented low values of RMSE, which means the filters used have low standard deviation compared to the mass absorption coefficient; however, the Hann filter presented better RMSE and CNR than the others. (author)
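The five filters named above are all the ideal ramp filter multiplied by a different apodizing window in the frequency domain. As a rough sketch (standard textbook window definitions, not code from the paper), the frequency responses and the RMSE figure of merit can be written as:

```python
import numpy as np

def fbp_filter(name, n):
    """Frequency response of common filtered-back-projection filters on n FFT bins."""
    f = np.fft.fftfreq(n)            # frequency in cycles/sample, in [-0.5, 0.5)
    ramp = np.abs(f)                 # Ram-Lak: the ideal ramp |f|
    fc = 0.5                         # Nyquist cutoff
    window = {
        "ram-lak":     np.ones(n),
        "shepp-logan": np.sinc(f / (2 * fc)),
        "cosine":      np.cos(np.pi * f / (2 * fc)),
        "hamming":     0.54 + 0.46 * np.cos(np.pi * f / fc),
        "hann":        0.5 * (1 + np.cos(np.pi * f / fc)),
    }[name]
    return ramp * window

def rmse(a, b):
    """Root-mean-squared-error figure of merit between two images."""
    return np.sqrt(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))
```

Smoother windows (Hann in particular) roll the ramp off toward zero at the Nyquist frequency, trading a little resolution for noise suppression, which is consistent with Hann scoring best here.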

  4. Applying the digital-image-correlation technique to measure the ...

    Indian Academy of Sciences (India)

    digital-image-correlation) technique is used to measure the deformation of the retrofitted column. The result shows that the DIC technique can be successfully applied to measure the relative displacement of the column. Additionally, this method ...

  5. Proofs and Techniques Useful for Deriving the Kalman Filter

    National Research Council Canada - National Science Library

    Koks, Don

    2008-01-01

    This note is a tutorial in matrix manipulation and the normal distribution of statistics, concepts that are important for deriving and analysing the Kalman Filter, a basic tool of signal processing...

  6. Sparse Estimation Techniques for l1 Mean and Trend Filtering

    OpenAIRE

    Johan, Ottersten

    2015-01-01

    It is often desirable to find the underlying trends in time series data. This is a well-known signal processing problem that has many applications in areas such as financial data analysis, climatology, biological and medical sciences etc. Mean filtering finds a piece-wise constant trend in the data while trend filtering finds a piece-wise linear trend. When the signal is noisy, the main difficulty is finding the changing points in the data. These are the points where the mean or the trend changes....

  7. Enhancement of Seebeck coefficient in graphene superlattices by electron filtering technique

    Science.gov (United States)

    Mishra, Shakti Kumar; Kumar, Amar; Kaushik, Chetan Prakash; Dikshit, Biswaranjan

    2018-01-01

    We show theoretically that the Seebeck coefficient and the thermoelectric figure of merit can be increased by using an electron filtering technique in graphene superlattice based thermoelectric devices. The average Seebeck coefficient for graphene-based thermoelectric devices is proportional to the integral of the distribution of the Seebeck coefficient versus electron energy. The low-energy electrons in the distribution curve are found to reduce the average Seebeck coefficient, as their contribution is negative. We show that, with an electron energy filtering technique using multiple graphene superlattice heterostructures, the low-energy electrons can be filtered out and the Seebeck coefficient increased. The multiple graphene superlattice heterostructures can be formed by graphene superlattices with different periodic electric potentials applied above the superlattice. The overall electronic band gap of the multiple heterostructures depends on the individual band gaps of the graphene superlattices and can be tuned by varying the periodic electric potentials. It must be chosen such that the low-energy electrons, which cause a negative Seebeck distribution in single graphene superlattice thermoelectric devices, fall within the overall band gap formed by the multiple heterostructures. Although the electrical conductance is decreased by this technique, which reduces the thermoelectric figure of merit, the overall figure of merit is increased due to the large increase in the Seebeck coefficient and the figure of merit's quadratic dependence on it. This is an easy technique to make graphene superlattice based thermoelectric devices more efficient, and it has the potential to significantly improve the technology of energy harvesting and sensors.

  8. Comparison of edge detection techniques for M7 subtype Leukemic cell in terms of noise filters and threshold value

    Directory of Open Access Journals (Sweden)

    Abdul Salam Afifah Salmi

    2017-01-01

    Full Text Available This paper focuses on identifying suitable threshold values for two commonly used edge detection techniques, Sobel and Canny edge detection. The idea is to determine which values give accurate results in identifying a particular leukemic cell. In addition, evaluating the suitability of edge detectors is essential, as feature extraction of the cell depends greatly on image segmentation (edge detection). Firstly, an image of the M7 subtype of Acute Myelocytic Leukemia (AML) is chosen because its diagnosis was found lacking. Next, to enhance image quality, noise filters are applied; by comparing images with no filter, a median filter and an average filter, useful information can be acquired. Threshold values of 0, 0.25 and 0.5 are tested for each detector. The investigation found that, without any filter, Canny with a threshold value of 0.5 yields the best result.
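For illustration, Sobel edge detection with a normalized threshold can be sketched in plain NumPy. This is a toy binary image, not a leukemic cell micrograph; the 16x16 image and the 0.25 threshold are invented for the demo (a median or average pre-filter would be applied to the image before this step in the pipeline described above).

```python
import numpy as np

def filter2(img, k):
    """Naive 'same' 2-D cross-correlation with zero padding (fine for small demo images)."""
    p = k.shape[0] // 2
    padded = np.pad(img, p)
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def sobel_edges(img, threshold):
    """Sobel gradient magnitude, normalised to [0, 1], then thresholded."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    mag = np.hypot(filter2(img, kx), filter2(img, kx.T))
    mag = mag / mag.max() if mag.max() > 0 else mag
    return mag > threshold

# Demo: a bright square on a dark background; edges sit on the square's border.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
edges = sobel_edges(img, threshold=0.25)
```

Varying `threshold` between 0, 0.25 and 0.5, as in the study, trades sensitivity to faint boundaries against noise rejection.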

  9. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1993-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large-scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large-area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 µm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday, using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques have proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  10. Nonlinear Filtering Techniques Comparison for Battery State Estimation

    Directory of Open Access Journals (Sweden)

    Aspasia Papazoglou

    2014-09-01

    Full Text Available The performance of estimation algorithms is vital for the correct functioning of batteries in electric vehicles, as poor estimates will inevitably jeopardize the operations that rely on unmeasurable quantities, such as State of Charge and State of Health. This paper compares the performance of three nonlinear estimation algorithms, the Extended Kalman Filter, the Unscented Kalman Filter and the Particle Filter, applied to a lithium-ion cell model. The effectiveness of these algorithms is measured by their ability to produce accurate estimates against their computational complexity in terms of the number of operations and execution time required. The trade-offs between the estimators' performance and their computational complexity are analyzed.
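As an illustration of the first of these estimators, a scalar extended Kalman filter for state-of-charge estimation can be sketched on a toy one-state cell model. The open-circuit-voltage curve, capacity, resistance and noise levels below are invented demo values, not the lithium-ion cell model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy battery model (illustrative parameters, not from the paper):
Q = 3600.0                                       # capacity in coulombs
R0 = 0.05                                        # ohmic resistance in ohms
dt = 1.0                                         # sample period in seconds
ocv = lambda s: 3.0 + 0.7 * s + 0.1 * s ** 2     # nonlinear open-circuit voltage
docv = lambda s: 0.7 + 0.2 * s                   # its derivative (EKF measurement Jacobian)

def ekf_soc(currents, voltages, s0=0.5, p0=0.1, q=1e-7, r=1e-4):
    """Scalar extended Kalman filter estimating state of charge from terminal voltage."""
    s, p = s0, p0
    for i, v in zip(currents, voltages):
        # Predict: coulomb counting (linear state transition, F = 1).
        s = s - i * dt / Q
        p = p + q
        # Update: linearise the measurement model around the prediction.
        h = docv(s)
        k = p * h / (h * p * h + r)
        s = s + k * (v - (ocv(s) - R0 * i))
        p = (1 - k * h) * p
    return s

# Simulate a 1 A discharge from 80% SoC with measurement noise, then estimate.
steps, i_load, s_true = 600, 1.0, 0.8
currents, voltages = [], []
for _ in range(steps):
    s_true -= i_load * dt / Q
    currents.append(i_load)
    voltages.append(ocv(s_true) - R0 * i_load + rng.normal(0.0, 0.01))
s_hat = ekf_soc(currents, voltages, s0=0.5)
```

Starting 30 SoC-points off, the voltage corrections pull the estimate onto the true trajectory; the UKF and particle filter in the paper replace the linearisation step with sigma points and weighted samples, respectively.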

  11. Cointegration versus traditional econometric techniques in applied economics

    OpenAIRE

    Joachim Zietz

    2000-01-01

    The paper illustrates some of the well-known problems with cointegration analysis in order to provide some perspective on the usefulness of cointegration techniques in applied economics. A number of numerical examples are employed to compare econometric estimation on the basis of both traditional autoregressive distributed lag models and currently popular cointegration techniques. The results suggest that, first, cointegration techniques need to be applied with great care and that, second, th...

  12. B-spline design of digital FIR filter using evolutionary computation techniques

    Science.gov (United States)

    Swain, Manorama; Panda, Rutuparna

    2011-10-01

    Digital filters are increasingly becoming a true replacement for analog filter designs. This paper examines a design method for FIR filters using global search optimization techniques known as evolutionary computation, via the genetic algorithm and bacterial foraging, where the filter design is considered as an optimization problem. An effort is made to design maximally flat filters using a generalized B-spline window. The key to our success is the fact that the bandwidth of the filter response can be modified by changing tuning parameters incorporated within the B-spline function. A direct approach has been deployed to design B-spline-window-based FIR digital filters. Four parameters (order, width, length and tuning parameter) have been optimized using GA and EBFS. It is observed that the desired response can be obtained with lower-order FIR filters with optimal width and tuning parameters.
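The windowed-sinc construction behind such designs can be sketched as follows. Here the B-spline window is approximated by repeated convolution of rectangular windows (a standard property of B-splines), and the tap count, spline order and cutoff are arbitrary demo values rather than the optimized parameters found by GA/EBFS in the paper.

```python
import numpy as np

def bspline_window(n, order=3):
    """Approximate B-spline window: order-fold convolution of a rectangular window."""
    box = np.ones(max(n // order, 1))
    win = box
    for _ in range(order - 1):
        win = np.convolve(win, box)
    win = win / win.max()
    # Resample the convolved window to exactly n taps.
    return np.interp(np.linspace(0, len(win) - 1, n), np.arange(len(win)), win)

def fir_lowpass(n_taps, cutoff):
    """Windowed-sinc low-pass FIR; cutoff in cycles/sample (0 < cutoff < 0.5)."""
    m = np.arange(n_taps) - (n_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * m)   # ideal low-pass impulse response
    h = h * bspline_window(n_taps)             # taper to suppress Gibbs ripple
    return h / h.sum()                         # normalise for unity gain at DC

h = fir_lowpass(31, cutoff=0.1)
```

The evolutionary search in the paper then tunes the window and filter parameters to meet a flatness objective; the sketch above only fixes them by hand.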

  13. Assessment of Snared-Loop Technique When Standard Retrieval of Inferior Vena Cava Filters Fails

    International Nuclear Information System (INIS)

    Doody, Orla; Noe, Geertje; Given, Mark F.; Foley, Peter T.; Lyon, Stuart M.

    2009-01-01

    Purpose To identify the success and complications related to a variant technique used to retrieve inferior vena cava filters when simple snare approach has failed. Methods A retrospective review of all Cook Guenther Tulip filters and Cook Celect filters retrieved between July 2006 and February 2008 was performed. During this period, 130 filter retrievals were attempted. In 33 cases, the standard retrieval technique failed. Retrieval was subsequently attempted with our modified retrieval technique. Results The retrieval was successful in 23 cases (mean dwell time, 171.84 days; range, 5-505 days) and unsuccessful in 10 cases (mean dwell time, 162.2 days; range, 94-360 days). Our filter retrievability rates increased from 74.6% with the standard retrieval method to 92.3% when the snared-loop technique was used. Unsuccessful retrieval was due to significant endothelialization (n = 9) and caval penetration by the filter (n = 1). A single complication occurred in the group, in a patient developing pulmonary emboli after attempted retrieval. Conclusion The technique we describe increased the retrievability of the two filters studied. Hook endothelialization is the main factor resulting in failed retrieval and continues to be a limitation with these filters.

  14. Lateral spreading of topically applied UV filter substances investigated by tape stripping.

    Science.gov (United States)

    Jacobi, U; Weigmann, H-J; Baumann, M; Reiche, A-I; Sterry, W; Lademann, J

    2004-01-01

    The lateral spreading of topically applied substances is a competitive process to the penetration into the stratum corneum (SC). The penetration of topically applied UV filter substances into the human SC and the lateral spreading were investigated in vivo. Tape stripping in combination with spectroscopic measurements was used to study both processes of two UV filter substances. The concentration of both UV filters was determined inside and outside the application area by varying the application and tape stripping protocol. A spreading of the topically applied substances from the treated to the untreated areas was observed, which caused a concentration gradient. This lateral spreading depends on the time between application and tape stripping and the size of the treated skin area. Significant amounts of topically applied substances were found adjoining the application area, due to the lateral spreading which takes place on the skin surface. In general, the lateral spreading must be considered to be a competitive process when studying penetration processes of topically applied substances. It has to be considered during drug treatment of small limited skin areas and for the interpretation of recovery rates obtained in penetration studies. Copyright 2004 S. Karger AG, Basel

  15. Post-correlation filtering techniques for off-axis source and RFI removal

    NARCIS (Netherlands)

    Offringa, A. R.; de Bruyn, A. G.; Zaroubi, S.

    Techniques to improve the data quality of interferometric radio observations are considered. Fundamentals of fringe frequencies in the uv-plane are discussed, and filters are used to attenuate radio-frequency interference (RFI) and off-axis sources. Several new applications of filters are introduced.

  16. Recleaning of HEPA filters by reverse flow - evaluation of the underlying processes and the cleaning technique

    International Nuclear Information System (INIS)

    Leibold, H.; Leiber, T.; Doeffert, I.; Wilhelm, J.G.

    1993-08-01

    HEPA filter operation at high concentrations of fine dusts requires periodic recleaning of the filter units in their service locations. Due to the low mechanical stress induced during the recleaning process, regeneration via low-pressure reverse flow is a very suitable technique. Recleanability of HEPA filters has been attained for particle diameters >0.4 μm at air velocities up to 1 m/s, but filter clogging occurred in the case of smaller particles. The recleaning forces are too weak for particles [de

  17. Stability Study of Filtering Techniques in Pictures of mini-MIAS Database; Estudio de Estabilidad de Tecnicas de Filtrado en Imagenes de la Base de Datos mini-MIAS

    Energy Technology Data Exchange (ETDEWEB)

    Parcero, E.; Vidal, V.; Verdu, G.; Mayo, P.

    2014-07-01

    The study of filtering techniques applied to medical imaging is particularly important because it can be decisive for an accurate diagnosis. This work aims to study the stability of the Fuzzy Peer Group Averaging filter when applied to mammographic images of different nature, in relation to the type of tissue abnormality found and the diagnosis. The results show that the filter is effective, since a PSNR value of 27 was obtained by comparing the filtered image with the original, and a value of 17 by comparing the filtered image with the noise-contaminated one. They also show that the filter behaves properly regardless of the image characteristics. (Author)
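The PSNR figure of merit quoted above is simple to compute. A minimal sketch follows; the peak value and the example images are placeholders, not the mammograms from the study.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images: PSNR is unbounded
    return 10.0 * np.log10(peak ** 2 / mse)
```

A higher PSNR against the original and a lower PSNR against the noisy input, as reported above, is the expected signature of an effective denoising filter.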

  18. Chemical vapor deposition: A technique for applying protective coatings

    Energy Technology Data Exchange (ETDEWEB)

    Wallace, T.C. Sr.; Bowman, M.G.

    1979-01-01

    Chemical vapor deposition is discussed as a technique for applying coatings for materials protection in energy systems. The fundamentals of the process are emphasized in order to establish a basis for understanding the relative advantages and limitations of the technique. Several examples of the successful application of CVD coating are described. 31 refs., and 18 figs.

  19. Evaluating factorial kriging for seismic attributes filtering: a geostatistical filter applied to reservoir characterization; Avaliacao da krigagem fatorial na filtragem de atributos sismicos: um filtro geoestatistico aplicado a caracterizacao de reservatorios

    Energy Technology Data Exchange (ETDEWEB)

    Mundim, Evaldo Cesario

    1999-02-01

    In this dissertation, Factorial Kriging analysis for the filtering of seismic attributes applied to reservoir characterization is considered. Factorial Kriging works in the spatial domain in a way similar to Spectral Analysis in the frequency domain. The incorporation of filtered attributes via External Drift Kriging and Collocated Cokriging in reservoir characterization estimates is discussed. Its relevance for the reservoir porous volume calculation is also evaluated, based on a comparative analysis of the volume risk curves derived from stochastic conditional simulations with a collocated variable and stochastic conditional simulations with external drift. The results prove Factorial Kriging to be an efficient technique for the filtering of seismic attribute images, in which geologic features are enhanced. The attribute filtering improves the correlation between the attributes and the well data and the estimates of the reservoir properties. The differences between the estimates obtained by External Drift Kriging and Collocated Cokriging are also reduced. (author)

  20. Preliminary study of an angiographic and angio-tomographic technique based on K-edge filters

    Energy Technology Data Exchange (ETDEWEB)

    Golosio, Bruno; Brunetti, Antonio [Dipartimento POLCOMING, Istituto di Matematica e Fisica, Università di Sassari, 07100 Sassari (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Cagliari (Italy); Oliva, Piernicola; Carpinelli, Massimo [Dipartimento di Chimica e Farmacia, Università di Sassari, 07100 Sassari (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Cagliari (Italy); Luca Masala, Giovanni [Dipartimento di Chimica e Farmacia, Università di Sassari, 07100 Sassari (Italy); Meloni, Francesco [Unità operativa di Diagnostica per immagini Asl n. 1, Ospedale Civile SS Annunziata, 07100 Sassari (Italy); Battista Meloni, Giovanni [Istituto di Scienze Radiologiche, Università di Sassari, 07100 Sassari (Italy)

    2013-08-14

    Digital Subtraction Angiography is commonly affected by artifacts due to patient movements during the acquisition of the images without and with the contrast medium. This paper presents a preliminary study of an angiographic and angio-tomographic technique based on the quasi-simultaneous acquisition of two images, obtained using two different filters at the exit of an X-ray tube. One of the two filters (the K-edge filter) contains the same chemical element used as the contrast agent (gadolinium in this study). This filter absorbs more radiation with energy just above the so-called K-edge energy of gadolinium than radiation with energy just below it. The other filter (an aluminium filter in this study) is simply used to suppress the low-energy contribution to the spectrum. Using proper calibration curves, the two images are combined to obtain an image of the contrast agent distribution. In the angio-tomographic application of the proposed technique, two images, corresponding to the two filter types, are acquired for each viewing angle of the tomographic scan. From the two tomographic reconstructions, it is possible to obtain a three-dimensional map of the contrast agent distribution. The technique was tested on a sample consisting of a rat skull placed inside a container filled with water. Six small cylinders with 4.7 mm internal diameter containing the contrast medium at different concentrations were placed inside the skull. In the plain angiographic application of the technique, five out of six cylinders were visible, with gadolinium concentration down to 0.96%. In the angio-tomographic application, all six cylinders were visible, with gadolinium concentration down to 0.49%. This preliminary study shows that the proposed technique can provide images of the contrast medium at low concentration without most of the artifacts that are present in images produced by conventional techniques. The results encourage further investigation on the feasibility of a clinical
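Combining the two filtered images amounts to a two-energy material decomposition. A minimal sketch under the Beer-Lambert model follows; the attenuation coefficients are rounded illustrative values for gadolinium and water near the 50.2 keV Gd K-edge, not the calibration curves used in the study.

```python
import numpy as np

# Illustrative mass-attenuation coefficients (cm^2/g) at effective energies just
# below and just above the Gd K-edge; rounded demo values, not from the paper.
MU = np.array([[14.0, 0.22],     # row: below K-edge; columns: [Gd, water]
               [60.0, 0.21]])    # row: above K-edge

def decompose(i_low, i_high, i0_low, i0_high):
    """Solve Beer-Lambert for Gd and water mass thicknesses from two filtered images."""
    # -ln(I/I0) = MU @ [t_gd, t_water] per pixel; invert the 2x2 system.
    a = np.stack([-np.log(i_low / i0_low), -np.log(i_high / i0_high)], axis=-1)
    return a @ np.linalg.inv(MU).T   # (..., 2): [t_gd, t_water] per pixel

# Forward-simulate one pixel: 0.05 g/cm^2 of Gd inside 10 g/cm^2 of water.
t = np.array([0.05, 10.0])
i_low, i_high = np.exp(-MU @ t)
t_gd, t_water = decompose(i_low, i_high, 1.0, 1.0)
```

Because the Gd coefficient jumps sharply across the K-edge while the water coefficient barely changes, the 2x2 system is well conditioned and the gadolinium map is nearly insensitive to the background material.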

  1. Investigation of different types of filters for atmospheric trace elements analysis by three analytical techniques

    International Nuclear Information System (INIS)

    Ali, A.E.; Bacso, J.

    1996-01-01

    Different atmospheric aerosol samples were collected on three types of filters. Disks from both loaded and clean areas of each kind of filter were investigated by XRF, PIXE and Scanning Electron Microscopy (SEM) methods. The blank concentration values of the elements Al, Si, P, S, Cl, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Br and Pb in the three types of filters are discussed. It is found that for trace elemental analysis, Nuclepore membrane filters are the most suitable for sampling, as they have much lower blank element concentration values than the glass fibre and ash-free filters. It was also found that the PIXE method is a more reliable analytical technique for atmospheric aerosol particles than the other methods. (author). 20 refs., 3 figs., 3 tabs

  2. Applying Parallel Processing Techniques to Tether Dynamics Simulation

    Science.gov (United States)

    Wells, B. Earl

    1996-01-01

    The focus of this research has been to determine the effectiveness of applying parallel processing techniques to a sizable real-world problem, the simulation of the dynamics associated with a tether which connects two objects in low earth orbit, and to explore the degree to which the parallelization process can be automated through the creation of new software tools. The goal has been to utilize this specific application problem as a base to develop more generally applicable techniques.

  3. Active filtering applied to radiographic images unfolded by the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Almeida, Gevaldo L. de; Silvani, Maria Ines; Lopes, Ricardo T.

    2011-01-01

    Degradation of images caused by systematic uncertainties can be reduced when one knows the features of the spoiling agent. Typical uncertainties of this kind arise in radiographic images due to the non-zero resolution of the detector used to acquire them, from the non-punctual character of the source employed in the acquisition, or from beam divergence when extended sources are used. Both features blur the image, which, instead of a single point, exhibits a spot with a vanishing edge, thus reproducing the point spread function (PSF) of the system. Once this spoiling function is known, an inverse problem approach, involving inversion of matrices, can be used to retrieve the original image. As these matrices are generally ill-conditioned due to statistical fluctuations and truncation errors, iterative procedures should be applied, such as the Richardson-Lucy algorithm. This algorithm has been applied in this work to unfold radiographic images acquired by transmission of thermal neutrons and gamma rays. After this procedure, the resulting images undergo an active filtering which noticeably improves their final quality at a negligible cost in processing time. The filter ruling the process is based on the matrix of correction factors from the last iteration of the deconvolution procedure. Synthetic images degraded with a known PSF and subjected to the same treatment have been used as a benchmark to evaluate the soundness of the developed active filtering procedure. The deconvolution and filtering algorithms have been incorporated into a Fortran program written to deal with real images, generate synthetic ones and display both. (author)
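The Richardson-Lucy iteration itself is short. A minimal 1-D sketch follows (a toy two-spike signal and PSF, not the neutron or gamma-ray radiographs of the paper); each iteration multiplies the current estimate by the PSF-correlated ratio of the observed data to the re-blurred estimate.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Richardson-Lucy deconvolution of a 1-D signal blurred by a known PSF."""
    obs = np.asarray(observed, float)
    psf = np.asarray(psf, float)
    psf = psf / psf.sum()                       # the PSF must be normalised
    psf_flip = psf[::-1]                        # correlation = convolution with flipped PSF
    estimate = np.full_like(obs, obs.mean())    # flat, positive starting guess
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = obs / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Blur a two-spike signal with a short smoothing PSF, then unfold it.
x = np.zeros(64)
x[20], x[40] = 5.0, 3.0
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
y = np.convolve(x, psf, mode="same")
restored = richardson_lucy(y, psf, iterations=200)
```

The multiplicative update keeps the estimate non-negative and approximately conserves total counts, which is why the method suits radiographic (counting) data; the active filter described above would then post-process the correction-factor matrix of the final iteration.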

  4. Evaluation of multichannel Wiener filters applied to fine resolution passive microwave images of first-year sea ice

    Science.gov (United States)

    Full, William E.; Eppler, Duane T.

    1993-01-01

    The effectiveness of multichannel Wiener filters in improving images obtained with passive microwave systems was investigated by applying them to passive microwave images of first-year sea ice. Four major parameters which define the filter were varied: the lag or pixel offset between the original and the desired scenes, the filter length, the number of lines in the filter, and the weight applied to the empirical correlation functions. The effect of each variable on image quality was assessed by visually comparing the results. It was found that the application of multichannel Wiener theory to passive microwave images of first-year sea ice resulted in visually sharper images with enhanced textural features and less high-frequency noise. However, Wiener filters induced a slight blocky grain in the image and could produce a type of ringing along scan lines traversing sharp intensity contrasts.
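For a single channel, the Wiener idea reduces to attenuating each frequency bin by its estimated signal-to-total power ratio. A minimal 1-D sketch follows; the signal, known noise variance and all parameters are invented demo values, not the multichannel image filters evaluated in the paper.

```python
import numpy as np

def wiener_denoise(signal, noise_var):
    """Frequency-domain Wiener filter: scale each bin by S/(S+N), S estimated from data."""
    spec = np.fft.rfft(signal)
    power = np.abs(spec) ** 2 / len(signal)        # per-bin power estimate
    s_power = np.maximum(power - noise_var, 0.0)   # estimated clean-signal power
    gain = s_power / np.maximum(s_power + noise_var, 1e-12)
    return np.fft.irfft(spec * gain, n=len(signal))

# Demo: recover a sinusoid buried in white Gaussian noise of known variance.
rng = np.random.default_rng(2)
t = np.arange(256)
clean = np.sin(2 * np.pi * t / 32)
noisy = clean + rng.normal(0.0, 0.5, t.size)
denoised = wiener_denoise(noisy, noise_var=0.25)
```

The multichannel form in the paper instead solves for spatial filter coefficients from empirical correlation functions, but the same signal-versus-noise weighting is what sharpens texture while suppressing high-frequency noise.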

  5. Cosine Modulated and Offset QAM Filter Bank Multicarrier Techniques: A Continuous-Time Prospect

    Directory of Open Access Journals (Sweden)

    Farhang-Boroujeny Behrouz

    2010-01-01

    Full Text Available Prior to the discovery of the celebrated orthogonal frequency division multiplexing (OFDM) scheme, multicarrier techniques that use analog filter banks were introduced in the 1960s. Moreover, advancements in the design of perfect reconstruction filter banks have led to a number of developments in the design of prototype digital filters and polyphase structures for efficient implementations of filter bank multicarrier (FBMC) systems. The main thrust of this paper is to present a tutorial review of the classical works on FBMC systems and show that some of the more recent developments are, in fact, reinventions of multicarrier techniques that were developed prior to the era of OFDM. We also review recent novel developments in the design of FBMC systems that are tuned to cope with fast-fading wireless channels.

  7. Theoretical analysis of highly linear tunable filters using Switched-Resistor techniques

    NARCIS (Netherlands)

    Jiraseree-amornkun, Amorn; Jiraseree-Amornkun, A.; Worapishet, Apisak; Klumperink, Eric A.M.; Nauta, Bram; Surakampontorn, Wanlop

    2008-01-01

    In this paper, an in-depth analysis of switched-resistor (S-R) techniques for implementing low-voltage low-distortion tunable active-RC filters is presented. The S-R techniques make use of switch(es) with duty-cycle-controlled clock(s) to achieve tunability of the effective resistance and,

  8. Applying the digital-image-correlation technique to measure the ...

    Indian Academy of Sciences (India)

    It has been applied for analysing various structural problems. For example, French scholars Raffard et ... observe the crack development in masonry wall. One major advantage of DIC technique ... based on the characteristic gray-scale distributions in the image of the structural speckle on the specimen surface. As shown in ...

  9. Applying DEA Technique to Library Evaluation in Academic Research Libraries.

    Science.gov (United States)

    Shim, Wonsik

    2003-01-01

    This study applied an analytical technique called Data Envelopment Analysis (DEA) to calculate the relative technical efficiency of 95 academic research libraries, all members of the Association of Research Libraries. DEA, with the proper model of library inputs and outputs, can reveal best practices in the peer groups, as well as the technical…

  10. Fuzzy Control Technique Applied to Modified Mathematical Model ...

    African Journals Online (AJOL)

    In this paper, fuzzy control technique is applied to the modified mathematical model for malaria control presented by the authors in an earlier study. Five Mamdani fuzzy controllers are constructed to control the input (some epidemiological parameters) to the malaria model simulated by 9 fully nonlinear ordinary differential ...

  11. Adaptive clutter rejection filters for airborne Doppler weather radar applied to the detection of low altitude windshear

    Science.gov (United States)

    Keel, Byron M.

    1989-01-01

    An optimum adaptive clutter rejection filter for use with airborne Doppler weather radar is presented. The radar system is being designed to operate at low altitudes for the detection of windshear in an airport terminal area, where ground clutter returns may mask the weather return. The coefficients of the adaptive clutter rejection filter are obtained using a complex form of a square-root-normalized recursive least squares lattice estimation algorithm, which models the clutter return data as an autoregressive process. The normalized lattice structure implementation of the adaptive modeling process for determining the filter coefficients assures that the resulting coefficients will yield a stable filter, and offers possible fixed-point implementation. A 10th-order FIR clutter rejection filter indexed by geographical location is designed through autoregressive modeling of simulated clutter data. Filtered data, containing simulated dry microburst and clutter returns, are analyzed using pulse-pair estimation techniques. To measure the ability of the clutter rejection filters to remove the clutter, results are compared to pulse-pair estimates of windspeed within a simulated dry microburst without clutter. In the filter evaluation process, post-filtered pulse-pair width estimates and power levels are also used to measure the effectiveness of the filters. The results support the use of an adaptive clutter rejection filter for reducing the clutter-induced bias in pulse-pair estimates of windspeed.
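    The pulse-pair estimator used above to score the filters is itself compact: the phase of the lag-1 autocorrelation of the complex (I/Q) return gives the mean Doppler velocity. A minimal sketch, where the sign convention and the synthetic single-scatterer return are illustrative assumptions rather than the paper's simulation setup:

```python
import cmath, math

def pulse_pair_velocity(iq, pri, wavelength):
    """Pulse-pair mean-velocity estimate from complex (I/Q) radar samples:
    v = -wavelength * angle(R1) / (4 * pi * pri), with R1 the lag-1 autocorrelation."""
    r1 = sum(iq[n + 1] * iq[n].conjugate() for n in range(len(iq) - 1))
    return -wavelength * cmath.phase(r1) / (4 * math.pi * pri)

# Synthetic return from a scatterer approaching at 10 m/s (v_true = -10, positive = receding)
wavelength, pri, v_true = 0.05, 1e-3, -10.0
f_d = -2 * v_true / wavelength                       # Doppler frequency, Hz
iq = [cmath.exp(2j * math.pi * f_d * n * pri) for n in range(64)]
v_est = pulse_pair_velocity(iq, pri, wavelength)
```

    Residual clutter biases the lag-1 phase towards zero velocity, which is why clutter rejection ahead of this estimator matters.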

  12. Adaptive Local Iterative Filtering: A Promising Technique for the Analysis of Nonstationary Signals

    Science.gov (United States)

    Piersanti, M.; Materassi, M.; Cicone, A.; Spogli, L.; Zhou, H.; Ezquer, R. G.

    2018-01-01

    Many real-life signals and, in particular, signals in the space physics domain exhibit variations across different temporal scales. Hence, their statistical momenta may depend on the time scale at which the signal is studied. To identify and quantify such variations, a time-frequency analysis has to be performed on these signals. The dependence of the statistical properties of a signal's fluctuations on the space and time scales is the distinctive character of systems with nonlinear couplings among different modes. Hence, assessing how the statistics of signal fluctuations vary with scale will help in understanding the corresponding multiscale statistics of such dynamics. This paper presents a new multiscale data analysis technique, adaptive local iterative filtering (ALIF), which describes the multiscale nature of the geophysical signal studied better than the Fourier transform and improves scale resolution with respect to the discrete wavelet transform. The example geophysical signal to which ALIF has been applied is ionospheric radio power scintillation in the L band. ALIF appears to be a promising technique for studying the small-scale structures of radio scintillation due to ionospheric turbulence.
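    The core move of iterative filtering, repeatedly subtracting a local moving average until the fast component has near-zero local mean, can be sketched as follows. A fixed window and a fixed iteration count are simplifying assumptions here; ALIF chooses the filter length adaptively from the signal itself:

```python
import math

def moving_average(x, half_window):
    """Local mean with a centered window (truncated at the edges)."""
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def iterative_filtering_step(x, half_window, n_iter=30):
    """One decomposition step in the spirit of (A)LIF: keep subtracting the
    local moving average so the fast component ends with near-zero local mean.
    Returns (fast_component, residual_trend)."""
    fast = list(x)
    for _ in range(n_iter):
        fast = [f - m for f, m in zip(fast, moving_average(fast, half_window))]
    trend = [xi - fi for xi, fi in zip(x, fast)]
    return fast, trend

# A fast oscillation on top of a slow drift: one step separates the two scales
x = [math.sin(2 * math.pi * i / 8) + 0.05 * i for i in range(64)]
fast, trend = iterative_filtering_step(x, half_window=4)
```

    By construction the extracted component and the residual sum back to the original signal, which is what makes the decomposition usable for scale-by-scale statistics.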

  13. Simulation model of harmonics reduction technique using shunt active filter by cascade multilevel inverter method

    Science.gov (United States)

    Andreh, Angga Muhamad; Subiyanto, Sunardiyo, Said

    2017-01-01

    With the growth of non-linear loads in industrial applications and distribution systems, harmonic compensation becomes important. Harmonic pollution is an urgent problem in improving power quality. The main contribution of the study is the modeling approach used to design a shunt active filter and the application of the cascaded multilevel inverter topology to improve the power quality of electrical energy. In this study, the shunt active filter was aimed at eliminating the dominant harmonic components by injecting currents opposite to the harmonic components of the system. The active filter was designed in a shunt configuration with the cascaded multilevel inverter method, controlled by a PID controller and SPWM. With this shunt active filter, the harmonic current can be reduced so that the current waveform of the source is approximately sinusoidal. Design and simulation were conducted using Power Simulator (PSIM) software. The shunt active filter performance experiment was conducted on the IEEE four-bus test system. Installing the shunt active filter on the system (IEEE four bus) reduced the current THD from 28.68% to 3.09%. With this result, the active filter can be applied as an effective method to reduce harmonics.
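    The THD figures quoted above follow from the standard definition: the ratio of the root-sum-square of the harmonic amplitudes to the fundamental amplitude. A minimal DFT-based sketch:

```python
import math, cmath

def thd(samples, fundamental_bin, n_harmonics=10):
    """Total harmonic distortion of a sampled waveform:
    sqrt(sum of squared harmonic amplitudes) / fundamental amplitude."""
    n = len(samples)
    def amp(k):
        # single-bin DFT magnitude, scaled to the sinusoid amplitude
        return abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))) * 2 / n
    fund = amp(fundamental_bin)
    harm = [amp(fundamental_bin * h) for h in range(2, n_harmonics + 2)
            if fundamental_bin * h < n // 2]
    return math.sqrt(sum(a * a for a in harm)) / fund

# Fundamental of amplitude 1.0 plus a 5th harmonic of amplitude 0.2 -> THD = 20%
n = 256
x = [math.sin(2 * math.pi * t / n) + 0.2 * math.sin(2 * math.pi * 5 * t / n)
     for t in range(n)]
distortion = thd(x, fundamental_bin=1)
```

    An active filter that cancels the injected 5th harmonic would drive this figure towards zero, which is exactly the before/after comparison reported in the abstract.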

  14. On fractional filtering versus conventional filtering in economics

    Science.gov (United States)

    Nigmatullin, Raoul R.; Omay, Tolga; Baleanu, Dumitru

    2010-04-01

    In this study, we compare the Hodrick-Prescott Filter technique with the Fractional filtering technique that has recently started to be used in various applied sciences like physics, engineering, and biology. We apply these filtering techniques to quarterly GDP data from Turkey for the period 1988:1-2003:2. The filtered series are analyzed using Minimum Square Error (MSE) and real life evidence. In the second part of the study, we use simulated data to analyze the statistical properties of the aforementioned filtering techniques.
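    The Hodrick-Prescott trend referred to above solves a penalized least-squares problem, min Σ(y_t − τ_t)² + λ Σ(Δ²τ_t)², whose first-order conditions give the linear system (I + λ DᵀD)τ = y with D the second-difference operator. A minimal dense-solver sketch, using the conventional quarterly λ = 1600:

```python
def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott trend: solve (I + lam * D'D) tau = y, where D is the
    second-difference operator. Plain dense Gaussian elimination (fine for small n)."""
    n = len(y)
    A = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
    for r in range(n - 2):            # row r of D: 1, -2, 1 at columns r, r+1, r+2
        d = [0.0] * n
        d[r], d[r + 1], d[r + 2] = 1.0, -2.0, 1.0
        for i in range(r, r + 3):
            for j in range(r, r + 3):
                A[i][j] += lam * d[i] * d[j]
    b = list(map(float, y))
    for col in range(n):              # elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for j in range(col, n):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    tau = [0.0] * n
    for i in range(n - 1, -1, -1):    # back substitution
        tau[i] = (b[i] - sum(A[i][j] * tau[j] for j in range(i + 1, n))) / A[i][i]
    return tau

# A purely linear series has zero second differences, so the HP trend reproduces it exactly
y = [2.0 * t + 1.0 for t in range(20)]
trend = hp_filter(y)
```

    The cyclical component analyzed in such studies is then simply the residual y − τ.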

  15. Neutron tomography of particulate filters: a non-destructive investigation tool for applied and industrial research

    Science.gov (United States)

    Toops, Todd J.; Bilheux, Hassina Z.; Voisin, Sophie; Gregor, Jens; Walker, Lakeisha; Strzelec, Andrea; Finney, Charles E. A.; Pihl, Josh A.

    2013-11-01

    This research describes the development and implementation of high-fidelity neutron imaging and the associated analysis of the images. This advanced capability allows non-destructive, non-invasive imaging of particulate filters (PFs) and of how the deposition of particulate and catalytic washcoat occurs within the filter. The majority of the efforts described here were performed at the High Flux Isotope Reactor (HFIR) CG-1D neutron imaging beamline at Oak Ridge National Laboratory; the current spatial resolution is approximately 50 μm. The sample holder is equipped with a high-precision rotation stage that allows 3D imaging (i.e., computed tomography) of the sample when combined with computerized reconstruction tools. What enables neutron-based imaging is the ability of some elements to absorb or scatter neutrons while other elements let neutrons pass through with negligible interaction. Of particular interest in this study is the scattering of neutrons by hydrogen-containing molecules, such as hydrocarbons (HCs) and/or water, which are adsorbed on the surface of soot, ash and catalytic washcoat. Even so, the interactions with this adsorbed water/HC are low, and computational techniques were required to enhance the contrast, primarily a modified simultaneous iterative reconstruction technique (SIRT). This effort describes the following systems: particulate randomly distributed in a PF, ash deposition in PFs, a catalyzed washcoat layer in a PF, and three particulate loadings in a SiC PF.
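    The SIRT reconstruction mentioned above iterates x ← x + C Aᵀ R (b − A x), where R and C hold the reciprocal row and column sums of the projection matrix A. A minimal sketch on a toy two-pixel "filter" (the geometry is a hypothetical stand-in, not the HFIR data):

```python
def sirt(A, b, n_iter=200):
    """Simultaneous iterative reconstruction technique (SIRT):
    x <- x + C A^T R (b - A x), with R, C the inverse row/column sums of A."""
    rows, cols = len(A), len(A[0])
    R = [1.0 / sum(A[i]) for i in range(rows)]
    C = [1.0 / sum(A[i][j] for i in range(rows)) for j in range(cols)]
    x = [0.0] * cols
    for _ in range(n_iter):
        resid = [R[i] * (b[i] - sum(A[i][j] * x[j] for j in range(cols)))
                 for i in range(rows)]
        for j in range(cols):
            x[j] += C[j] * sum(A[i][j] * resid[i] for i in range(rows))
    return x

# Two pixels crossed by three overlapping "rays"; true attenuations are (1, 2)
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = sirt(A, b)
```

    Because every ray updates every pixel it crosses on each sweep, SIRT averages out measurement noise better than row-by-row schemes, which is why a modified variant suits the low-contrast adsorbed-water signal described here.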

  16. Comparison between 3D dynamics filter technique, field-in-field, electronic compensator in breast cancer

    International Nuclear Information System (INIS)

    Trindade, Cassia; Silva, Leonardo P.; Martins, Lais P.; Garcia, Paulo L.; Santos, Maira R.; Bastista, Delano V.S.; Vieira, Anna Myrian M.T.L.; Rocha, Igor M.

    2012-01-01

    Radiotherapy has been used on a wide scale in breast cancer treatment. With this high demand, new technologies have been developed to improve the dose distribution in the target while reducing the dose delivered to critical organs. In this study, performed on one clinical case, three plans were created for comparison: a 3D technique with dynamic filter, a 3D field-in-field technique (forward-planned IMRT) and a 3D technique using an electronic compensator (ECOMP). The plans were made with a 6 MV photon beam using the Eclipse software, version 8.6 (Varian Medical Systems). The PTV was drawn covering the whole breast, and the critical organs were: the lung on the irradiated side, the heart, the contralateral breast and the anterior descending coronary artery (LAD). The plan using the compensator technique permitted a more homogeneous dose distribution in the target volume. The V20 value of the lung on the irradiated side was 8.3% for the electronic compensator technique, 8.9% for the field-in-field technique and 8.2% for the dynamic filter technique. For the heart, the dose range was 15.7-139.9 cGy for the electronic compensator technique, 16.3-148.4 cGy for the dynamic filter technique and 19.6-157.0 cGy for the field-in-field technique. The dose gradient was 11% with the electronic compensator, 15% with the dynamic filter technique and 13% with field-in-field. The application of the electronic compensator technique in breast cancer treatment allows a better dose distribution while reducing dose to critical organs, but at the same time requires quality assurance. (author)

  17. A NOVEL TECHNIQUE APPLYING SPECTRAL ESTIMATION TO JOHNSON NOISE THERMOMETRY

    Energy Technology Data Exchange (ETDEWEB)

    Ezell, N Dianne Bull [ORNL; Britton Jr, Charles L [ORNL; Roberts, Michael [ORNL; Holcomb, David Eugene [ORNL; Ericson, Milton Nance [ORNL; Djouadi, Seddik M [ORNL; Wood, Richard Thomas [ORNL

    2017-01-01

    Johnson noise thermometry (JNT) is one of many important measurements used to monitor the safety levels and stability in a nuclear reactor. However, this measurement is very dependent on the electromagnetic environment. Properly removing unwanted electromagnetic interference (EMI) is critical for accurate, drift-free temperature measurements. The two techniques developed by Oak Ridge National Laboratory (ORNL) to remove transient and periodic EMI are briefly discussed in this document. Spectral estimation is a key component in the signal processing algorithm utilized for EMI removal and temperature calculation. Applying these techniques requires the simple addition of the electronics and signal processing to existing resistive thermometers.
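    The temperature calculation in JNT rests on Nyquist's relation ⟨V²⟩ = 4 k_B T R B. A minimal sketch that recovers temperature from simulated Johnson noise; the resistance, bandwidth, and white-Gaussian noise model are illustrative assumptions, and no EMI removal is modeled:

```python
import math, random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def jnt_temperature(v_samples, resistance, bandwidth):
    """Infer absolute temperature from the mean-square Johnson noise voltage:
    <V^2> = 4 k_B T R B (Nyquist), so T = <V^2> / (4 k_B R B)."""
    mean_sq = sum(v * v for v in v_samples) / len(v_samples)
    return mean_sq / (4 * K_B * resistance * bandwidth)

# Simulated sensor: 300 K across 1 kOhm in a 100 kHz bandwidth -> v_rms ~ 1.3 uV
random.seed(42)
T_true, R, B = 300.0, 1e3, 1e5
v_rms = math.sqrt(4 * K_B * T_true * R * B)
samples = [random.gauss(0.0, v_rms) for _ in range(20000)]
T_est = jnt_temperature(samples, R, B)
```

    Since the estimate is first-principles (no calibration drift), any unremoved EMI adds directly to ⟨V²⟩ and biases T upward, which is why the spectral-estimation-based EMI removal is central to the method.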

  18. The digital geometric phase technique applied to the deformation evaluation of MEMS devices

    International Nuclear Information System (INIS)

    Liu, Z W; Xie, H M; Gu, C Z; Meng, Y G

    2009-01-01

    Quantitative evaluation of the structural deformation of microfabricated electromechanical systems is of importance for the design and functional control of microsystems. In this investigation, a novel digital geometric phase technique was developed to meet the deformation evaluation requirements of microelectromechanical systems (MEMS). The technique is performed on the basis of regular artificial lattices instead of a natural atomic lattice. Regular artificial lattices with a pitch ranging from micrometers to nanometers are directly fabricated on the measured surface of MEMS devices using a focused ion beam (FIB). Phase information can be obtained from the Bragg-filtered images after fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT) of the scanning electron microscope (SEM) images. Then the in-plane displacement field and the local strain field related to the phase information can be evaluated. The obtained results show that the technique can be applied to deformation measurement with nanometer sensitivity and to stiction force estimation of a MEMS device.

  19. A novel consistent photomicrography technique using a reference slide made of neutral density filter.

    Science.gov (United States)

    Kim, Jong Yeob; Kim, Ji Woong; Seo, Soo Hong; Kye, Young Chul; Ahn, Hyo Hyun

    2011-05-01

    Obtaining consistent photomicrographic images of pathology slides is not always easy because of the many different types and settings of equipment such as microscopes and digital cameras. In this study, we developed a photomicrography technique that can acquire consistent images of pathology slides. A neutral density (ND) filter was attached to a transparent glass slide as a reference slide and photographed using consistent settings, yielding images that contained areas of gray, white, and black. In the same way, the reference slide was replaced by the actual pathology slide, which was then photomicrographed. To simulate different light environments, the above photographic session was repeated using two different light intensities and microscopes. A graphics program was used to adjust the levels of the reference slide images, and this leveling, or calibration, was saved as a file for each. The leveling file was then applied to the subsequent actual photomicrographic images. The same sites of non-calibrated and calibrated images of the pathology slide were converted into CIELAB (CIE L*a*b*) coordinates, and the color differences (ΔE*ab) were calculated. In the study using a 50% transmittance ND filter, the two originally different images were made nearly identical to the unaided eye, especially with two-point (white and gray) and three-point (black, white, and gray) leveling. In the comparison of different light intensities, the ΔE*ab of the selected area was 0.9 with two-point leveling. Between different microscopes, 10.7 was the smallest ΔE*ab value, obtained with three-point leveling. This method should be helpful for acquiring consistent photomicrographic images of pathology slides. Copyright © 2010 Wiley-Liss, Inc.
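    The ΔE*ab values reported above are the CIE76 color difference, i.e. the Euclidean distance between two points in CIELAB space; a minimal sketch:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two CIELAB triplets (L*, a*, b*)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# A 3-4-5 triangle in the a*b* plane gives dE = 5 at equal lightness
d = delta_e_ab((50.0, 0.0, 0.0), (50.0, 3.0, 4.0))
```

    A ΔE*ab around 1 (as in the two-point leveling result) is near the threshold of visual distinguishability, while values above 10 are plainly visible differences.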

  20. A technique using high-flow, dichotomous filter packs for measuring major atmospheric chemical constituents

    Science.gov (United States)

    Bardwell, C. A.; Maben, J. R.; Hurt, J. A.; Keene, W. C.; Galloway, J. N.; Boatman, J. F.; Wellman, D. L.

    1990-06-01

    We developed a high-resolution technique to measure major reactive trace gases and the chemical composition of size-segregated aerosols in the troposphere as part of the 1988 Global Change Expedition/Coordinated Air-Sea Experiment/Western Atlantic Ocean Experiment. We sampled air over the western North Atlantic Ocean from the NOAA King Air research aircraft and NOAA ship Mt. Mitchell during July. Our system used filter packs containing an upstream, 90-mm quartz filter to collect particles followed by two 90-mm rayon filters impregnated with 10% K2CO3-10% glycerol to collect alkaline reactive gases. Paired filter packs were exposed when the aircraft sampled the boundary layer. An upstream cyclone with a 50% aerodynamic cut radius of approximately 0.4 μm removed large particles from one of the filter-pack inlets. Air was sampled at an average rate of 0.12 m3 STP min-1 for the fine filter packs and 0.26 m3 STP min-1 for the total over intervals of 45 min to 90 min. Particulate-phase concentrations of major anions (SO42-, CH3SO3-, NO3-, Cl-) and organic species (HCOOt [HCOO- and HCOOH] and CH3COOt [CH3COO- and CH3COOH]) were measured by gradient elution ion chromatography; base cations (Ca2+, Mg2+, Na+, K+) by atomic-absorption spectroscopy; NH4+ by automated colorimetry; and H+ by glass electrode. We quantified SO2, HNO3, and HCl using two isocratic ion chromatography methods. This technique provided higher signal-to-noise ratios allowing increased temporal and spatial resolution, pH determination of particulate-phase filter extracts, and measurement of HCl on gas-phase filters.

  1. Applying of USB interface technique in nuclear spectrum acquisition system

    International Nuclear Information System (INIS)

    Zhou Jianbin; Huang Jinhua

    2004-01-01

    This paper introduces the application of USB techniques in constructing a nuclear spectrum acquisition system via the PC's USB interface. The authors chose the USB100 module and the W77E58 μc to do the key work. It is easy to apply the USB interface technique when the USB100 module is used. The USB100 module can be treated as a common I/O component by the μc controller, and as a communication interface (COM) when connected to the PC's USB interface. It is also easy to modify the PC's program for the new system with the USB100 module, allowing a smooth change from the ISA and RS232 buses to the USB bus. (authors)

  2. Applying traditional signal processing techniques to social media exploitation for situational understanding

    Science.gov (United States)

    Abdelzaher, Tarek; Roy, Heather; Wang, Shiguang; Giridhar, Prasanna; Al Amin, Md. Tanvir; Bowman, Elizabeth K.; Kolodny, Michael A.

    2016-05-01

    Signal processing techniques such as filtering, detection, estimation and frequency domain analysis have long been applied to extract information from noisy sensor data. This paper describes the exploitation of these signal processing techniques to extract information from social networks, such as Twitter and Instagram. Specifically, we view social networks as noisy sensors that report events in the physical world. We then present a data processing stack for detection, localization, tracking, and veracity analysis of reported events using social network data. We show using a controlled experiment that the behavior of social sources as information relays varies dramatically depending on context. In benign contexts, there is general agreement on events, whereas in conflict scenarios, a significant amount of collective filtering is introduced by conflicted groups, creating a large data distortion. We describe signal processing techniques that mitigate such distortion, resulting in meaningful approximations of actual ground truth, given noisy reported observations. Finally, we briefly present an implementation of the aforementioned social network data processing stack in a sensor network analysis toolkit, called Apollo. Experiences with Apollo show that our techniques are successful at identifying and tracking credible events in the physical world.

  3. Integration of adaptive guided filtering, deep feature learning, and edge-detection techniques for hyperspectral image classification

    Science.gov (United States)

    Wan, Xiaoqing; Zhao, Chunhui; Gao, Bing

    2017-11-01

    The integration of an edge-preserving filtering technique in the classification of a hyperspectral image (HSI) has been proven effective in enhancing classification performance. This paper proposes an ensemble strategy for HSI classification using an edge-preserving filter along with a deep learning model and edge detection. First, an adaptive guided filter is applied to the original HSI to reduce the noise in degraded images and to extract powerful spectral-spatial features. Second, the extracted features are fed as input to a stacked sparse autoencoder to adaptively exploit more invariant and deep feature representations; then, a random forest classifier is applied to fine-tune the entire pretrained network and determine the classification output. Third, a Prewitt compass operator is further performed on the HSI to extract the edges of the first principal component after dimension reduction. Moreover, a region-growing rule is applied to the resulting edge logical image to determine the local region for each unlabeled pixel. Finally, the categories of the corresponding neighborhood samples are determined in the original classification map; then, a majority voting mechanism is implemented to generate the final output. Extensive experiments showed that the proposed method achieves competitive performance compared with several traditional approaches.

  4. Technique applied in electrical power distribution for Satellite Launch Vehicle

    Directory of Open Access Journals (Sweden)

    João Maurício Rosário

    2010-09-01

    Full Text Available The Satellite Launch Vehicle electrical network, which is currently being developed in Brazil, is sub-divided for analysis into the following parts: Service Electrical Network, Controlling Electrical Network, Safety Electrical Network and Telemetry Electrical Network. During the pre-launching and launching phases, these electrical networks are associated electrically and mechanically with the structure of the vehicle. In order to succeed in the integration of these electrical networks, it is necessary to employ techniques of electrical power distribution that are appropriate to Launch Vehicle systems. This work presents the most important techniques to be considered in the characterization of the electrical power supply applied to Launch Vehicle systems. Such techniques are primarily designed to allow the electrical networks, when subjected to a single-phase fault to ground, to keep supplying power to the loads.

  5. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo (MCMC) techniques with N samples behaves like 1/√(N). This scaling often makes it very time-intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
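    The recursive idea, applying a one-dimensional Gaussian quadrature rule dimension by dimension, can be sketched for a generic smooth integrand. The separable toy integral below is an illustrative assumption, not the topological rotor of the paper:

```python
import numpy as np

def recursive_gauss(f, dims, m):
    """Recursive numerical integration over [0,1]^dims: apply an m-point
    Gauss-Legendre rule one dimension at a time."""
    nodes, weights = np.polynomial.legendre.leggauss(m)
    nodes = 0.5 * (nodes + 1.0)      # map [-1, 1] -> [0, 1]
    weights = 0.5 * weights
    def integrate(prefix):
        if len(prefix) == dims:
            return f(prefix)
        return sum(w * integrate(prefix + [x]) for x, w in zip(nodes, weights))
    return integrate([])

# Smooth test integrand: integral over [0,1]^3 of exp(x+y+z) = (e - 1)^3
approx = recursive_gauss(lambda p: np.exp(sum(p)), dims=3, m=8)
exact = (np.e - 1.0) ** 3
```

    For smooth integrands the Gauss-rule error drops roughly exponentially in the number of points m per dimension, which is the scaling advantage over the 1/√(N) of MCMC; the cost m^dims, however, is why the paper's recursive factorization over the lattice structure matters in higher dimensions.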

  7. Applying Enhancement Filters in the Pre-processing of Images of Lymphoma

    International Nuclear Information System (INIS)

    Silva, Sérgio Henrique; Do Nascimento, Marcelo Zanchetta; Neves, Leandro Alves; Batista, Valério Ramos

    2015-01-01

    Lymphoma is a type of cancer that affects the immune system and is classified as Hodgkin or non-Hodgkin. It is one of the ten most common types of cancer worldwide: among all malignant neoplasms diagnosed in the world, lymphoma accounts for three to four percent. Our work presents a study of filters devoted to enhancing images of lymphoma at the pre-processing step, where enhancement is useful for removing noise from the digital images. We analysed the noise caused by different sources, such as room vibration, scraps and defocusing, in the following classes of lymphoma: follicular, mantle cell and B-cell chronic lymphocytic leukemia. The Gaussian, Median and Mean-Shift filters were applied in different colour models (RGB, Lab and HSV). Afterwards, we performed a quantitative analysis of the images by means of the Structural Similarity Index, in order to evaluate the similarity between the images. In all cases we obtained a certainty of at least 75%, which rises to 99% if one considers only HSV. Namely, we conclude that HSV is an important choice of colour model when pre-processing histological images of lymphoma, because in this case the resulting image gets the best enhancement

  8. Three dimensional indoor positioning based on visible light with Gaussian mixture sigma-point particle filter technique

    Science.gov (United States)

    Gu, Wenjun; Zhang, Weizhi; Wang, Jin; Amini Kashani, M. R.; Kavehrad, Mohsen

    2015-01-01

    Over the past decade, location based services (LBS) have found wide application in indoor environments, such as large shopping malls, hospitals, warehouses, airports, etc. Current technologies provide a wide choice of available solutions, which include Radio-frequency identification (RFID), Ultra wideband (UWB), wireless local area network (WLAN) and Bluetooth. With the rapid development of light-emitting-diode (LED) technology, visible light communications (VLC) also bring a practical approach to LBS. As visible light has better immunity against multipath effects than radio waves, higher positioning accuracy is achieved. LEDs are utilized both for illumination and for positioning purposes to realize relatively lower infrastructure cost. In this paper, an indoor positioning system using VLC is proposed, with LEDs as transmitters and photodiodes as receivers. The estimation algorithm is based on received-signal-strength (RSS) information collected from the photodiodes and the trilateration technique. By appropriately making use of the characteristics of receiver movements and the properties of trilateration, estimation of three-dimensional (3-D) coordinates is attained. A filtering technique is applied to enable the tracking capability of the algorithm, and higher accuracy is reached compared to raw estimates. A Gaussian mixture Sigma-point particle filter (GM-SPPF) is proposed for this 3-D system, which introduces the notion of a Gaussian Mixture Model (GMM). The number of particles in the filter is reduced by approximating the probability distribution with Gaussian components.
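    The trilateration step can be sketched in two dimensions: subtracting the first range equation from the others cancels the quadratic terms and leaves a small linear system. The ceiling-anchor layout and noise-free ranges below are illustrative assumptions, and the paper's system works in 3-D with RSS-derived ranges:

```python
import math

def trilaterate_2d(anchors, dists):
    """2-D trilateration: subtracting the first range equation from the others
    linearizes the circle intersections into a 2x2 linear system."""
    (x1, y1), d1 = anchors[0], dists[0]
    a, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        a.append((2 * (xi - x1), 2 * (yi - y1)))
        b.append(d1 ** 2 - di ** 2 + xi ** 2 - x1 ** 2 + yi ** 2 - y1 ** 2)
    (a11, a12), (a21, a22) = a          # solve the 2x2 system by Cramer's rule
    det = a11 * a22 - a12 * a21
    x = (b[0] * a22 - a12 * b[1]) / det
    y = (a11 * b[1] - b[0] * a21) / det
    return x, y

# Three LED "anchors" and exact ranges to the unknown receiver at (2, 1)
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 4.0)]
dists = [math.dist(p, (2.0, 1.0)) for p in anchors]
pos = trilaterate_2d(anchors, dists)
```

    With noisy RSS-derived ranges the raw solution jitters, which is what the particle filter stage smooths out over a receiver trajectory.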

  9. A MECHANISTIC MODEL FOR PARTICLE DEPOSITION IN DIESEL PARTICULATE FILTERS USING THE LATTICE-BOLTZMANN TECHNIQUE

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Mark L.; Rector, David R.; Muntean, George G.; Maupin, Gary D.

    2004-08-01

    Cordierite diesel particulate filters (DPFs) offer one of the most promising aftertreatment technologies to meet the quickly approaching EPA 2007 heavy-duty emissions regulations. A critical, yet poorly understood, component of particulate filter modeling is the representation of soot deposition. The structure and distribution of soot deposits upon and within the ceramic substrate directly influence many of the macroscopic phenomenon of interest, including filtration efficiency, back pressure, and filter regeneration. Intrinsic soot cake properties such as packing density and permeability coefficients remain inadequately characterized. The work reported in this paper involves subgrid modeling techniques which may prove useful in resolving these inadequacies. The technique involves the use of a lattice Boltzmann modeling approach. This approach resolves length scales which are orders of magnitude below those typical of a standard computational fluid dynamics (CFD) representation of an aftertreatment device. Individual soot particles are introduced and tracked as they move through the flow field and are deposited on the filter substrate or previously deposited particles. Electron micrographs of actual soot deposits were taken and compared to the model predictions. Descriptions of the modeling technique and the development of the computational domain are provided. Preliminary results are presented, along with some comparisons with experimental observations.
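    In the spirit of tracking individual particles until they stick, a toy deposition model can be sketched as a biased lattice random walk with sticking. This is far simpler than the lattice-Boltzmann flow solution in the paper (no hydrodynamics, hypothetical grid sizes), but it shows how a dendritic soot cake builds up particle by particle:

```python
import random

def deposit_particles(width, height, n_particles, seed=1):
    """Toy subgrid deposition model: particles drift down a 2-D channel with a
    random lateral walk (periodic in x) and stick to the substrate (bottom row)
    or to any previously deposited particle."""
    random.seed(seed)
    occupied = {(x, 0) for x in range(width)}          # filter-wall substrate
    deposited = 0
    for _ in range(n_particles):
        x, y = random.randrange(width), height - 1     # inject at the top
        while True:
            x = (x + random.choice((-1, 0, 1))) % width
            y -= 1                                     # mean flow towards the wall
            nbrs = [(x, y - 1), ((x - 1) % width, y), ((x + 1) % width, y)]
            if any(n in occupied for n in nbrs) or y - 1 < 0:
                occupied.add((x, y))                   # particle sticks here
                deposited += 1
                break
    return occupied, deposited

occupied, deposited = deposit_particles(width=20, height=30, n_particles=100)
```

    Even this crude model reproduces the qualitative point of the abstract: the cake's packing density and surface roughness emerge from the particle-scale sticking rules, which is the behavior the lattice-Boltzmann subgrid model resolves properly under a real flow field.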

  10. Three-dimensional integrated CAE system applying computer graphic technique

    International Nuclear Information System (INIS)

    Kato, Toshisada; Tanaka, Kazuo; Akitomo, Norio; Obata, Tokayasu.

    1991-01-01

    A three-dimensional CAE system for nuclear power plant design is presented. This system utilizes high-speed computer graphic techniques for the plant design review, and an integrated engineering database for handling the large amount of nuclear power plant engineering data in a unified data format. Applying this system makes it possible to construct a nuclear power plant using only computer data from the basic design phase to the manufacturing phase, and it increases the productivity and reliability of the nuclear power plants. (author)

  11. A goal-oriented field measurement filtering technique for the identification of material model parameters

    KAUST Repository

    Lubineau, Gilles

    2009-05-16

    The post-processing of experiments with nonuniform fields is still a challenge: the information is often much richer, but its interpretation for identification purposes is not straightforward. However, this is a very promising field of development because it would pave the way for the robust identification of multiple material parameters using only a small number of experiments. This paper presents a goal-oriented filtering technique in which data are combined into new output fields which are strongly correlated with specific quantities of interest (the material parameters to be identified). Thus, this combination, which is nonuniform in space, constitutes a filter of the experimental outputs, whose relevance is quantified by a quality function based on global variance analysis. Then, this filter is optimized using genetic algorithms. © 2009 Springer-Verlag.

  12. A Generic Current Mode Design for Multifunction Grounded Capacitor Filters Employing Log-Domain Technique

    Directory of Open Access Journals (Sweden)

    N. A. Shah

    2011-01-01

    Full Text Available A generic design (GD) for realizing an nth-order log-domain multifunction filter (MFF), which can yield four possible stable filter configurations, each simultaneously offering lowpass (LP), highpass (HP), and bandpass (BP) frequency responses, is presented. These filters are very simple, consisting of merely a few exponential transconductor cells and capacitors; with all elements grounded, they can absorb the shunt parasitic capacitances, are electronically tuneable, and are suitable for monolithic integration. Furthermore, being designed using the log-domain technique, the filter inherits all of its advantages. As an example, 5th-order MFFs are designed for each case and their performances are evaluated through simulation. Lastly, a comparative study of the MFFs is also carried out, which helps in selecting the better high-order MFF for a given application.

  13. Applying field mapping refractive beam shapers to improve holographic techniques

    Science.gov (United States)

    Laskin, Alexander; Williams, Gavin; McWilliam, Richard; Laskin, Vadim

    2012-03-01

    Performance of various holographic techniques can be essentially improved by homogenizing the intensity profile of the laser beam using beam shaping optics, for example achromatic field mapping refractive beam shapers such as the πShaper. The operational principle of these devices presumes transformation of the laser beam intensity from a Gaussian to a flattop profile with high flatness of the output wavefront, preservation of beam consistency, a collimated output beam of low divergence, high transmittance, extended depth of field and negligible residual wave aberration, while the achromatic design provides the capability to work with several laser sources of different wavelengths simultaneously. Applying these beam shapers brings serious benefits to Spatial Light Modulator (SLM) based techniques such as Computer Generated Holography or Dot-Matrix mastering of security holograms, since uniform illumination of an SLM simplifies the mathematical calculations and increases the predictability and reliability of the imaging results. Another example is multicolour Denisyuk holography, where the achromatic πShaper provides uniform illumination of a field at various wavelengths simultaneously. This paper describes some design basics of the field mapping refractive beam shapers and optical layouts for applying them in holographic systems. Examples of real implementations and experimental results are presented as well.

  14. A filter technique for optimising the photon energy response of a silicon pin diode dosemeter

    International Nuclear Information System (INIS)

    Olsher, R.H.; Eisen, Y.

    1996-01-01

    Unless they are energy compensated, silicon PIN diodes used in electronic pocket dosemeters have a significant over-response below 200 keV. Siemens uses three diodes in parallel with individual filters to produce excellent energy and angular response. An algorithm based on the photon spectrum of a single diode could also be used to flatten the energy response. The common commercial practice, however, is to use a single diode with a simple filter to flatten the energy response, despite the mediocre low energy photon response. The filter technique with an opening has been used for energy compensating GM detectors and proportional counters, and a new variation of it has been investigated which compensates the energy response of a silicon PIN diode while maintaining an extended low energy response. It uses a composite filter of two or more materials with several openings whose individual area is in the range of 15% to 25% of the diode's active area. One opening is centred over the diode's active area and the others are located at the periphery of the active area to preserve a good polar response to ±45°. Monte Carlo radiation transport methods were used to simulate the coupled electron-photon transport through a Hamamatsu S2506-01 diode and to determine the energy response of the diode for a variety of filters. In current mode, the resultant dosemeter energy response relative to air dose was within -15% and +30% for 0° incidence over the energy range from 15 keV to 1 MeV. In pulse mode, the resultant dosemeter energy response was within -25% and +50% for 0° incidence over the energy range from 30 keV to 10 MeV. For ±45° incidence, the energy response was within -25% and +40% from 40 keV to 10 MeV. The theoretical viability of the filter technique has been shown in this work. (Author)

  15. Performance of swarm based optimization techniques for designing digital FIR filter: A comparative study

    Directory of Open Access Journals (Sweden)

    I. Sharma

    2016-09-01

    Full Text Available In this paper, a linear phase FIR filter is designed through a recently proposed nature-inspired optimization algorithm known as Cuckoo search (CS). A comparative study of Cuckoo search (CS), particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization methods in the field of linear phase FIR filter design is also presented. For this purpose, an improved L1 weighted error function is formulated in the frequency domain and minimized through CS, PSO and ABC respectively. The error or objective function has a controlling parameter wt which controls the amount of ripple in the desired band of frequency. The performance of the FIR filter is examined through three key parameters: Maximum Passband Ripple (MPR), Maximum Stopband Ripple (MSR) and Stopband Attenuation (As). The comparative study and the simulation results reveal that the filter designed with CS gives better performance in terms of Maximum Stopband Ripple (MSR) and Stopband Attenuation (As) for low order filter design, and for higher orders it also gives better performance in terms of Maximum Passband Ripple (MPR). The superiority of the proposed technique is also shown through comparison with other recently proposed methods.
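A minimal sketch of the swarm-based design idea, using plain particle swarm optimization (PSO) to minimize a wt-weighted L1-type error of a type-I linear-phase lowpass FIR. The cutoff, frequency grid, and PSO constants are illustrative assumptions, and the paper's Cuckoo search variant is not reproduced here.

```python
import numpy as np

def amplitude(a, w):
    """Zero-phase amplitude A(w) = sum_k a[k] * cos(k * w) of a type-I FIR."""
    k = np.arange(len(a))
    return np.cos(np.outer(w, k)) @ a

def weighted_error(a, w, ideal, wt):
    """L1-style weighted error; wt > 1 penalizes stopband ripple harder."""
    weight = np.where(ideal > 0.5, 1.0, wt)
    return np.sum(weight * np.abs(amplitude(a, w) - ideal))

def pso_fir(n_taps=9, wt=2.0, n_particles=30, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    w = np.linspace(0, np.pi, 128)
    ideal = (w <= 0.4 * np.pi).astype(float)   # ideal lowpass, cutoff 0.4*pi

    pos = rng.uniform(-0.5, 0.5, (n_particles, n_taps))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pcost = np.array([weighted_error(p, w, ideal, wt) for p in pos])
    gbest = pbest[pcost.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([weighted_error(p, w, ideal, wt) for p in pos])
        improved = cost < pcost
        pbest[improved], pcost[improved] = pos[improved], cost[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

a_short, c_short = pso_fir(iters=20)   # same seed: first 20 iterations shared
a_long, c_long = pso_fir(iters=300)    # best-so-far cost can only improve
```

Because the personal-best cost is monotonically non-increasing within a run, the longer run is guaranteed not to end worse than the shorter one under the same seed.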

  16. Tunable coherence-free microwave photonic bandpass filter based on double cross gain modulation technique.

    Science.gov (United States)

    Chan, Erwin H W

    2012-10-08

    A tunable, coherence-free, high-resolution microwave photonic bandpass filter, which is compatible with insertion into a conventional fiber optic link, is presented. It is based on using two cross gain modulation based wavelength converters in a recursive loop. The double cross gain modulation technique solves the semiconductor optical amplifier facet reflection problem of the conventional recursive structure; hence the new microwave photonic signal processor exhibits no coherent interference and no phase-induced intensity noise. It allows arbitrary narrow-linewidth telecommunication-type lasers to be used while enabling stable filter operation to be realized. The filter passband frequency can be tuned by using a wavelength tunable laser and a wavelength dependent time delay component. Experimental results demonstrate robust high-resolution bandpass filter operation with narrow-linewidth sources, no phase-induced intensity noise and a high signal-to-noise ratio performance. Tunable coherence-free operation of the high-resolution bandpass filter is also demonstrated.
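An idealized recirculating delay-line processor of the kind described above has the classic comb response H(f) = kappa / (1 - g * exp(-j 2 pi f tau)); a short numerical sketch, where the loop delay and net loop gain are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def recursive_loop_response(f, tau=10e-9, gain=0.9, kappa=1.0):
    """|H(f)| of an ideal recirculating delay line,
    H = kappa / (1 - gain * exp(-j 2 pi f tau)): passbands repeat every
    FSR = 1/tau, and the loop gain (< 1 for stability) sets resolution."""
    return np.abs(kappa / (1.0 - gain * np.exp(-2j * np.pi * f * tau)))

fsr = 1.0 / 10e-9                 # 100 MHz free spectral range
f = np.linspace(0.0, 3 * fsr, 3001)
h = recursive_loop_response(f)    # peaks of kappa/(1-gain) at multiples of the FSR
```

Pushing the loop gain toward 1 narrows the passbands, which is how a recursive loop achieves high resolution.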

  17. Archaeometry: nuclear and conventional techniques applied to the archaeological research

    International Nuclear Information System (INIS)

    Esparza L, R.; Cardenas G, E.

    2005-01-01

    The book presented here consists of twelve articles that approach, from different perspectives, topics such as archaeological prospecting, the analysis of pre-Hispanic and colonial ceramics, obsidian and mural painting, as well as dating and questions of data organization. Following the chronological order in which exploration techniques and laboratory studies are required, the texts on the systematic and detailed study of archaeological sites are presented first, followed by topics related to the application of diverse nuclear techniques such as PIXE, RBS, XRD, NAA, SEM and Moessbauer spectroscopy, along with other conventional techniques. Multidisciplinarity is an aspect that stands out in this work, owing to the great specialization of the work presented in the archaeological studies, spanning field topography, mapping and excavation and, of course, laboratory tests. Most of the articles are the result of several years of investigation, as recorded in each article. The texts gathered here emphasize the technical aspects of each investigation, the modern computing systems applied to prospecting and archaeological mapping, and the chemical and physical analysis of organic materials, metal artifacts, the diverse rocks used in the pre-Hispanic epoch, and mural and ceramic paintings, characteristics that justly underline the potential of collective works. (Author)

  18. Multi-decadal analysis of root-zone soil moisture applying the exponential filter across CONUS

    Directory of Open Access Journals (Sweden)

    K. J. Tobin

    2017-09-01

    Full Text Available This study applied the exponential filter to produce an estimate of root-zone soil moisture (RZSM). Four types of microwave-based, surface satellite soil moisture were used. The core remotely sensed data for this study came from NASA's long-lasting AMSR-E mission. Additionally, three other products were obtained from the European Space Agency Climate Change Initiative (CCI). These datasets were blended based on all available satellite observations (CCI-active, CCI-passive, and CCI-combined). All of these products were 0.25° and taken daily. We applied the filter to produce a soil moisture index (SWI) that others have successfully used to estimate RZSM. The only unknown in this approach was the characteristic time of soil moisture variation (T). We examined five different eras (1997–2002; 2002–2005; 2005–2008; 2008–2011; 2011–2014) that represented periods with different satellite data sensors. SWI values were compared with in situ soil moisture data from the International Soil Moisture Network at a depth ranging from 20 to 25 cm. Selected networks included the US Department of Energy Atmospheric Radiation Measurement (ARM) program (25 cm), Soil Climate Analysis Network (SCAN; 20.32 cm), SNOwpack TELemetry (SNOTEL; 20.32 cm), and the US Climate Reference Network (USCRN; 20 cm). We selected in situ stations that had reasonable completeness. These datasets were used to filter out periods with freezing temperatures and rainfall using data from the Parameter elevation Regression on Independent Slopes Model (PRISM). Additionally, we only examined sites where surface and root-zone soil moisture had a reasonably high lagged r value (r > 0.5). The unknown T value was constrained based on two approaches: optimization of root mean square error (RMSE) and calculation based on the normalized difference vegetation index (NDVI) value. Both approaches yielded comparable results; although, as to be expected, the optimization approach generally outperformed NDVI-based estimates.

  19. Multi-decadal analysis of root-zone soil moisture applying the exponential filter across CONUS

    Science.gov (United States)

    Tobin, Kenneth J.; Torres, Roberto; Crow, Wade T.; Bennett, Marvin E.

    2017-09-01

    This study applied the exponential filter to produce an estimate of root-zone soil moisture (RZSM). Four types of microwave-based, surface satellite soil moisture were used. The core remotely sensed data for this study came from NASA's long-lasting AMSR-E mission. Additionally, three other products were obtained from the European Space Agency Climate Change Initiative (CCI). These datasets were blended based on all available satellite observations (CCI-active, CCI-passive, and CCI-combined). All of these products were 0.25° and taken daily. We applied the filter to produce a soil moisture index (SWI) that others have successfully used to estimate RZSM. The only unknown in this approach was the characteristic time of soil moisture variation (T). We examined five different eras (1997-2002; 2002-2005; 2005-2008; 2008-2011; 2011-2014) that represented periods with different satellite data sensors. SWI values were compared with in situ soil moisture data from the International Soil Moisture Network at a depth ranging from 20 to 25 cm. Selected networks included the US Department of Energy Atmospheric Radiation Measurement (ARM) program (25 cm), Soil Climate Analysis Network (SCAN; 20.32 cm), SNOwpack TELemetry (SNOTEL; 20.32 cm), and the US Climate Reference Network (USCRN; 20 cm). We selected in situ stations that had reasonable completeness. These datasets were used to filter out periods with freezing temperatures and rainfall using data from the Parameter elevation Regression on Independent Slopes Model (PRISM). Additionally, we only examined sites where surface and root-zone soil moisture had a reasonably high lagged r value (r > 0.5). The unknown T value was constrained based on two approaches: optimization of root mean square error (RMSE) and calculation based on the normalized difference vegetation index (NDVI) value. Both approaches yielded comparable results; although, as to be expected, the optimization approach generally outperformed NDVI-based estimates.
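The exponential filter referred to in this record is commonly implemented in its recursive form, where the only free parameter is the characteristic time T; a minimal sketch (variable names are illustrative):

```python
import math

def exponential_filter(times, ssm, T):
    """Recursive exponential filter: surface soil moisture -> Soil Water
    Index (SWI), a proxy for root-zone soil moisture.

    SWI_n = SWI_{n-1} + K_n * (ssm_n - SWI_{n-1})
    K_n   = K_{n-1} / (K_{n-1} + exp(-(t_n - t_{n-1}) / T))
    with t in days and T the characteristic time length in days.
    """
    swi = [ssm[0]]
    K = 1.0
    for n in range(1, len(ssm)):
        K = K / (K + math.exp(-(times[n] - times[n - 1]) / T))
        swi.append(swi[-1] + K * (ssm[n] - swi[-1]))
    return swi

# a step in surface moisture propagates into the SWI with a lag set by T
swi = exponential_filter([0, 1, 2, 3], [0.0, 1.0, 1.0, 1.0], T=5.0)
```

Larger T values smooth and delay the response, which is why T can be tuned either by RMSE optimization against in situ profiles or estimated from vegetation (NDVI), as the study does.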

  20. Applying machine learning classification techniques to automate sky object cataloguing

    Science.gov (United States)

    Fayyad, Usama M.; Doyle, Richard J.; Weir, W. Nick; Djorgovski, Stanislav

    1993-08-01

    We describe the application of Artificial Intelligence machine learning techniques to the development of an automated tool for the reduction of a large scientific data set. The 2nd Mt. Palomar Northern Sky Survey is nearly completed. This survey provides comprehensive coverage of the northern celestial hemisphere in the form of photographic plates. The plates are being transformed into digitized images whose quality will probably not be surpassed in the next ten to twenty years. The images are expected to contain on the order of 10^7 galaxies and 10^8 stars. Astronomers wish to determine which of these sky objects belong to various classes of galaxies and stars. Unfortunately, the size of this data set precludes analysis in an exclusively manual fashion. Our approach is to develop a software system which integrates the functions of independently developed techniques for image processing and data classification. Digitized sky images are passed through image processing routines to identify sky objects and to extract a set of features for each object. These routines are used to help select a useful set of attributes for classifying sky objects. Then GID3 (Generalized ID3) and O-B Tree, two inductive learning techniques, learn classification decision trees from examples. These classifiers are then applied to new data. This development process is highly interactive, with astronomer input playing a vital role. Astronomers refine the feature set used to construct sky object descriptions, and evaluate the performance of the automated classification technique on new data. This paper gives an overview of the machine learning techniques with an emphasis on their general applicability, describes the details of our specific application, and reports the initial encouraging results. The results indicate that our machine learning approach is well-suited to the problem. The primary benefit of the approach is increased data reduction throughput. Another benefit is
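The heart of ID3-style induction, as used by GID3, is choosing at each tree node the attribute with the largest information gain; a minimal sketch with hypothetical sky-object features (the feature names and the four-row data set are invented for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr, label_key="class"):
    """Entropy reduction from splitting `examples` (dicts) on `attr`,
    the quantity ID3 maximizes when choosing the next tree node."""
    labels = [e[label_key] for e in examples]
    remainder = 0.0
    for v in {e[attr] for e in examples}:
        subset = [e[label_key] for e in examples if e[attr] == v]
        remainder += len(subset) / len(examples) * entropy(subset)
    return entropy(labels) - remainder

# hypothetical feature vectors extracted from sky images
data = [
    {"bright": "high", "extended": "yes", "class": "galaxy"},
    {"bright": "high", "extended": "no",  "class": "star"},
    {"bright": "low",  "extended": "yes", "class": "galaxy"},
    {"bright": "low",  "extended": "no",  "class": "star"},
]
```

Here "extended" separates the classes perfectly (gain 1 bit) while "bright" carries no information (gain 0), so ID3 would split on "extended" first.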

  1. A Novel (DDCC-SFG-Based Systematic Design Technique of Active Filters

    Directory of Open Access Journals (Sweden)

    M. Fakhfakh

    2013-09-01

    Full Text Available This paper proposes a novel idea for the synthesis of active filters that is based on the use of signal-flow graph (SFG stamps of differential difference current conveyors (DDCCs. On the basis of an RLC passive network or a filter symbolic transfer function, an equivalent SFG is constructed. DDCCs’ SFGs are identified inside the constructed ‘active’ graph, and thus the equivalent circuit can be easily synthesized. We show that the DDCC and its ‘derivatives’, i.e. differential voltage current conveyors and the conventional current conveyors, are the main basic building blocks in such design. The practicability of the proposed technique is showcased via three application examples. Spice simulations are given to show the viability of the proposed technique.

  2. Compact dual-band bandpass filter based on signal-interference techniques

    Science.gov (United States)

    Ma, Xingbing; Jiang, Ting

    2017-08-01

    To realize good isolation between two signal passbands, a dual-band bandpass filter (BPF) based on signal-interference techniques is presented in this article, in which five open-loop resonators are adopted. The proposed filter topology is made up of two signal transmission paths in parallel. Under signal-interference principles, the overlapping section of the two original passbands, determined respectively by the two different transmission paths, is selectively removed from the combined passband; as a result, the two target passbands are realized. In addition, good isolation between the two target passbands is established due to two new transmission zeros produced by the adopted signal-interference techniques. Finally, good agreement is observed between simulation and measurement.
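The signal-interference principle can be sketched with two ideal parallel delay paths whose outputs are summed: transmission zeros appear wherever the paths arrive out of phase, carving a notch into the combined passband. The path delays below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def two_path_transmission(f, tau1, tau2):
    """|S21| of two summed half-amplitude ideal delay paths; transmission
    zeros sit at f = (2m + 1) / (2 * (tau2 - tau1)), where the two paths
    cancel, splitting one wide passband into isolated bands."""
    s21 = 0.5 * np.exp(-2j * np.pi * f * tau1) + 0.5 * np.exp(-2j * np.pi * f * tau2)
    return np.abs(s21)

f = np.linspace(0.0, 2e9, 2001)          # 0-2 GHz sweep
s = two_path_transmission(f, 0.0, 1e-9)  # 1 ns path difference: zeros at 0.5 and 1.5 GHz
```

In the real filter the paths are resonator chains rather than pure delays, but the cancellation mechanism that creates the isolating zeros is the same.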

  3. Applying Metrological Techniques to Satellite Fundamental Climate Data Records

    Science.gov (United States)

    Woolliams, Emma R.; Mittaz, Jonathan PD; Merchant, Christopher J.; Hunt, Samuel E.; Harris, Peter M.

    2018-02-01

    Quantifying long-term environmental variability, including climatic trends, requires decadal-scale time series of observations. The reliability of such trend analysis depends on the long-term stability of the data record, and on understanding the sources of uncertainty in historic, current and future sensors. We give a brief overview of how metrological techniques can be applied to historical satellite data sets. In particular, we discuss the implications of error correlation at different spatial and temporal scales and the forms of such correlation, and consider how uncertainty is propagated with partial correlation. We give a form of the Law of Propagation of Uncertainties that considers the propagation of uncertainties associated with common errors, to give the covariance associated with Earth observations in different spectral channels.
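The correlated form of the Law of Propagation of Uncertainties mentioned above, u_c^2 = sum_ij c_i c_j r_ij u_i u_j, can be sketched directly; the two-channel numbers are illustrative.

```python
import numpy as np

def combined_uncertainty(c, u, r):
    """u_c^2 = sum_ij c_i c_j r_ij u_i u_j: propagation with an
    error-correlation matrix r (identity = independent errors)."""
    c, u = np.asarray(c, float), np.asarray(u, float)
    cov = np.asarray(r, float) * np.outer(u, u)  # covariance from correlation
    return float(np.sqrt(c @ cov @ c))

# mean of two spectral-channel measurements: sensitivities c = [0.5, 0.5]
c, u = [0.5, 0.5], [0.2, 0.2]
u_indep = combined_uncertainty(c, u, np.eye(2))         # independent errors
u_common = combined_uncertainty(c, u, np.ones((2, 2)))  # fully common errors
```

Averaging reduces the independent-error contribution by sqrt(2) but does nothing to the common (systematic) part, which is exactly why channel-to-channel covariance matters for long-term records.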

  4. Airflow measurement techniques applied to radon mitigation problems

    International Nuclear Information System (INIS)

    Harrje, D.T.; Gadsby, K.J.

    1989-01-01

    During the past decade a multitude of diagnostic procedures associated with the evaluation of air infiltration and air leakage sites have been developed. The spirit of international cooperation and exchange of ideas within the AIC-AIVC conferences has greatly facilitated the adoption and use of these measurement techniques in the countries participating in Annex V. But wide application of such diagnostic methods is not limited to air infiltration alone. The subject of this paper concerns the ways to evaluate and improve radon reduction in buildings using diagnostic methods directly related to developments familiar to the AIVC. Radon problems are certainly not unique to the United States, and the methods described here have to a degree been applied by researchers of other countries faced with similar problems. The radon problem involves more than a harmful pollutant of the living spaces of our buildings -- it also involves energy to operate radon removal equipment and the loss of interior conditioned air as a direct result. The techniques used for air infiltration evaluation will be shown to be very useful in dealing with the radon mitigation challenge. 10 refs., 7 figs., 1 tab

  5. Estimating particulate black carbon concentrations using two offline light absorption methods applied to four types of filter media

    Science.gov (United States)

    Davy, Pamela M.; Tremper, Anja H.; Nicolosi, Eleonora M. G.; Quincey, Paul; Fuller, Gary W.

    2017-03-01

    Atmospheric particulate black carbon has been linked to adverse health outcomes. Additional black carbon measurements would aid a better understanding of population exposure in epidemiological studies as well as the success, or otherwise, of relevant abatement technologies and policies. Two light absorption measurement methods of particles collected on filters have been applied to four different types of filters to provide estimations of particulate black carbon concentrations. The ratio of transmittance (ln(I0/I)) to reflectance (ln(R0/R)) varied by filter type and ranged from close to 0.5 (as expected from simple theory) to 1.35 between the four filter types tested. The relationship between light absorption and black carbon, measured by the thermal EC(TOT) method, was nonlinear and differed between filter type and measurement method. This is particularly relevant to epidemiological studies that use light absorption as an exposure metric. An extensive archive of filters was used to derive loading factors and mass extinction coefficients for each filter type. Particulate black carbon time series were then calculated at locations where such measurements were not previously available. When applied to two roads in London, black carbon concentrations were found to have increased between 2011 and 2013, by 0.3 (CI: -0.1, 0.5) and 0.4 (CI: 0.1, 0.9) μg m⁻³ year⁻¹, in contrast to the expectation from exhaust abatement policies. New opportunities using archived or bespoke filter collections for studies on the health effects of black carbon and the efficacy of abatement strategies are created.
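A hedged sketch of the transmittance-to-black-carbon conversion: absorbance A = ln(I0/I) is scaled by the filter spot area, sampled air volume, and an assumed mass absorption cross-section sigma. In the study this relationship is calibrated against the thermal EC method and is nonlinear at high loadings, which this linear sketch ignores; all numbers are illustrative.

```python
import math

def black_carbon_conc(I0, I, spot_area_m2, volume_m3, sigma_m2_per_g):
    """BC mass concentration (g m^-3) from filter transmittance.

    A  = ln(I0 / I)                    absorbance of the loaded spot
    BC = A * spot_area / (sigma * V)   sigma: assumed mass absorption
                                       cross-section (m^2 g^-1)
    """
    A = math.log(I0 / I)
    return A * spot_area_m2 / (sigma_m2_per_g * volume_m3)

# illustrative numbers only: 2 cm^2 spot, 20 m^3 sampled, sigma = 10 m^2/g
bc = black_carbon_conc(100.0, 60.0, 2e-4, 20.0, 10.0)
```

A darker filter (lower transmitted intensity I) yields a higher estimated concentration, as expected.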

  6. Analytical techniques applied to study cultural heritage objects

    International Nuclear Information System (INIS)

    Rizzutto, M.A.; Curado, J.F.; Bernardes, S.; Campos, P.H.O.V.; Kajiya, E.A.M.; Silva, T.F.; Rodrigues, C.L.; Moro, M.; Tabacniks, M.; Added, N.

    2015-01-01

    The scientific study of artistic and cultural heritage objects has been routinely performed in Europe and the United States for decades. In Brazil this research area is growing, mainly through the use of physical and chemical characterization methods. Since 2003 the Group of Applied Physics with Particle Accelerators of the Physics Institute of the University of Sao Paulo (GFAA-IF) has been working with various methodologies for material characterization and analysis of cultural objects. Initially this used ion beam analysis performed with Particle Induced X-Ray Emission (PIXE), Rutherford Backscattering (RBS) and, recently, Ion Beam Induced Luminescence (IBIL) for the determination of the elements and chemical compounds in the surface layers. These techniques are widely used in the Laboratory of Materials Analysis with Ion Beams (LAMFI-USP). Recently, the GFAA expanded its studies to other possibilities of analysis enabled by imaging techniques that, coupled with elemental and compositional characterization, provide a better understanding of the materials and techniques used in the creative process in the manufacture of objects. The imaging analyses, mainly used to examine and document artistic and cultural heritage objects, are performed through images with visible light, infrared reflectography (IR), fluorescence with ultraviolet radiation (UV), tangential light and digital radiography. Expanding the possibilities of analysis further, new capabilities were added using portable equipment such as Energy Dispersive X-Ray Fluorescence (ED-XRF) and Raman Spectroscopy that can be used for analysis in situ at the museums. The results of these analyses are providing valuable information on the manufacturing process and have provided new information on objects of different University of Sao Paulo museums. To improve the arsenal of cultural heritage analysis techniques, a 3D robotic stage was recently constructed for the precise positioning of samples in the external beam setup.

  7. Analytical techniques applied to study cultural heritage objects

    Energy Technology Data Exchange (ETDEWEB)

    Rizzutto, M.A.; Curado, J.F.; Bernardes, S.; Campos, P.H.O.V.; Kajiya, E.A.M.; Silva, T.F.; Rodrigues, C.L.; Moro, M.; Tabacniks, M.; Added, N., E-mail: rizzutto@if.usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Instituto de Fisica

    2015-07-01

    The scientific study of artistic and cultural heritage objects has been routinely performed in Europe and the United States for decades. In Brazil this research area is growing, mainly through the use of physical and chemical characterization methods. Since 2003 the Group of Applied Physics with Particle Accelerators of the Physics Institute of the University of Sao Paulo (GFAA-IF) has been working with various methodologies for material characterization and analysis of cultural objects. Initially this used ion beam analysis performed with Particle Induced X-Ray Emission (PIXE), Rutherford Backscattering (RBS) and, recently, Ion Beam Induced Luminescence (IBIL) for the determination of the elements and chemical compounds in the surface layers. These techniques are widely used in the Laboratory of Materials Analysis with Ion Beams (LAMFI-USP). Recently, the GFAA expanded its studies to other possibilities of analysis enabled by imaging techniques that, coupled with elemental and compositional characterization, provide a better understanding of the materials and techniques used in the creative process in the manufacture of objects. The imaging analyses, mainly used to examine and document artistic and cultural heritage objects, are performed through images with visible light, infrared reflectography (IR), fluorescence with ultraviolet radiation (UV), tangential light and digital radiography. Expanding the possibilities of analysis further, new capabilities were added using portable equipment such as Energy Dispersive X-Ray Fluorescence (ED-XRF) and Raman Spectroscopy that can be used for analysis 'in situ' at the museums. The results of these analyses are providing valuable information on the manufacturing process and have provided new information on objects of different University of Sao Paulo museums. To improve the arsenal of cultural heritage analysis techniques, a 3D robotic stage was recently constructed for the precise positioning of samples in the external beam setup.

  8. Introducer Curving Technique for the Prevention of Tilting of Transfemoral Gunther Tulip Inferior Vena Cava Filter

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Liang; Shen, Jing; Tong, Jia Jie [The First Hospital of China Medical University, Shenyang (China); Huang, De Sheng [College of Basic Medical Science, China Medical University, Shenyang (China)

    2012-07-15

    To determine whether the introducer curving technique is useful in decreasing the degree of tilting of transfemoral Tulip filters. The study sample group consisted of 108 patients with deep vein thrombosis who were enrolled and planned to undergo thrombolysis, and who accepted the transfemoral Tulip filter insertion procedure. The patients were randomly divided into Group C and Group T. The introducer curving technique was adopted in Group T. The post-implantation filter tilting angle (ACF) was measured in an anteroposterior projection. The retrieval hook adhering to the vascular wall was measured via tangential cavogram during retrieval. The overall average ACF was 5.8 ± 4.14 degrees. In Group C, the average ACF was 7.1 ± 4.52 degrees. In Group T, the average ACF was 4.4 ± 3.20 degrees. The groups displayed a statistically significant difference (t = 3.573, p = 0.001) in ACF. Additionally, the difference in ACF between the left and right approaches turned out to be statistically significant (7.1 ± 4.59 vs. 5.1 ± 3.82, t = 2.301, p = 0.023). The proportion of severe tilt (ACF ≥ 10 degrees) in Group T was significantly lower than that in Group C (9.3% vs. 24.1%, X² = 4.267, p = 0.039). Between the groups, the difference in the rate of the retrieval hook adhering to the vascular wall was also statistically significant (2.9% vs. 24.2%, X² = 5.030, p = 0.025). The introducer curving technique appears to minimize the incidence and extent of transfemoral Tulip filter tilting.
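The reported group comparison can be approximately reproduced from the summary statistics with a pooled two-sample t statistic, assuming the 108 patients split evenly into 54 per group (an assumption; the abstract states only the total):

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Two-sample Student t statistic with pooled variance."""
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))

# Group C: 7.1 +/- 4.52 degrees, Group T: 4.4 +/- 3.20 degrees
t_stat = pooled_t(7.1, 4.52, 54, 4.4, 3.20, 54)
```

With these numbers t_stat comes out close to the reported t = 3.573; the small difference is consistent with rounding in the published means and standard deviations.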

  9. Introducer Curving Technique for the Prevention of Tilting of Transfemoral Gunther Tulip Inferior Vena Cava Filter

    International Nuclear Information System (INIS)

    Xiao, Liang; Shen, Jing; Tong, Jia Jie; Huang, De Sheng

    2012-01-01

    To determine whether the introducer curving technique is useful in decreasing the degree of tilting of transfemoral Tulip filters. The study sample group consisted of 108 patients with deep vein thrombosis who were enrolled and planned to undergo thrombolysis, and who accepted the transfemoral Tulip filter insertion procedure. The patients were randomly divided into Group C and Group T. The introducer curving technique was adopted in Group T. The post-implantation filter tilting angle (ACF) was measured in an anteroposterior projection. The retrieval hook adhering to the vascular wall was measured via tangential cavogram during retrieval. The overall average ACF was 5.8 ± 4.14 degrees. In Group C, the average ACF was 7.1 ± 4.52 degrees. In Group T, the average ACF was 4.4 ± 3.20 degrees. The groups displayed a statistically significant difference (t = 3.573, p = 0.001) in ACF. Additionally, the difference in ACF between the left and right approaches turned out to be statistically significant (7.1 ± 4.59 vs. 5.1 ± 3.82, t = 2.301, p = 0.023). The proportion of severe tilt (ACF ≥ 10 degrees) in Group T was significantly lower than that in Group C (9.3% vs. 24.1%, X² = 4.267, p = 0.039). Between the groups, the difference in the rate of the retrieval hook adhering to the vascular wall was also statistically significant (2.9% vs. 24.2%, X² = 5.030, p = 0.025). The introducer curving technique appears to minimize the incidence and extent of transfemoral Tulip filter tilting.

  10. Applying advanced digital signal processing techniques in industrial radioisotopes applications

    International Nuclear Information System (INIS)

    Mahmoud, H.K.A.E.

    2012-01-01

    Radioisotopes can be used to obtain signals or images in order to recognize the information inside industrial systems. The main problems of using these techniques are the difficulty of identifying the obtained signals or images and the need for skilled experts to interpret the output data of these applications. At present, the interpretation of the output data from these applications is performed mainly manually, depending heavily on the skills and experience of trained operators. This process is time consuming and the results typically suffer from inconsistency and errors. The objective of the thesis is to apply advanced digital signal processing techniques to improve the treatment and interpretation of the output data from different Industrial Radioisotopes Applications (IRA). This thesis focuses on two IRAs: the Residence Time Distribution (RTD) measurement and the defect inspection of welded pipes using a gamma source (gamma radiography). In the RTD measurement application, this thesis presents methods for signal pre-processing and modeling of the RTD signals. Simulation results are presented for two case studies. The first case study is a laboratory experiment for measuring the RTD in a water flow rig. The second case study is an experiment for measuring the RTD in a phosphate production unit. The thesis proposes an approach for RTD signal identification in the presence of noise. In this approach, after signal processing, the Mel Frequency Cepstral Coefficients (MFCCs) and polynomial coefficients are extracted from the processed signal or from one of its transforms. The Discrete Wavelet Transform (DWT), Discrete Cosine Transform (DCT), and Discrete Sine Transform (DST) have been tested and compared for efficient feature extraction. Neural networks have been used for matching of the extracted features. Furthermore, the Power Density Spectrum (PDS) of the RTD signal has been also used instead of the discrete
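A standard first step in RTD signal processing, upstream of the feature extraction described above, is computing the mean residence time as the first moment of the normalized tracer curve; a minimal sketch with invented tracer data:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (kept explicit for NumPy-version safety)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def mean_residence_time(t, c):
    """Mean residence time from a tracer response curve:
    E(t) = C(t) / integral(C dt);  MRT = integral(t * E(t) dt),
    i.e. the first moment of the residence time distribution."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    return _trapz(t * c, t) / _trapz(c, t)

# invented tracer pulse, symmetric about t = 5 s
t_s = np.linspace(0.0, 10.0, 11)
conc = np.array([0, 0, 0, 1, 2, 3, 2, 1, 0, 0, 0], float)
mrt = mean_residence_time(t_s, conc)
```

For a symmetric pulse the first moment equals the center of the pulse, which gives a quick sanity check on the implementation.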

  11. Satellite SAR interferometric techniques applied to emergency mapping

    Science.gov (United States)

    Stefanova Vassileva, Magdalena; Riccardi, Paolo; Lecci, Daniele; Giulio Tonolo, Fabio; Boccardo Boccardo, Piero; Chiesa, Giuliana; Angeluccetti, Irene

    2017-04-01

    This paper aims to investigate the capabilities of currently available SAR interferometric algorithms in the field of emergency mapping. Several tests have been performed exploiting Copernicus Sentinel-1 data using the COTS software ENVI/SARscape 5.3. Emergency Mapping can be defined as the "creation of maps, geo-information products and spatial analyses dedicated to providing situational awareness emergency management and immediate crisis information for response by means of extraction of reference (pre-event) and crisis (post-event) geographic information/data from satellite or aerial imagery". The conventional differential SAR interferometric technique (DInSAR) and the two currently available multi-temporal SAR interferometric approaches, i.e. Permanent Scatterer Interferometry (PSI) and Small BAseline Subset (SBAS), have been applied to provide crisis information useful for emergency management activities. Depending on the Emergency Management phase considered, a distinction may be made between rapid mapping, i.e. fast provision of geospatial data on the affected area for the immediate emergency response, and monitoring mapping, i.e. detection of phenomena for risk prevention and mitigation activities. In order to evaluate the potential and limitations of the aforementioned SAR interferometric approaches for the specific rapid and monitoring mapping applications, the following main factors have been taken into account: crisis information extracted, input data required, processing time and expected accuracy. The results highlight that DInSAR has the capacity to delineate areas affected by large and sudden deformations and fulfills most of the immediate response requirements. The main limiting factor of interferometry is the availability of a suitable SAR acquisition immediately after the event (e.g. the Sentinel-1 mission, characterized by a 6-day revisit time, may not always satisfy the immediate emergency request). PSI and SBAS techniques are suitable to produce

  12. Gating Techniques for Rao-Blackwellized Monte Carlo Data Association Filter

    Directory of Open Access Journals (Sweden)

    Yazhao Wang

    2014-01-01

    Full Text Available This paper studies the Rao-Blackwellized Monte Carlo data association (RBMCDA) filter for multiple target tracking. The elliptical gating strategies are redesigned and incorporated into the framework of the RBMCDA filter. The obvious benefit is a reduction in time cost, because the data association procedure can be carried out with fewer validated measurements. In addition, the overlapped parts of neighboring validation regions are divided into several separate subregions according to the possible origins of the validated measurements. In these subregions, the measurement uncertainties can be taken into account more reasonably than with a simple elliptical gate. This helps the RBMCDA algorithm achieve higher tracking ability through a better association prior approximation. Simulation results are provided to show the effectiveness of the proposed gating techniques.
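
    The elliptical gate described above is, in essence, a chi-square test on the Mahalanobis distance of the innovation. A minimal sketch (not the paper's implementation; 9.21 is the standard chi-square 99% threshold for 2-D measurements):

```python
import numpy as np

def in_elliptical_gate(z, z_pred, S, gamma=9.21):
    """Accept measurement z if the Mahalanobis distance between z and the
    predicted measurement z_pred (innovation covariance S) is within gamma."""
    v = z - z_pred
    d2 = v @ np.linalg.solve(S, v)   # squared Mahalanobis distance
    return d2 <= gamma

def validate(measurements, z_pred, S, gamma=9.21):
    """Keep only gated measurements before running data association."""
    return [z for z in measurements if in_elliptical_gate(z, z_pred, S, gamma)]
```

    Measurements falling outside the gate are discarded up front, which is exactly why the association step then runs over fewer candidates.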

  13. Advances in Uncertainty Representation and Management for Particle Filtering Applied to Prognostics

    Data.gov (United States)

    National Aeronautics and Space Administration — Particle filters (PF) have been established as the de facto state of the art in failure prognosis. They combine advantages of the rigors of Bayesian estimation to...

  14. Applying machine-learning techniques to Twitter data for automatic hazard-event classification.

    Science.gov (United States)

    Filgueira, R.; Bee, E. J.; Diaz-Doce, D.; Poole, J., Sr.; Singh, A.

    2017-12-01

    The constant flow of information offered by tweets provides valuable information about all sorts of events at a high temporal and spatial resolution. Over the past year we have been analyzing geological hazards/phenomena in real time, such as earthquakes, volcanic eruptions, landslides, floods or the aurora, as part of the GeoSocial project, by geo-locating tweets filtered by keywords in a web map. However, not all the filtered tweets are related to hazard/phenomenon events. This work explores two classification techniques for automatic hazard-event categorization based on tweets about the "Aurora". First, tweets were filtered using aurora-related keywords, removing stop words and selecting the ones written in English. For classifying the remaining tweets between "aurora-event" and "no-aurora-event" categories, we compared two state-of-the-art techniques: Support Vector Machine (SVM) and Deep Convolutional Neural Network (CNN) algorithms. Both approaches belong to the family of supervised learning algorithms, which make predictions based on a labelled training dataset. Therefore, we created a training dataset by tagging 1200 tweets between both categories. The general form of SVM separates two classes by a function (kernel). We compared the performance of four different classifiers (Linear Regression, Logistic Regression, Multinomial Naïve Bayes and Stochastic Gradient Descent) provided by the Scikit-Learn library, using our training dataset to build the classifier. The results showed that Logistic Regression (LR) achieves the best accuracy (87%). We therefore selected the SVM-LR classifier to categorise a large collection of tweets using the "dispel4py" framework. Later, we developed a CNN classifier, where the first layer embeds words into low-dimensional vectors. The next layer performs convolutions over the embedded word vectors. Results from the convolutional layer are max-pooled into a long feature vector, which is classified using a softmax layer. The CNN's accuracy
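
    As a toy illustration of the supervised set-up described above (not the authors' Scikit-Learn pipeline), a multinomial Naive Bayes tweet classifier with add-one smoothing can be written with the standard library alone; the example tweets and labels below are invented:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Multinomial Naive Bayes: per-class word counts plus class priors."""
    word_counts = defaultdict(Counter)
    class_counts = Counter(labels)
    vocab = set()
    for doc, y in zip(docs, labels):
        toks = doc.lower().split()
        word_counts[y].update(toks)
        vocab.update(toks)
    return word_counts, class_counts, vocab

def predict_nb(model, doc):
    """Pick the class maximizing log prior + smoothed log likelihoods."""
    word_counts, class_counts, vocab = model
    n = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for y in class_counts:
        lp = math.log(class_counts[y] / n)
        total = sum(word_counts[y].values())
        for tok in doc.lower().split():
            lp += math.log((word_counts[y][tok] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```

    In practice one would also strip stop words and non-English tweets, as the abstract describes, before training.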

  15. Non destructive assay techniques applied to nuclear materials

    International Nuclear Information System (INIS)

    Gavron, A.

    2001-01-01

    Nondestructive assay (NDA) is a suite of techniques that has matured and become precise, easily implementable, and remotely usable. These techniques provide elaborate safeguards of nuclear material by providing the necessary information for materials accounting. NDA techniques are ubiquitous, reliable, essentially tamper-proof, and simple to use. They make the world a safer place to live in, and they make nuclear energy viable. (author)

  16. Applied research on air pollution using nuclear-related analytical techniques

    International Nuclear Information System (INIS)

    1994-01-01

    A co-ordinated research programme (CRP) on applied research on air pollution using nuclear-related techniques is a global CRP which will run from 1992-1996, and will build upon the experience gained by the Agency from the laboratory support that it has been providing for several years to BAPMoN - the Background Air Pollution Monitoring Network programme organized under the auspices of the World Meteorological Organization. The purpose of this CRP is to promote the use of nuclear analytical techniques in air pollution studies, e.g. NAA, XRF, and PIXE for the analysis of toxic and other trace elements in suspended particulate matter (including air filter samples), rainwater and fog-water samples, and in biological indicators of air pollution (e.g. lichens and mosses). The main purposes of the core programme are i) to support the use of nuclear and nuclear-related analytical techniques for practically-oriented research and monitoring studies on air pollution, ii) to identify major sources of air pollution affecting each of the participating countries with particular reference to toxic heavy metals, and iii) to obtain comparative data on pollution levels in areas of high pollution (e.g. a city centre or a populated area downwind of a large pollution source) and low pollution (e.g. rural areas). This document reports the discussions held during the first Research Co-ordination Meeting (RCM) for the CRP, which took place at IAEA Headquarters in Vienna. Refs, figs and tabs

  17. Defining Prolonged Dwell Time: When Are Advanced Inferior Vena Cava Filter Retrieval Techniques Necessary? An Analysis in 762 Procedures.

    Science.gov (United States)

    Desai, Kush R; Laws, James L; Salem, Riad; Mouli, Samdeep K; Errea, Martin F; Karp, Jennifer K; Yang, Yihe; Ryu, Robert K; Lewandowski, Robert J

    2017-06-01

    Despite growth in placement of retrievable inferior vena cava filters, retrieval rates remain low. Filters with extended implantation times present a challenge to retrieval, where standard techniques often fail. The development of advanced retrieval techniques has positively impacted retrieval of retrievable inferior vena cava filters with prolonged dwell times; however, there is no precise definition of the time point when advanced techniques become necessary. We aim to define prolonged retrievable inferior vena cava filter dwell time by determining the inflection point when the risk of standard retrieval technique failure increases significantly, necessitating advanced retrieval techniques to maintain overall technical success of retrieval. From January 2009 to April 2015, 762 retrieval procedures were identified from a prospectively acquired database. We assessed patient age/sex, filter dwell time, procedural technical success, the use of advanced techniques, and procedure-related adverse events. Overall retrieval success rate was 98% (n=745). When standard retrieval techniques failed, advanced techniques were used; this was necessary 18% of the time (n=138). Logistic regression identified that dwell time was the only risk factor for failure of standard retrieval technique (odds ratio, 1.08; 95% confidence interval, 1.05-1.10). Beyond approximately 7 months of dwell time, the predicted probability of standard technique failure reached 40.9%. Adverse events occurred at a rate of 2% (n=18; 15 minor and 3 major). The necessity of advanced techniques to maintain technical success of retrieval increases with dwell time. Patients with retrievable inferior vena cava filters in place beyond 7 months may benefit from referral to centers with expertise in advanced filter retrieval. © 2017 American Heart Association, Inc.

  18. Spatio-temporal filtering techniques for the detection of disaster-related communication.

    Science.gov (United States)

    Fitzhugh, Sean M; Ben Gibson, C; Spiro, Emma S; Butts, Carter T

    2016-09-01

    Individuals predominantly exchange information with one another through informal, interpersonal channels. During disasters and other disrupted settings, information spread through informal channels regularly outpaces official information provided by public officials and the press. Social scientists have long examined this kind of informal communication in the rumoring literature, but studying rumoring in disrupted settings has posed numerous methodological challenges. Measuring features of informal communication (timing, content, location) with any degree of precision has historically been extremely challenging in small studies and infeasible at large scales. We address this challenge by using online, informal communication from a popular microblogging website for which we have precise spatial and temporal metadata. While the online environment provides a new means for observing rumoring, the abundance of data poses challenges for parsing hazard-related rumoring from countless other topics in numerous streams of communication. Rumoring about disaster events is typically temporally and spatially constrained to places where that event is salient. Accordingly, we use spatial and temporal subsampling to increase the resolution of our detection techniques. By filtering out data from known sources of error (per rumor theories), we greatly enhance the signal of disaster-related rumoring activity. We use these spatio-temporal filtering techniques to detect rumoring during a variety of disaster events, from high-casualty events in major population centers to minimally destructive events in remote areas. We consistently find three phases of response: anticipatory excitation, where warnings and alerts are issued ahead of an event; primary excitation in and around the impacted area; and secondary excitation, which frequently brings a convergence of attention from distant locales onto locations impacted by the event. Our results demonstrate the promise of spatio

  19. Technology optimization techniques for multicomponent optical band-pass filter manufacturing

    Science.gov (United States)

    Baranov, Yuri P.; Gryaznov, Georgiy M.; Rodionov, Andrey Y.; Obrezkov, Andrey V.; Medvedev, Roman V.; Chivanov, Alexey N.

    2016-04-01

    Narrowband optical devices (like IR-sensing devices, celestial navigation systems, solar-blind UV-systems and many others) are one of the fastest-growing areas in optical manufacturing. However, signal strength in this type of application is quite low, and the performance of devices depends on the attenuation level of wavelengths outside the operating range. Modern detectors (photodiodes, matrix detectors, photomultiplier tubes and others) usually do not have the required selectivity, or at worst have higher sensitivity to the background spectrum. Manufacturing a single-component band-pass filter with a high attenuation level is a resource-intensive task, and sometimes no solution is possible with existing technologies. Different types of filters have technology variations of transmittance profile shape due to various production factors. At the same time there are multiple tasks with strict requirements for background spectrum attenuation in narrowband optical devices. For example, in a solar-blind UV-system, wavelengths above 290-300 nm must be attenuated by 180 dB. In this paper, techniques are proposed for assembling multi-component optical band-pass filters, from multiple single elements with technology variations of transmittance profile shape, for optimal signal-to-noise ratio (SNR). Relationships between signal-to-noise ratio and different characteristics of transmittance profile shape are shown. The practical results obtained were in rather good agreement with our calculations.
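
    The benefit of stacking elements follows from out-of-band transmittances multiplying, so attenuations expressed in dB simply add. A minimal sketch (illustrative, not from the paper): three elements of 60 dB each reach the 180 dB cited above.

```python
import math

def cascade_attenuation_db(transmittances):
    """Out-of-band attenuation of stacked filter elements: the combined
    transmittance is the product of the individual ones, so the total
    attenuation in dB is the sum of the per-element attenuations."""
    total_t = math.prod(transmittances)
    return -10.0 * math.log10(total_t)
```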

  20. Image restoration technique using median filter combined with decision tree algorithm

    International Nuclear Information System (INIS)

    Sethu, D.; Assadi, H.M.; Hasson, F.N.; Hasson, N.N.

    2007-01-01

    Images are usually corrupted during transmission, principally due to interference in the channel used for transmission. Images can also be impaired by the addition of various forms of noise; salt-and-pepper noise is commonly used to model such impairment. Salt-and-pepper noise can be caused by errors in data transmission, malfunctioning pixel elements in camera sensors, and timing errors in the digitization process. During the filtering of a noisy image, important features such as edges, lines and other fine details embedded in the image tend to blur because of the filtering operation. The enhancement of noisy data, however, is a very critical process because the sharpening operation can significantly increase the noise. In this respect, contrast enhancement is often necessary in order to highlight details that have been blurred. In this proposed approach we aim to develop an image processing technique that can meet these new requirements, namely high quality and high speed. Furthermore, we aim to prevent noise amplification during the sharpening of image details, and to compare the images restored via the proposed method with those of other kinds of filters. (author)
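
    The basic building block discussed above, a median filter for salt-and-pepper noise, can be sketched in pure Python (the textbook 3x3 filter with edge replication, not the paper's combined decision-tree method):

```python
def median_filter(img, k=3):
    """k x k median filter with edge replication; removes salt-and-pepper
    impulses while preserving edges better than linear smoothing."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = []
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    # clamp indices: replicate edge pixels
                    ii = min(max(i + di, 0), h - 1)
                    jj = min(max(j + dj, 0), w - 1)
                    window.append(img[ii][jj])
            window.sort()
            out[i][j] = window[len(window) // 2]   # the median
    return out
```

    A single impulse in an otherwise uniform neighborhood never reaches the middle of the sorted window, which is why isolated salt or pepper pixels are removed outright.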

  1. Spectral-based features ranking for gamelan instruments identification using filter techniques

    Directory of Open Access Journals (Sweden)

    Diah P Wulandari

    2013-03-01

    Full Text Available In this paper, we describe an approach of spectral-based feature ranking for Javanese gamelan instruments identification using filter techniques. The model extracted the spectral-based feature set of the signal using the Short Time Fourier Transform (STFT). The rank of the features was determined using five algorithms, namely ReliefF, Chi-Squared, Information Gain, Gain Ratio, and Symmetric Uncertainty. Then, we tested the ranked features by cross validation using Support Vector Machine (SVM). The experiment showed that the Gain Ratio algorithm gave the best result; it yielded an accuracy of 98.93%.
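
    One of the five ranking criteria, Information Gain, has a compact definition that can be sketched directly (a generic illustration over discrete feature values, not the paper's implementation):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG = H(labels) - sum over values v of p(v) * H(labels | feature = v)."""
    n = len(labels)
    ig = entropy(labels)
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        ig -= (len(subset) / n) * entropy(subset)
    return ig
```

    Ranking then amounts to sorting features by this score: a feature that perfectly separates the classes scores the full label entropy, while an irrelevant one scores zero.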

  2. Growth of silicone-immobilized bacteria on polycarbonate membrane filters, a technique to study microcolony formation under anaerobic conditions.

    OpenAIRE

    Højberg, O; Binnerup, S J; Sørensen, J

    1997-01-01

    A technique was developed to study microcolony formation by silicone-immobilized bacteria on polycarbonate membrane filters under anaerobic conditions. A sudden shift to anaerobiosis was obtained by submerging the filters in medium which had been depleted of oxygen by a pure culture of bacteria. The technique was used to demonstrate that preinduction of nitrate reductase under low-oxygen conditions was necessary for nonfermenting, nitrate-respiring bacteria, e.g., Pseudomonas spp., to cope with a...

  3. The use of linear programming techniques to design optimal digital filters for pulse shaping and channel equalization

    Science.gov (United States)

    Houts, R. C.; Burlage, D. W.

    1972-01-01

    A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
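
    The minimax design idea above can be sketched as a linear program: minimize a ripple bound delta subject to the filter's amplitude response staying within delta of the ideal response on a frequency grid. The band edges and filter order below are illustrative, and scipy.optimize.linprog stands in for the period's LP solvers:

```python
import numpy as np
from scipy.optimize import linprog

def lp_fir_design(M=5, n_grid=60):
    """Minimax design of a linear-phase lowpass FIR filter by LP:
    minimize delta s.t. |sum_k a_k cos(k*w) - D(w)| <= delta on a grid,
    with passband [0, 0.3*pi] (D=1) and stopband [0.5*pi, pi] (D=0)."""
    wp = np.linspace(0.0, 0.3 * np.pi, n_grid)
    ws = np.linspace(0.5 * np.pi, np.pi, n_grid)
    w = np.concatenate([wp, ws])
    D = np.concatenate([np.ones_like(wp), np.zeros_like(ws)])
    C = np.cos(np.outer(w, np.arange(M + 1)))        # C[i, k] = cos(k * w_i)
    ones = np.ones((len(w), 1))
    # Variables x = [a_0 .. a_M, delta]; objective: minimize delta.
    A_ub = np.vstack([np.hstack([C, -ones]),         #  C a - delta <= D
                      np.hstack([-C, -ones])])       # -C a - delta <= -D
    b_ub = np.concatenate([D, -D])
    c = np.zeros(M + 2)
    c[-1] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (M + 1) + [(0, None)])
    return res.x[:-1], res.x[-1]                     # cosine coeffs, ripple
```

    The returned cosine coefficients define the zero-phase amplitude of a Type-I linear-phase filter; the same LP skeleton extends to the equalizer problem by replacing D(w) with the inverse channel response.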

  4. Harmonic Mitigation Techniques Applied to Power Distribution Networks

    Directory of Open Access Journals (Sweden)

    Hussein A. Kazem

    2013-01-01

    Full Text Available A growing number of harmonic mitigation techniques are now available including active and passive methods, and the selection of the best-suited technique for a particular case can be a complicated decision-making process. The performance of some of these techniques is largely dependent on system conditions, while others require extensive system analysis to prevent resonance problems and capacitor failure. A classification of the various available harmonic mitigation techniques is presented in this paper aimed at presenting a review of harmonic mitigation methods to researchers, designers, and engineers dealing with power distribution systems.

  5. The cubature smooth variable structure filter estimation strategy applied to a quadrotor controller

    Science.gov (United States)

    Al-Shabi, M.; Gadsden, S. A.; Wilkerson, S. A.

    2015-05-01

    Unmanned aerial systems (UAS) are becoming increasingly popular in industry, military, and social environments. A UAS that provides good operating performance and robustness to disturbances is often quite expensive and prohibitive to the general public. To improve UAS performance without affecting the overall cost, an estimation strategy can be implemented on the internal controller. The use of an estimation strategy or filter reduces the number of required sensors and the power requirement, and improves controller performance. UAS devices are highly nonlinear, and implementation of filters can be quite challenging. This paper presents the implementation of the relatively new cubature smooth variable structure filter (CSVSF) on a quadrotor controller. The results are compared with other state and parameter estimation strategies.
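
    At the core of cubature-based filters such as the CSVSF is the spherical-radial cubature rule, which propagates 2n deterministically chosen points through the nonlinear dynamics. A sketch of the point-generation step only (illustrative, not the paper's full filter):

```python
import numpy as np

def cubature_points(x, P):
    """Generate the 2n cubature points x +/- sqrt(n) * L e_i, where
    P = L L^T (Cholesky) and e_i are the unit vectors. The points have
    sample mean x and sample covariance P by construction."""
    n = len(x)
    L = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # unit directions
    return x[:, None] + L @ xi                            # shape (n, 2n)
```

    In a full filter each point is passed through the process model and the predicted mean and covariance are recovered as the sample mean and covariance of the transformed set.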

  6. Biomechanical study of the funnel technique applied in thoracic ...

    African Journals Online (AJOL)

    Background: Funnel technique is a method used for the insertion of screw into thoracic pedicle. Aim: To evaluate the biomechanical characteristics of thoracic pedicle screw placement using the Funnel technique, trying to provide biomechanical basis for clinical application of this technology. Methods: 14 functional spinal ...

  7. Nuclear and Isotopic Techniques Applied to Nutritional and ...

    African Journals Online (AJOL)

    Nuclear and isotope methods have been used in industrialized countries to enhance the sensitivity of nutrition and environmental monitoring techniques. The isotope techniques used in nutrition research are: (i) deuterium dilution to measure total body water (TBW) and body composition for evaluating nutritional status, ...

  8. NEW TECHNIQUES APPLIED IN ECONOMICS. ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Constantin Ilie

    2009-05-01

    Full Text Available The present paper aims to inform the public about the use of new techniques for the modeling, simulation and forecasting of systems from different fields of activity. One of those techniques is the Artificial Neural Network, one of the artificial in

  9. A Permutation Encoding Technique Applied to Genetic Algorithm ...

    African Journals Online (AJOL)

    In this paper, a permutation chromosome encoding scheme is proposed for obtaining solution to resource constrained project scheduling problem. The proposed chromosome coding method is applied to Genetic algorithm procedure and implemented through object oriented programming. The method is applied to a ...
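
    Permutation encodings require crossover operators that keep the activity sequence a valid permutation. A standard choice (illustrative here; the abstract does not specify the operator used) is order crossover:

```python
import random

def order_crossover(p1, p2, rng=random):
    """Order crossover (OX): copy a random slice from parent 1, then fill
    the remaining positions with the missing genes in parent-2 order,
    so the child is always a valid permutation."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    fill = [g for g in p2 if g not in child[i:j + 1]]
    k = 0
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill[k]
            k += 1
    return child
```

    For resource-constrained scheduling, each gene would index an activity and a schedule builder would decode the permutation into start times subject to precedence and resource limits.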

  10. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    Science.gov (United States)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, global warming and questions on climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work is due to the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters and state-of-the-art numerical analysis tools.
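
    A minimal example of the kind of Kalman-filter post-processing alluded to above is a scalar filter that tracks the systematic bias of a forecast and subtracts it out; the noise variances q and r below are illustrative, not from the paper:

```python
def kalman_bias_filter(forecasts, observations, q=0.01, r=1.0):
    """Scalar Kalman filter tracking forecast bias b_t:
    state model    b_t = b_{t-1} + w,   Var(w) = q
    measurement    y_t = obs - fc = b_t + v,   Var(v) = r
    Returns the bias-corrected forecasts and the final bias estimate."""
    b, p = 0.0, 1.0
    corrected = []
    for fc, obs in zip(forecasts, observations):
        corrected.append(fc + b)   # correct with the current bias estimate
        p += q                     # predict step
        k = p / (p + r)            # Kalman gain
        b += k * ((obs - fc) - b)  # update with the innovation
        p *= (1.0 - k)
    return corrected, b
```

    The same recursion, with vector states, underlies the bias-elimination step used when coupling statistical filters to numerical weather prediction output.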

  11. APPLYING ARTIFICIAL INTELLIGENCE TECHNIQUES TO HUMAN-COMPUTER INTERFACES

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.

    1988-01-01

    A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone...... and data networks. Three artificial intelligence (AI) techniques used in UIMS are discussed, namely, frame representation, object-oriented programming languages, and rule-based systems. The UIMS architecture is presented, and the structure of the UIMS is explained in terms of the AI techniques....

  12. Analysis of design parameters for crosstalk cancellation filters applied to different loudspeaker configurations

    DEFF Research Database (Denmark)

    Lacouture Parodi, Yesenia; Rubak, Per

    2011-01-01

    simulated for each case. In order to obtain optimum parameters the bandwidths, filter lengths and regularization constants were varied for each loudspeaker configuration and each method and a description is given of the simulations performed and the results obtained. The simulation results are documented...

  13. Sharpening minimum-phase filters

    Science.gov (United States)

    Jovanovic Dolecek, G.; Fernandez-Vazquez, A.

    2013-02-01

    The minimum-phase requirement restricts the filter to have all its zeros on or inside the unit circle. As a result, the filter does not have a linear phase. It is well known that the sharpening technique can be used for simultaneous improvement of both the pass-band and stop-band of linear-phase FIR filters, and cannot be used for other types of filters. In this paper we demonstrate that the sharpening technique can also be applied to minimum-phase filters after a small modification. The method is illustrated with one practical example of design.
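
    The classic sharpening polynomial referred to above is H_s = 3H^2 - 2H^3, which pushes response values near 1 closer to 1 and values near 0 closer to 0. For a linear-phase filter it can be realized by convolving impulse responses and aligning group delays, as in this sketch (illustrated on a length-5 moving average; the paper's minimum-phase modification is not reproduced here):

```python
import numpy as np

def sharpen(h):
    """Kaiser-Hamming sharpening H_s = 3H^2 - 2H^3 for a linear-phase
    filter h, realized with convolutions; the h^2 term is delayed so both
    branches share the group delay of h^3 before they are combined."""
    h2 = np.convolve(h, h)
    h3 = np.convolve(h2, h)
    pad = (len(h3) - len(h2)) // 2   # delay needed to align h2 with h3
    out = -2.0 * h3
    out[pad:pad + len(h2)] += 3.0 * h2
    return out
```

    Applied twice or with higher-order polynomials, the same cascade-of-copies idea trades filter length for simultaneous pass-band flattening and stop-band deepening.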

  14. Transdermal penetration of topically applied fluorescent dyes with and without the influence of Water Filtered Infrared-A-Radiation

    OpenAIRE

    Grone, Diego

    2010-01-01

    Optical methods were used to investigate the influence of water filtered infrared A radiation (wIRA) on the dermatopharmacokinetics of topically applied substances. The penetration profiles of the hydrophilic dye fluoresceine and the lipophilic dye curcumin in a standard o/w emulsion were determined by tape stripping, in combination with spectroscopic measurements. Additionally, the penetration was investigated in vivo by laser scanning microscopy. Three different protocols (mode A-C) were us...

  15. Applying decision-making techniques to Civil Engineering Projects

    Directory of Open Access Journals (Sweden)

    Fam F. Abdel-malak

    2017-12-01

    Full Text Available Multi-Criteria Decision-Making (MCDM) techniques are found to be useful tools in project managers' hands for overcoming decision-making (DM) problems in Civil Engineering Projects (CEPs). The main contribution of this paper is selecting and studying popular MCDM techniques that use different and wide ranges of data types in CEPs. A detailed study, including advantages and pitfalls, of using the Analytic Hierarchy Process (AHP) and the Fuzzy Technique for Order of Preference by Similarity to Ideal Solution (Fuzzy TOPSIS) is introduced. These two techniques are selected for the purpose of forming a package that covers most available data types in CEPs. The results indicated that AHP has a structure which simplifies complicated problems, while Fuzzy TOPSIS uses the advantages of linguistic variables to solve the issue of undocumented data and ill-defined problems. Furthermore, AHP is a simple technique that depends on pairwise comparisons of factors and natural attributes, and is preferable for widely spread hierarchies. On the other hand, Fuzzy TOPSIS needs more information but works well for one-tier decision trees and shows more flexibility to work in fuzzy environments. The two techniques can be integrated and combined in a new module to support most of the decisions required in CEPs. Keywords: Decision-making, AHP, Fuzzy TOPSIS, CBA, Civil Engineering Projects
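
    The AHP pairwise-comparison step mentioned above reduces to deriving a priority vector from a comparison matrix. A common approximation is the geometric-mean (row) method together with Saaty's consistency check, sketched here for small matrices (illustrative, not the paper's implementation):

```python
import numpy as np

def ahp_priorities(A):
    """AHP priority vector via the geometric-mean method, plus the
    consistency ratio CR = CI / RI using Saaty's random index (n <= 5)."""
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)   # row geometric means
    w /= w.sum()                          # normalized priorities
    lam = ((A @ w) / w).mean()            # principal eigenvalue estimate
    ci = (lam - n) / (n - 1)              # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.9, 5: 1.12}[n]
    cr = ci / ri if ri else 0.0
    return w, cr
```

    A CR below about 0.1 is conventionally taken to mean the pairwise judgments are consistent enough to use.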

  16. Object oriented programming techniques applied to device access and control

    International Nuclear Information System (INIS)

    Goetz, A.; Klotz, W.D.; Meyer, J.

    1992-01-01

    In this paper a model, called the device server model, has been presented for solving the problem of device access and control faced by all control systems. Object Oriented Programming techniques were used to achieve a powerful yet flexible solution. The model provides a solution to the problem which hides device dependencies. It defines a software framework which has to be respected by implementors of device classes - this is very useful for developing groupware. The decision to implement remote access in the root class means that device servers can be easily integrated in a distributed control system. A lot of the advantages and features of the device server model are due to the adoption of OOP techniques. The main conclusions that can be drawn from this paper are that (1) the device access and control problem is well suited to being solved with OOP techniques, and (2) OOP techniques offer a distinct advantage over traditional programming techniques for solving the device access problem. (J.P.N.)

  17. Applying motivational interviewing techniques to palliative care communication.

    Science.gov (United States)

    Pollak, Kathryn I; Childers, Julie W; Arnold, Robert M

    2011-05-01

    Palliative care relies heavily on communication. Although some guidelines do address difficult communication, less is known about how to handle conversations with patients who express ambivalence or resistance to such care. Clinicians also struggle with how to support patient autonomy when they disagree with patient choices. Motivational Interviewing (MI) techniques may help address these responses. Specifically, MI techniques such as reflective statements and summarizing can help reduce a patient's resistance, resolve patient ambivalence, and support patient autonomy. Not all the MI techniques are applicable, however, in part because palliative care clinicians do not guide patients to make particular choices but, instead, help patients make choices that are consistent with patient values. Some elements from MI can be used to improve the quality and efficacy of palliative care conversations.

  18. Rare event techniques applied in the Rasmussen study

    International Nuclear Information System (INIS)

    Vesely, W.E.

    1977-01-01

    The Rasmussen Study estimated public risks from commercial nuclear power plant accidents, and therefore the statistics of rare events had to be treated. Two types of rare events were specifically handled: those which were probabilistically rare and those which were statistically rare. Four techniques were used to estimate probabilities of rare events. These techniques were aggregating data samples, discretizing "continuous" events, extrapolating from minor to catastrophic severities, and decomposing events using event trees and fault trees. In aggregating or combining data the goal was to enlarge the data sample so that the rare event was no longer rare, i.e., so that the enlarged data sample contained one or more occurrences of the event of interest. This aggregation gave rise to random variable treatments of failure rates, occurrence frequencies, and other characteristics estimated from data. This random variable treatment can be interpreted as comparable to an empirical Bayes technique or a Bayesian technique. In the discretizing-events technique, events of a detailed nature were grouped together into a grosser event for purposes of analysis as well as for data collection. The treatment of data characteristics as random variables helped to account for the uncertainties arising from this discretizing. In the severity extrapolation technique a severity variable was associated with each event occurrence for the purpose of predicting probabilities of catastrophic occurrences. Tail behaviors of distributions therefore needed to be considered. Finally, event trees and fault trees were used to express accident occurrences and system failures in terms of more basic events for which data existed. Common mode failures and general dependencies therefore needed to be treated. 2 figures

  19. Bioremediation techniques applied to aqueous media contaminated with mercury.

    Science.gov (United States)

    Velásquez-Riaño, Möritz; Benavides-Otaya, Holman D

    2016-12-01

    In recent years, the environmental and human health impacts of mercury contamination have driven the search for alternative, eco-efficient techniques different from the traditional physicochemical methods for treating this metal. One of these alternative processes is bioremediation. A comprehensive analysis of the different variables that can affect this process is presented. It focuses on determining the effectiveness of different techniques of bioremediation, with a specific consideration of three variables: the removal percentage, time needed for bioremediation and initial concentration of mercury to be treated in an aqueous medium.

  20. Tsunami Modeling and Prediction Using a Data Assimilation Technique with Kalman Filters

    Science.gov (United States)

    Barnier, G.; Dunham, E. M.

    2016-12-01

    Earthquake-induced tsunamis cause severe damage along densely populated coastlines. It is difficult to predict and anticipate tsunami waves in advance, but if the earthquake occurs far enough from the coast, there may be enough time to evacuate the zones at risk. Therefore, any real-time information on the tsunami wavefield (as it propagates towards the coast) is extremely valuable for early warning systems. After the 2011 Tohoku earthquake, a dense tsunami-monitoring network (S-net) based on cabled ocean-bottom pressure sensors has been deployed along the Pacific coast in Northeastern Japan. Maeda et al. (GRL, 2015) introduced a data assimilation technique to reconstruct the tsunami wavefield in real time by combining numerical solution of the shallow water wave equations with additional terms penalizing the numerical solution for not matching observations. The penalty or gain matrix is determined through optimal interpolation and is independent of time. Here we explore a related data assimilation approach using the Kalman filter method to evolve the gain matrix. While more computationally expensive, the Kalman filter approach potentially provides more accurate reconstructions. We test our method on a 1D tsunami model derived from the Kozdon and Dunham (EPSL, 2014) dynamic rupture simulations of the 2011 Tohoku earthquake. For appropriate choices of model and data covariance matrices, the method reconstructs the tsunami wavefield prior to wave arrival at the coast. We plan to compare the Kalman filter method to the optimal interpolation method developed by Maeda et al. (GRL, 2015) and then to implement the method for 2D.
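
    The evolving-gain idea can be sketched in a few lines. This is a hedged toy, not the authors' model: the shallow-water dynamics are replaced by periodic advection, and the grid size, sensor spacing, and covariances (Q, R) are invented for illustration.

```python
import numpy as np

# Kalman filter assimilating sparse "pressure gauge" readings into a
# propagating 1D wavefield (toy dynamics: advect one cell per step).
n = 50
rng = np.random.default_rng(0)
truth = np.exp(-0.5 * ((np.arange(n) - 10) / 3.0) ** 2)  # "true" wave pulse

F = np.roll(np.eye(n), 1, axis=0)     # periodic advection operator
obs_idx = np.arange(0, n, 5)          # sparse sensor locations
H = np.eye(n)[obs_idx]                # observation operator

x = np.zeros(n)                       # initial guess: flat sea surface
P = np.eye(n)                         # state covariance
Q = 1e-4 * np.eye(n)                  # model error covariance (invented)
R = 1e-2 * np.eye(obs_idx.size)       # observation error covariance (invented)

for _ in range(30):
    truth = F @ truth                                   # evolve true field
    y = H @ truth + 0.01 * rng.standard_normal(obs_idx.size)
    x = F @ x                                           # forecast step
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # gain evolves with P
    x = x + K @ (y - H @ x)                             # analysis step
    P = (np.eye(n) - K @ H) @ P
```

    With a static gain (optimal interpolation), the K line would be computed once offline; evolving P each step is the extra cost the abstract refers to.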

  1. Techniques applied in design optimization of parallel manipulators

    CSIR Research Space (South Africa)

    Modungwa, D

    2011-11-01

    Full Text Available the process of optimization a cumbersome and time-consuming endeavour, especially when the variables are diverse and objective functions are excessively complex. Thus, several techniques devised by researchers to solve the problem are reviewed in this paper....

  2. Flash radiographic technique applied to fuel injector sprays

    International Nuclear Information System (INIS)

    Vantine, H.C.

    1977-01-01

    A flash radiographic technique, using 50 ns exposure times, was used to study the pattern and density distribution of a fuel injector spray. The experimental apparatus and method are described. An 85 kVp flash x-ray generator, designed and fabricated at the Lawrence Livermore Laboratory, is utilized. Radiographic images, recorded on standard x-ray films, are digitized and computer processed

  3. Ion backscattering techniques applied in materials science research

    International Nuclear Information System (INIS)

    Sood, D.K.

    1978-01-01

    The applications of Ion Backscattering Technique (IBT) to material analysis have expanded rapidly during the last decade. It is now regarded as an analysis tool indispensable for a versatile materials research program. The technique consists of simply shooting a beam of monoenergetic ions (usually 4He+ ions at about 2 MeV) onto a target, and measuring their energy distribution after backscattering at a fixed angle. Simple Rutherford scattering analysis of the backscattered ion spectrum yields information on the mass, the absolute amount and the depth profile of elements present up to a few microns of the target surface. The technique is nondestructive, quick, quantitative and the only known method of analysis which gives quantitative results without recourse to calibration standards. Its major limitations are the inability to separate elements of similar mass and a complete absence of chemical-binding information. A typical experimental set up and spectrum analysis have been described. Examples, some of them based on the work at the Bhabha Atomic Research Centre, Bombay, have been given to illustrate the applications of this technique to semiconductor technology, thin film materials science and nuclear energy materials. Limitations of IBT have been illustrated and a few remedies to partly overcome these limitations are presented. (auth.)
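
    The mass sensitivity described above follows from elastic-scattering kinematics. Below is a sketch of the standard textbook kinematic-factor formula; the 2 MeV 4He beam matches the abstract, while the 170° detector angle is an assumed typical value, not taken from the record.

```python
import math

def kinematic_factor(m1, m2, theta_deg):
    """Ratio K of backscattered to incident ion energy for elastic scattering
    of a projectile of mass m1 off a target atom of mass m2 at lab angle theta."""
    t = math.radians(theta_deg)
    root = math.sqrt(m2 ** 2 - (m1 * math.sin(t)) ** 2)
    return ((root + m1 * math.cos(t)) / (m1 + m2)) ** 2

E0 = 2.0        # MeV, incident 4He energy
theta = 170.0   # assumed detector angle (degrees)
for name, m2 in [("Si", 28.0), ("Cu", 63.5), ("Au", 197.0)]:
    K = kinematic_factor(4.0, m2, theta)
    print(f"{name}: K = {K:.3f}, backscattered energy = {K * E0:.2f} MeV")
```

    Heavier targets return more energy, which is how element masses are separated; neighboring masses give nearly equal K, illustrating the similar-mass limitation the abstract notes.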

  4. X-diffraction technique applied for nano system metrology

    International Nuclear Information System (INIS)

    Kuznetsov, Alexei Yu.; Machado, Rogerio; Robertis, Eveline de; Campos, Andrea P.C.; Archanjo, Braulio S.; Gomes, Lincoln S.; Achete, Carlos A.

    2009-01-01

    Applications of nanomaterials are growing fast in all industrial sectors, creating a strong need for nanometrology and standardization in the nanomaterials area. The great potential of the X-ray diffraction technique in this field is illustrated with the examples of metals, metal oxides and pharmaceuticals

  5. Consulting with Parents: Applying Family Systems Concepts and Techniques.

    Science.gov (United States)

    Mullis, Fran; Edwards, Dana

    2001-01-01

    This article describes family systems concepts and techniques that school counselors, as consultants, can use to better understand the family system. The concepts are life cycle transitions and extrafamilial influences, extended family influences, boundaries, parental hierarchy and power, and triangulation. (Contains 39 references.) (GCP)

  6. Eddy current technique applied to automated tube profilometry

    International Nuclear Information System (INIS)

    Dobbeni, D.; Melsen, C. van

    1982-01-01

    The use of eddy current methods in the first totally automated pre-service inspection of the internal diameter of PWR steam generator tubes is described. The technique was developed at Laborelec, the Belgian Laboratory of the Electricity Supply Industry. Details are given of the data acquisition system and of the automated manipulator. Representative tube profiles are illustrated. (U.K.)

  7. Applying the digital-image-correlation technique to measure the ...

    Indian Academy of Sciences (India)

    4.2 Image analysis. The DIC technique is used to analyse the column deformation. After the position of every mark is traced, two parallel observation lines on the surface of the column (as shown in figure 8) are chosen. There are 181 equally spaced points on each line. The positions of these points are calculated using B-Spline ...

  8. Analysis of the Mechanism of Gram Differentiation by Use of a Filter-Paper Chromatographic Technique.

    Science.gov (United States)

    Bartholomew, J W; Cromwell, T; Gan, R

    1965-09-01

    Bartholomew, J. W. (University of Southern California, Los Angeles), Thomas Cromwell, and Richard Gan. Analysis of the mechanism of Gram differentiation by use of a filter-paper chromatographic technique. J. Bacteriol. 90:766-777. 1965.-Data are presented which demonstrate that the mechanism of gram-positivity could not be due solely to factors such as a single, specific gram-positive substrate, specific affinities of crystal violet for certain cellular components, a specific crystal violet-iodine-substrate complex, or to any specific characteristic of the dye, iodine, or solvent molecules. Ruptured cells of gram-positive organisms stain gram-negatively when subjected to a standard Gram-stain procedure. However, when stained fragments of broken cells were deposited in thick layers on the surface of filter-paper strips and exposed to decolorizers, the rate of dye release correlated with the Gram characteristic of the intact cell. Therefore, the intact cell in itself is not an absolute requirement for Gram differentiation. The data are interpreted as indicating that the mechanism of Gram differentiation primarily involves the rate of permeation of molecules (dye, iodine, solvent) through the interstitial spaces of cell-wall material.

  9. On the effects of quantization on mismatched pulse compression filters designed using L-p norm minimization techniques

    CSIR Research Space (South Africa)

    Cilliers, Jacques E

    2007-10-01

    Full Text Available In [1] the authors introduced a technique for generating mismatched pulse compression filters for linear frequency chirp signals. The technique minimizes the sum of the pulse compression sidelobes in an Lp-norm sense. It was shown that extremely...

  10. Prediction of load threshold of fibre-reinforced laminated composite panels subjected to low velocity drop-weight impact using efficient data filtering techniques

    Directory of Open Access Journals (Sweden)

    Umar Farooq

    2015-01-01

    Full Text Available This work is concerned with physical testing of carbon fibrous laminated composite panels with low velocity drop-weight impacts from flat and round nose impactors. Eight, sixteen, and twenty-four ply panels were considered. Non-destructive damage inspections of tested specimens were conducted to approximate impact-induced damage. Recorded data were correlated to load–time, load–deflection, and energy–time history plots to interpret impact induced damage. Data filtering techniques were also applied to the noisy data that is unavoidably generated due to limitations of testing and logging systems. Built-in, statistical, and numerical filters effectively predicted load thresholds for eight and sixteen ply laminates. However, flat nose impact of twenty-four ply laminates produced clipped data that can only be de-noised using oscillatory algorithms. Data filtering and extrapolation of such data have received rare attention in the literature and need to be investigated. The present work demonstrated filtering and extrapolation of the clipped data using a Fast Fourier Convolution algorithm to predict load thresholds. Selected results were compared to the damage zones identified with C-scan and acceptable agreement has been observed. Based on the results it is proposed that applying advanced data filtering and analysis methods to data collected by the available resources effectively enhanced data interpretation without resorting to additional resources. The methodology could be useful for efficient and reliable data analysis and impact-induced damage prediction for data from similar cases.
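
    The basic idea of frequency-domain de-noising of a load trace can be sketched generically. This is not the article's Fast Fourier Convolution treatment of clipped data, only a plain FFT low-pass filter on a synthetic signal; the "load" pulse, noise level, and cutoff bin are all invented.

```python
import numpy as np

# Synthetic impact-load trace: a decaying oscillation plus sensor noise.
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 3 * t) * np.exp(-3 * t)   # idealized load pulse
rng = np.random.default_rng(1)
noisy = clean + 0.2 * rng.standard_normal(t.size)    # logged (noisy) signal

spec = np.fft.rfft(noisy)
spec[20:] = 0.0                        # zero all bins above the cutoff
filtered = np.fft.irfft(spec, n=t.size)
```

    In practice the cutoff must be chosen against the impact event's expected frequency content, or signal features are smoothed away along with the noise.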

  11. Machine-learning techniques applied to antibacterial drug discovery.

    Science.gov (United States)

    Durrant, Jacob D; Amaro, Rommie E

    2015-01-01

    The emergence of drug-resistant bacteria threatens to revert humanity back to the preantibiotic era. Even now, multidrug-resistant bacterial infections annually result in millions of hospital days, billions in healthcare costs, and, most importantly, tens of thousands of lives lost. As many pharmaceutical companies have abandoned antibiotic development in search of more lucrative therapeutics, academic researchers are uniquely positioned to fill the pipeline. Traditional high-throughput screens and lead-optimization efforts are expensive and labor intensive. Computer-aided drug-discovery techniques, which are cheaper and faster, can accelerate the identification of novel antibiotics, leading to improved hit rates and faster transitions to preclinical and clinical testing. The current review describes two machine-learning techniques, neural networks and decision trees, that have been used to identify experimentally validated antibiotics. We conclude by describing the future directions of this exciting field. © 2015 John Wiley & Sons A/S.

  12. Applying program comprehension techniques to Karel robot programs

    OpenAIRE

    Oliveira, Nuno; Henriques, Pedro; Cruz, Daniela; Pereira, Maria João; Mernik, Marjan; Kosar, Tomaz; Crepinsek, Matej

    2009-01-01

    Abstract—In the context of program understanding, a challenging research topic is to learn how techniques and tools for the comprehension of General-Purpose Languages (GPLs) can be used or adjusted to the understanding of Domain-Specific Languages (DSLs). Since DSLs are tailored for the description of problems within a specific domain, it becomes easier to improve these tools with specific visualizations (at a higher abstraction level, closer to the problem level) in order to understand the ...

  13. Electrochemical Techniques Applied to Studies of Microbiologically Influenced Corrosion (MIC)

    Science.gov (United States)

    1992-01-01

    corrosion (MIC). Applications include evaluation of MIC of metals exposed to seawater, fresh water, demineralized water, process chemicals, food stuffs, soils, aircraft fuels, human plasma, and sewage (CONICET-NSF, La Plata, Argentina). Progress can only be made if surface analytical techniques are

  14. Applying the GNSS Volcanic Ash Plume Detection Technique to Consumer Navigation Receivers

    Science.gov (United States)

    Rainville, N.; Palo, S.; Larson, K. M.

    2017-12-01

    Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) rely on predictably structured and constant power RF signals to fulfill their primary use for navigation and timing. When the received strength of GNSS signals deviates from the expected baseline, it is typically due to a change in the local environment. This can occur when signal reflections from the ground are modified by changes in snow or soil moisture content, as well as by attenuation of the signal from volcanic ash. This effect allows GNSS signals to be used as a source for passive remote sensing. Larson et al. (2017) have developed a detection technique for volcanic ash plumes based on the attenuation seen at existing geodetic GNSS sites. Since these existing networks are relatively sparse, this technique has been extended to use lower cost consumer GNSS receiver chips to enable higher density measurements of volcanic ash. These low-cost receiver chips have been integrated into a fully stand-alone sensor, with independent power, communications, and logging capabilities as part of a Volcanic Ash Plume Receiver (VAPR) network. A mesh network of these sensors transmits data to a local base-station which then streams the data real-time to a web accessible server. Initial testing of this sensor network has uncovered that a different detection approach is necessary when using consumer GNSS receivers and antennas. The techniques to filter and process the lower quality data from consumer receivers will be discussed and will be applied to initial results from a functioning VAPR network installation.

  15. Continuous updating of a coupled reservoir-seismic model using an ensemble Kalman filter technique

    Energy Technology Data Exchange (ETDEWEB)

    Skjervheim, Jan-Arild

    2007-07-01

    This work presents the development of a method based on the ensemble Kalman filter (EnKF) for continuous reservoir model updating with respect to the combination of production data, 3D seismic data and time-lapse seismic data. The reservoir-seismic model system consists of a commercial reservoir simulator coupled to existing rock physics and seismic modelling software. The EnKF provides an ideal setting for real-time updating and prediction in reservoir simulation models, and has been applied to synthetic models and real field cases from the North Sea. In the EnKF method, static parameters such as porosity and permeability, and dynamic variables such as fluid saturations and pressure, are updated in the reservoir model at each step as data become available. In addition, we have updated a lithology parameter (clay ratio) which is linked to the rock physics model, and the fracture density in a synthetic fractured reservoir. In the EnKF experiments we have assimilated various types of production and seismic data. Gas oil ratio (GOR), water cut (WCT) and bottom-hole pressure (BHP) are used in the data assimilation. Furthermore, inverted seismic data, such as Poisson's ratio and acoustic impedance, and seismic waveform data have been assimilated. In reservoir applications seismic data may introduce a large amount of data into the assimilation schemes, and the computational cost becomes high. In this project efficient EnKF schemes are used to handle such large datasets, where challenging aspects such as the inversion of a large covariance matrix and potential loss of rank are considered. Time-lapse seismic data may be difficult to assimilate since they are time-difference data, i.e. data which are related to the model variable at two or more time instances. Here we have presented a general sequential Bayesian formulation which incorporates time-difference data, and we show that the posterior distribution includes both a filter and a smoother solution. Further, we show
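
    The EnKF analysis step itself is compact. The sketch below is a toy with invented dimensions, observation operator, and data vector; the real application wraps this update around a coupled reservoir-seismic forward model.

```python
import numpy as np

# One EnKF analysis step with perturbed observations.
rng = np.random.default_rng(2)
n_ens, n_state, n_obs = 100, 5, 3

X = rng.standard_normal((n_state, n_ens))    # prior ensemble (columns = members)
H = np.zeros((n_obs, n_state))
H[0, 0] = H[1, 2] = H[2, 4] = 1.0            # observe three state components
R = 0.1 * np.eye(n_obs)                      # observation error covariance
y = np.array([1.0, -0.5, 0.3])               # invented "production data" vector

A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
C = A @ A.T / (n_ens - 1)                    # sample covariance from the ensemble
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R) # Kalman gain
Y = y[:, None] + np.sqrt(0.1) * rng.standard_normal((n_obs, n_ens))  # perturbed obs
Xa = X + K @ (Y - H @ X)                     # updated (analysis) ensemble
```

    Static parameters (porosity, permeability, clay ratio) are handled by stacking them into the state vector, so the same update adjusts them alongside saturations and pressure.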

  16. Canvas and cosmos: Visual art techniques applied to astronomy data

    Science.gov (United States)

    English, Jayanne

    Bold color images from telescopes act as extraordinary ambassadors for research astronomers because they pique the public’s curiosity. But are they snapshots documenting physical reality? Or are we looking at artistic spacescapes created by digitally manipulating astronomy images? This paper provides a tour of how original black and white data, from all regimes of the electromagnetic spectrum, are converted into the color images gracing popular magazines, numerous websites, and even clothing. The history and method of the technical construction of these images is outlined. However, the paper focuses on introducing the scientific reader to visual literacy (e.g. human perception) and techniques from art (e.g. composition, color theory) since these techniques can produce not only striking but politically powerful public outreach images. When created by research astronomers, the cultures of science and visual art can be balanced and the image can illuminate scientific results sufficiently strongly that the images are also used in research publications. Included are reflections on how they could feedback into astronomy research endeavors and future forms of visualization as well as on the relevance of outreach images to visual art. (See the color online PDF version at http://dx.doi.org/10.1142/S0218271817300105; the figures can be enlarged in PDF viewers.)

  17. Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS)

    Directory of Open Access Journals (Sweden)

    Stephan P. Lovstedt

    2008-01-01

    Full Text Available The FXLMS algorithm, used extensively in active noise control (ANC, exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS algorithm. For that algorithm, magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix, while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems with implementing the EE-FXLMS algorithm arising from finite resolution of sampled systems. Experimental control results using the original secondary path model, and a modified secondary path model for both the previous implementation of EE-FXLMS and the genetic algorithm implementation are compared.
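
    For orientation, here is a hedged toy sketch of the baseline FXLMS update (not the eigenvalue-equalized variant) cancelling a single tone; the secondary-path taps, tone parameters, and step size are all invented for illustration.

```python
import numpy as np

fs, f0, N = 8000, 200, 4000
n = np.arange(N)
x = np.sin(2 * np.pi * f0 * n / fs)              # reference signal
d = 0.8 * np.sin(2 * np.pi * f0 * n / fs + 0.3)  # disturbance at the error mic

s = np.array([0.5, 0.3])   # true secondary path (short FIR, invented)
s_hat = s.copy()           # assume a perfect secondary-path estimate
L = 16                     # adaptive (control) filter length
w = np.zeros(L)
mu = 0.01                  # LMS step size

xbuf = np.zeros(L)         # recent reference samples
ybuf = np.zeros(s.size)    # recent control outputs (input to the path)
fxbuf = np.zeros(L)        # recent filtered-reference samples
e_hist = np.zeros(N)

for i in range(N):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = w @ xbuf                       # anti-noise output
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[i] + s @ ybuf                # residual at the error mic
    fx = s_hat @ xbuf[:s_hat.size]     # reference filtered through s_hat
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
    w -= mu * e * fxbuf                # filtered-x LMS update
    e_hist[i] = e
```

    The frequency-dependent convergence the EE variant targets shows up once x and d contain several tones: each tone converges at a rate set by its eigenvalue of the filtered-x autocorrelation matrix.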

  18. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    Science.gov (United States)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
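
    The recursive-least-squares ingredient can be sketched on a synthetic signal. This illustrates only RLS with a forgetting factor fitting an invented synchronous (1x) vibration model y = a cos(wt) + b sin(wt); the actual method couples such estimation with a Kalman filter and a SEREP-reduced rotor model.

```python
import numpy as np

rng = np.random.default_rng(3)
w_rot = 2 * np.pi * 25          # rotor speed 25 Hz (assumed)
dt = 1e-3                       # sample interval (assumed)
a_true, b_true = 0.7, -0.4      # unknown unbalance components (invented)

theta = np.zeros(2)             # RLS estimate of [a, b]
P = 1e3 * np.eye(2)             # large initial covariance (diffuse prior)
lam = 0.995                     # forgetting factor

for k in range(2000):
    t = k * dt
    phi = np.array([np.cos(w_rot * t), np.sin(w_rot * t)])  # regressor
    y = a_true * phi[0] + b_true * phi[1] + 0.05 * rng.standard_normal()
    g = P @ phi / (lam + phi @ P @ phi)     # RLS gain
    theta += g * (y - phi @ theta)          # update estimate
    P = (P - np.outer(g, phi @ P)) / lam    # update covariance

amp = np.hypot(*theta)                    # unbalance response amplitude
phase = np.arctan2(-theta[1], theta[0])   # phase angle of the 1x component
```

    The forgetting factor lam trades tracking speed against noise sensitivity, which is why the abstract reports its effect alongside the process noise covariance.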

  19. Active Learning Techniques Applied to an Interdisciplinary Mineral Resources Course.

    Science.gov (United States)

    Aird, H. M.

    2015-12-01

    An interdisciplinary active learning course was introduced at the University of Puget Sound entitled 'Mineral Resources and the Environment'. Various formative assessment and active learning techniques that have been effective in other courses were adapted and implemented to improve student learning, increase retention and broaden knowledge and understanding of course material. This was an elective course targeted towards upper-level undergraduate geology and environmental majors. The course provided an introduction to the mineral resources industry, discussing geological, environmental, societal and economic aspects, legislation and the processes involved in exploration, extraction, processing, reclamation/remediation and recycling of products. Lectures and associated weekly labs were linked in subject matter; relevant readings from the recent scientific literature were assigned and discussed in the second lecture of the week. Peer-based learning was facilitated through weekly reading assignments with peer-led discussions and through group research projects, in addition to in-class exercises such as debates. Writing and research skills were developed through student groups designing, carrying out and reporting on their own semester-long research projects around the lasting effects of the historical Ruston Smelter on the biology and water systems of Tacoma. The writing of their mini grant proposals and final project reports was carried out in stages to allow for feedback before the deadline. Speakers from industry were invited to share their specialist knowledge as guest lecturers, and students were encouraged to interact with them, with a view to employment opportunities. Formative assessment techniques included jigsaw exercises, gallery walks, placemat surveys, think pair share and take-home point summaries. 
Summative assessment included discussion leadership, exams, homework, group projects, in-class exercises, field trips, and pre-discussion reading exercises.

  20. Innovative Visualization Techniques applied to a Flood Scenario

    Science.gov (United States)

    Falcão, António; Ho, Quan; Lopes, Pedro; Malamud, Bruce D.; Ribeiro, Rita; Jern, Mikael

    2013-04-01

    The large and ever-increasing amounts of multi-dimensional, time-varying and geospatial digital information from multiple sources represent a major challenge for today's analysts. We present a set of visualization techniques that can be used for the interactive analysis of geo-referenced and time sampled data sets, providing an integrated mechanism and that aids the user to collaboratively explore, present and communicate visually complex and dynamic data. Here we present these concepts in the context of a 4 hour flood scenario from Lisbon in 2010, with data that includes measures of water column (flood height) every 10 minutes at a 4.5 m x 4.5 m resolution, topography, building damage, building information, and online base maps. Techniques we use include web-based linked views, multiple charts, map layers and storytelling. We explain two of these in more detail that are not currently in common use for visualization of data: storytelling and web-based linked views. Visual storytelling is a method for providing a guided but interactive process of visualizing data, allowing more engaging data exploration through interactive web-enabled visualizations. Within storytelling, a snapshot mechanism helps the author of a story to highlight data views of particular interest and subsequently share or guide others within the data analysis process. This allows a particular person to select relevant attributes for a snapshot, such as highlighted regions for comparisons, time step, class values for colour legend, etc. and provide a snapshot of the current application state, which can then be provided as a hyperlink and recreated by someone else. Since data can be embedded within this snapshot, it is possible to interactively visualize and manipulate it. 
The second technique, web-based linked views, uses multiple windows that respond interactively to user selections, so that when an object is selected and changed in one window, it automatically updates in all the other

  1. Neoliberal Optimism: Applying Market Techniques to Global Health.

    Science.gov (United States)

    Mei, Yuyang

    2017-01-01

    Global health and neoliberalism are becoming increasingly intertwined as organizations utilize markets and profit motives to solve the traditional problems of poverty and population health. I use field work conducted over 14 months in a global health technology company to explore how the promise of neoliberalism re-envisions humanitarian efforts. In this company's vaccine refrigerator project, staff members expect their investors and their market to allow them to achieve scale and develop accountability to their users in developing countries. However, the translation of neoliberal techniques to the global health sphere falls short of the ideal, as profits are meager and purchasing power remains with donor organizations. The continued optimism in market principles amidst such a non-ideal market reveals the tenacious ideological commitment to neoliberalism in these global health projects.

  2. Dust tracking techniques applied to the STARDUST facility: First results

    Energy Technology Data Exchange (ETDEWEB)

    Malizia, A., E-mail: malizia@ing.uniroma2.it [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Camplani, M. [Grupo de Tratamiento de Imágenes, E.T.S.I de Telecomunicación, Universidad Politécnica de Madrid (Spain); Gelfusa, M. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Lupelli, I. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); EURATOM/CCFE Association, Culham Science Centre, Abingdon (United Kingdom); Richetta, M.; Antonelli, L.; Conetta, F.; Scarpellini, D.; Carestia, M.; Peluso, E.; Bellecci, C. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Salgado, L. [Grupo de Tratamiento de Imágenes, E.T.S.I de Telecomunicación, Universidad Politécnica de Madrid (Spain); Video Processing and Understanding Laboratory, Universidad Autónoma de Madrid (Spain); Gaudio, P. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy)

    2014-10-15

    Highlights: •Use of an experimental facility, STARDUST, to analyze the dust resuspension problem inside the tokamak in case of loss of vacuum accident. •PIV technique implementation to track the dust during a LOVA reproduction inside STARDUST. •Data imaging techniques to analyze dust velocity field: first results and data discussion. -- Abstract: An important issue related to future nuclear fusion reactors fueled with deuterium and tritium is the creation of large amounts of dust due to several mechanisms (disruptions, ELMs and VDEs). The dust size expected in nuclear fusion experiments (such as ITER) is on the order of microns (between 0.1 and 1000 μm). Almost the total amount of this dust remains in the vacuum vessel (VV). This radiological dust can re-suspend in case of LOVA (loss of vacuum accident) and these phenomena can cause explosions and serious damage to the health of the operators and to the integrity of the device. The authors have developed a facility, STARDUST, in order to reproduce the thermo fluid-dynamic conditions comparable to those expected inside the VV of the next generation of experiments such as ITER in case of LOVA. The dust used inside the STARDUST facility presents particle sizes and physical characteristics comparable with those created inside the VV of nuclear fusion experiments. In this facility an experimental campaign has been conducted with the purpose of tracking the dust re-suspended at low pressurization rates (comparable to those expected in case of LOVA in ITER and suggested by the General Safety and Security Report ITER-GSSR) using a fast camera with a frame rate from 1000 to 10,000 images per second. The velocity fields of the mobilized dust are derived from the imaging of a two-dimensional slice of the flow illuminated by an optically adapted laser beam. The aim of this work is to demonstrate the possibility of dust tracking by means of image processing with the objective of determining the velocity field values
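
    The core of PIV-style velocity extraction is a cross-correlation peak search between successive frames. A toy sketch on synthetic particle images follows; real PIV processing adds interrogation windows, sub-pixel peak fitting, and outlier rejection, none of which are shown here.

```python
import numpy as np

# Two synthetic "particle" frames, the second a known shift of the first.
rng = np.random.default_rng(4)
frame1 = np.zeros((64, 64))
ys, xs = rng.integers(5, 55, 20), rng.integers(5, 55, 20)
frame1[ys, xs] = 1.0                       # bright particles at random spots
dy, dx = 3, 5                              # imposed displacement (pixels)
frame2 = np.roll(np.roll(frame1, dy, axis=0), dx, axis=1)

# Circular cross-correlation via FFT; the peak gives the displacement.
corr = np.fft.ifft2(np.fft.fft2(frame2) * np.conj(np.fft.fft2(frame1))).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
est_dy, est_dx = int(peak[0]), int(peak[1])
```

    Dividing the displacement by the interframe time (0.1-1 ms at the camera rates quoted above) converts the pixel shift into a velocity estimate.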

  3. Optical Trapping Techniques Applied to the Study of Cell Membranes

    Science.gov (United States)

    Morss, Andrew J.

    Optical tweezers allow for manipulating micron-sized objects using pN level optical forces. In this work, we use an optical trapping setup to aid in three separate experiments, all related to the physics of the cellular membrane. In the first experiment, in conjunction with Brian Henslee, we use optical tweezers to allow for precise positioning and control of cells in suspension to evaluate the cell size dependence of electroporation. Theory predicts that all cells porate at a transmembrane potential VTM of roughly 1 V. The Schwan equation predicts that the transmembrane potential depends linearly on the cell radius r, thus predicting that cells should porate at threshold electric fields that go as 1/r. The threshold field required to induce poration is determined by applying a low voltage pulse to the cell and then applying additional pulses of greater and greater magnitude, checking for poration at each step using propidium iodide dye. We find that, contrary to expectations, cells do not porate at a constant value of the transmembrane potential but at a constant value of the electric field which we find to be 692 V/cm for K562 cells. Delivering precise dosages of nanoparticles into cells is of importance for assessing toxicity of nanoparticles or for genetic research. In the second experiment, we conduct nano-electroporation—a novel method of applying precise doses of transfection agents to cells—by using optical tweezers in conjunction with a confocal microscope to manipulate cells into contact with 100 nm wide nanochannels. This work was done in collaboration with Pouyan Boukany of Dr. Lee's group. The small cross sectional area of these nano channels means that the electric field within them is extremely large, 60 MV/m, which allows them to electrophoretically drive transfection agents into the cell. We find that nano electroporation results in excellent dose control (to within 10% in our experiments) compared to bulk electroporation. We also find that

  4. Time-resolved infrared spectroscopic techniques as applied to Channelrhodopsin

    Directory of Open Access Journals (Sweden)

    Eglof eRitter

    2015-07-01

    Among optogenetic tools, channelrhodopsins, the light-gated ion channels of the plasma membrane from green algae, play the most important role. Properties like channel selectivity, timing parameters or color can be influenced by the exchange of selected amino acids. Although widely used, in the field of neurosciences for example, little is still known about their photocycles and the mechanism of ion channel gating and conductance. One of the preferred methods for these studies is infrared spectroscopy, since it allows observation of proteins and their function at a molecular level and in a near-native environment. The absorption of a photon in channelrhodopsin leads to retinal isomerization within femtoseconds, the conductive states are reached on the microsecond time scale, and the return into the fully dark-adapted state may take more than minutes. To cover all these time regimes, a range of different spectroscopic approaches is necessary. This mini-review focuses on time-resolved applications of the infrared technique to study channelrhodopsins and other light-triggered proteins. We discuss the approaches with respect to their suitability to the investigation of channelrhodopsin and related proteins.

  5. [Applying DNA barcoding technique to identify menthae haplocalycis herba].

    Science.gov (United States)

    Pang, Xiaohui; Xu, Haibin; Han, Jianping; Song, Jingyuan

    2012-04-01

    To identify Menthae Haplocalycis Herba and its closely related species using the DNA barcoding technique. Total genomic DNA was isolated from Mentha canadensis and its closely related species. Nuclear DNA ITS2 sequences were amplified, and purified PCR products were sequenced. Sequence assembly and consensus sequence generation were performed using CodonCode Aligner V3.0. Kimura 2-parameter (K2P) distances were calculated using the software MEGA 5.0. Identification analyses were performed using the BLAST1, nearest distance and neighbor-joining (NJ) methods. The intra-specific genetic distances of M. canadensis ranged from 0 to 0.006, which were lower than the inter-specific genetic distances between M. canadensis and its closely related species (0.071-0.231). All three methods showed that ITS2 could discriminate M. canadensis from its closely related species correctly. The ITS2 region is an efficient barcode for identification of Menthae Haplocalycis Herba, which provides a scientific basis for fast and accurate identification of the herb.
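    As a minimal illustrative sketch (not code from the record above), the K2P distance reported by MEGA follows the formula d = -(1/2) ln(1 - 2P - Q) - (1/4) ln(1 - 2Q), where P and Q are the proportions of transition and transversion differences between two aligned sequences:

```python
import math

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def k2p_distance(seq1, seq2):
    """Kimura 2-parameter distance between two aligned DNA sequences."""
    assert len(seq1) == len(seq2), "sequences must be aligned"
    n = transitions = transversions = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue  # skip gaps and ambiguous sites
        n += 1
        if a == b:
            continue
        # transition: purine<->purine or pyrimidine<->pyrimidine
        if {a, b} <= PURINES or {a, b} <= PYRIMIDINES:
            transitions += 1
        else:
            transversions += 1
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)
```

For example, two identical barcodes give a distance of 0, matching the lower bound of the intra-specific range reported above.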

  6. Digital prototyping technique applied for redesigning plastic products

    Science.gov (United States)

    Pop, A.; Andrei, A.

    2015-11-01

    After products have been on the market for some time, they often need to be redesigned to meet new market requirements. New products are generally derived from similar but outdated products. Redesigning a product is an important part of the production and development process. The purpose of this paper is to show that using modern technology like Digital Prototyping in industry is an effective way to produce new products. This paper tries to demonstrate and highlight the effectiveness of the concept of Digital Prototyping, both in reducing the design time of a new product and in reducing the costs required for implementing this step. The results of this paper show that using Digital Prototyping techniques to design a new product from an existing mould available on the market offers a significant reduction in manufacturing time and cost. The ability to simulate and test a new product with modern CAD-CAM programs in all aspects of production (design of the 3D model, simulation of the structural resistance, analysis of the injection process and beautification) offers a helpful tool for engineers. The whole process can be carried out by one skilled engineer quickly and effectively.

  7. SPI Trend Analysis of New Zealand Applying the ITA Technique

    Directory of Open Access Journals (Sweden)

    Tommaso Caloiero

    2018-03-01

    A natural temporary imbalance of water availability, consisting of persistent lower-than-average or higher-than-average precipitation, can cause extreme dry and wet conditions that adversely impact agricultural yields, water resources, infrastructure, and human systems. In this study, dry and wet periods in New Zealand were expressed using the Standardized Precipitation Index (SPI). First, both the short-term (3 and 6 months) and the long-term (12 and 24 months) SPI were estimated, and then possible trends in the SPI values were detected by means of a new graphical technique, the Innovative Trend Analysis (ITA), which allows the trend identification of the low, medium, and high values of a series. Results show that, in every area currently subject to drought, an increase in this phenomenon can be expected. Specifically, the results of this paper highlight that agricultural regions on the eastern side of the South Island, as well as the north-eastern regions of the North Island, are the most consistently vulnerable areas. In fact, in these regions, the trend analysis mainly showed a general reduction in all the values of the SPI: that is, a tendency toward heavier droughts and weaker wet periods.
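    The core of the ITA graphical technique can be sketched in a few lines (an assumption-laden simplification, not the authors' code): the record is split into two equal sub-periods, each is sorted, and the sorted halves are compared against the 1:1 line; points above the line indicate values that increased from the first to the second sub-period.

```python
import numpy as np

def ita_halves(series):
    """Innovative Trend Analysis sketch: sorted first half vs. sorted second half."""
    x = np.asarray(series, dtype=float)
    half = len(x) // 2
    first = np.sort(x[:half])            # earlier sub-period, ascending
    second = np.sort(x[half:2 * half])   # later sub-period, ascending
    # Points (first[i], second[i]) above the 1:1 line mean the ranked values
    # increased between sub-periods; the mean difference is a crude trend indicator.
    return first, second, float(np.mean(second - first))
```

A clearly increasing series yields a positive indicator, while a stationary series scatters around zero.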

  8. Applying inversion techniques to understanding nucleus-nucleus potentials

    International Nuclear Information System (INIS)

    Mackintosh, R.S.; Cooper, S.G.

    1996-01-01

    The iterative-perturbative (IP) inversion algorithm makes it possible to determine, essentially uniquely, the complex potential, including the spin-orbit component, for spin-half particles given the elastic scattering S-matrix S_lj. We here report an extension of the method to the determination of energy-dependent potentials V(r,E) defined over an energy range for which S_lj(E) are provided. This is a natural development of the IP algorithm, which has previously been applied to fixed-energy, fixed-partial-wave and the intermediate mixed-case inversion. The energy range can include negative energies, i.e. V(r,E) can reproduce bound state energies. It can also fit the effective range parameter for low energy scattering. We briefly define the classes of cases which can be studied, outline the IP method itself and briefly review the range of applications. We show the power of the method by presenting the nucleon-α V(r,E) for S_lj(E) derived from experiments above and below the inelastic threshold and relating them to V(r,E) inverted from S_lj(E) for RGM theory. Reference is given to the code IMAGO which embodies the IP algorithm. (author). 38 refs., 5 figs., 4 tabs

  9. Hyperspectral imaging based techniques applied to polluted clay characterization

    Science.gov (United States)

    Bonifazi, Giuseppe; Serranti, Silvia

    2006-10-01

    Polluted soil analysis and characterization is one of the basic steps to perform in order to collect all the information needed to design and set up correct soil reclamation strategies. Soil analysis is usually performed through "in-situ" sampling and laboratory analysis. Such an approach is usually quite expensive and does not allow direct and detailed knowledge of large areas, because of the intrinsic limits (high costs) linked to direct sampling and polluting-element detection. As a consequence, numerical strategies are applied to extrapolate, starting from a discrete set of data, that is, the data related to collected samples, information about the contamination level of areas not directly covered by physical sampling. These models are usually very difficult to handle, both for the intrinsic variability characterizing the media (soils) and for the high level of interaction between polluting agents, soil characteristics (organic matter content, size class distribution of the inorganic fraction, composition, etc.) and environmental conditions (temperature, humidity, presence of vegetation, human activities, etc.). The aim of this study, starting from previous research addressed to evaluating the potential of the hyperspectral imaging approach in polluted soil characterization, was to evaluate the results obtainable in the investigation of an "ad hoc" polluted bentonite clay, usually utilized in rubbish dumps, in order to define fast and reliable control strategies to monitor the status of such a material in terms of insulation.

  10. Input Forces Estimation for Nonlinear Systems by Applying a Square-Root Cubature Kalman Filter.

    Science.gov (United States)

    Song, Xuegang; Zhang, Yuexin; Liang, Dakai

    2017-10-10

    This work presents a novel inverse algorithm to estimate time-varying input forces in nonlinear beam systems. With the system parameters determined, the input forces can be estimated in real time from dynamic responses, which can be used for structural health monitoring. In the input force estimation process, the Runge-Kutta fourth-order algorithm was employed to discretize the state equations; a square-root cubature Kalman filter (SRCKF) was employed to suppress white noise; and the residual innovation sequences, a priori state estimate, gain matrix, and innovation covariance generated by the SRCKF were employed to estimate the magnitude and location of the input forces using a nonlinear estimator based on the least squares method. Numerical simulations of a large-deflection beam and an experiment on a linear beam constrained by a nonlinear spring were employed. The results demonstrated the accuracy of the nonlinear algorithm.
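    The discretization step named above, the classical fourth-order Runge-Kutta scheme, can be sketched as a single-step function (a generic textbook sketch, not the paper's beam model):

```python
def rk4_step(f, t, x, dt):
    """One classical RK4 step for the ODE x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2)
    k4 = f(t + dt, x + dt * k3)
    # Weighted combination gives fourth-order local accuracy.
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Applied repeatedly, this turns the continuous state equations into the discrete-time model that a filter such as the SRCKF propagates.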

  11. Using digital filtering techniques as an aid in wind turbine data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Young, T. [Univ. of Colorado, Boulder, CO (United States). BioServe Space Technologies

    1994-11-01

    Research involving very large sets of digital data is often difficult due to the enormity of the database. In the case of a wind turbine operating under varying environmental conditions, determining which data are representative of the blade aerodynamics and which are due to transient flow-ingestion effects or errors in instrumentation, operation, and data collection is of primary concern to researchers. The National Renewable Energy Laboratory in Golden, Colorado, collected extensive data on a downwind horizontal-axis wind turbine (HAWT) during a turbine test project called the Combined Experiment. A principal objective of this experiment was to provide a means to predict HAWT aerodynamic, mechanical, and electrical operational loads based upon analytical models of aerodynamic performance related to blade design and inflow conditions. In a collaborative effort with the Aerospace Engineering Department at the University of Colorado at Boulder, a team of researchers has evolved and utilized various digital filtering techniques in analyzing the data from the Combined Experiment. A preliminary analysis of the data set was performed to determine how best to approach the data. The reduced data set emphasized selection of inflow conditions such that the aerodynamic data could be compared directly to wind tunnel data obtained for the same airfoil design as used for the HAWT's blades. It will be shown that this reduced data set has yielded valid, reproducible results that a simple averaging technique or a random selection approach cannot achieve. These findings provide a stable baseline against which operational HAWT data can be compared.

  12. A Kalman Filter Based Technique for Stator Turn-Fault Detection of the Induction Motors

    Science.gov (United States)

    Ghanbari, Teymoor; Samet, Haidar

    2017-11-01

    Monitoring of induction motors (IMs) through the stator current for diagnosis of different faults has considerable economic and technical advantages in comparison with the other techniques in this context. Among the different faults of an IM, stator and bearing faults are the most probable types, which can be detected by analyzing signatures of the stator currents. One of the most reliable indicators for fault detection in IMs is the lower sidebands of the power frequency in the stator currents. This paper deals with a novel, simple technique for detecting stator turn-faults of IMs. The frequencies of the lower sidebands are determined using the motor specifications, and their amplitudes are estimated by a Kalman filter (KF). The instantaneous total harmonic distortion (ITHD) of these harmonics is calculated. Since the variation of the ITHD for the three-phase currents is considerable in case of a stator turn-fault, the fault can be detected using this criterion confidently. Different simulation results verify the high performance of the proposed method. The performance of the method is also confirmed by experiments.
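    A simplified sketch of the amplitude-estimation step (not the paper's filter; the state model, noise levels q and r, and convergence tolerance are all illustrative assumptions): with the sideband frequency known from the motor specifications, a two-state Kalman filter tracking the in-phase and quadrature components a and b of a*cos(wt) + b*sin(wt) yields the sideband amplitude sqrt(a^2 + b^2).

```python
import numpy as np

def track_sideband_amplitude(signal, fs, f_sb, q=1e-6, r=1e-2):
    """Kalman filter tracking the amplitude of one known sideband frequency."""
    w = 2.0 * np.pi * f_sb
    x = np.zeros(2)                 # state [a, b]
    P = np.eye(2)                   # state covariance
    amplitudes = np.empty(len(signal))
    for k, z in enumerate(signal):
        t = k / fs
        H = np.array([np.cos(w * t), np.sin(w * t)])  # measurement row
        P = P + q * np.eye(2)       # predict: random-walk state model
        S = H @ P @ H + r           # innovation variance
        K = P @ H / S               # Kalman gain
        x = x + K * (z - H @ x)     # state update
        P = P - np.outer(K, H) @ P  # covariance update
        amplitudes[k] = np.hypot(x[0], x[1])
    return amplitudes
```

Fed with a synthetic stator current containing a 45 Hz lower sideband, the estimate settles to that component's amplitude within a fraction of a second of data.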

  13. Applying data mining techniques to improve diagnosis in neonatal jaundice

    Directory of Open Access Journals (Sweden)

    Ferreira Duarte

    2012-12-01

    Background: Hyperbilirubinemia is emerging as an increasingly common problem in newborns due to a decreasing hospital length of stay after birth. Jaundice is the most common disease of the newborn and, although benign in most cases, it can lead to severe neurological consequences if poorly evaluated. In different areas of medicine, data mining has contributed to improving the results obtained with other methodologies. Hence, the aim of this study was to improve the diagnosis of neonatal jaundice with the application of data mining techniques. Methods: This study followed the different phases of the Cross Industry Standard Process for Data Mining model as its methodology. This observational study was performed at the Obstetrics Department of a central hospital (Centro Hospitalar Tâmega e Sousa – EPE), from February to March of 2011. A total of 227 healthy newborn infants with 35 or more weeks of gestation were enrolled in the study. Over 70 variables were collected and analyzed. Also, transcutaneous bilirubin levels were measured from birth to hospital discharge with maximum time intervals of 8 hours between measurements, using a noninvasive bilirubinometer. Different attribute subsets were used to train and test classification models using algorithms included in the Weka data mining software, such as decision trees (J48) and neural networks (multilayer perceptron). The accuracy results were compared with the traditional methods for prediction of hyperbilirubinemia. Results: The application of different classification algorithms to the collected data allowed predicting subsequent hyperbilirubinemia with high accuracy. In particular, at 24 hours of life of the newborns, the accuracy for the prediction of hyperbilirubinemia was 89%. The best results were obtained using the following algorithms: naive Bayes, multilayer perceptron and simple logistic. Conclusions: The findings of our study sustain that new approaches, such as data mining, may support

  14. Semantic Data And Visualization Techniques Applied To Geologic Field Mapping

    Science.gov (United States)

    Houser, P. I. Q.; Royo-Leon, M.; Munoz, R.; Estrada, E.; Villanueva-Rosales, N.; Pennington, D. D.

    2015-12-01

    Geologic field mapping involves the use of technology before, during, and after visiting a site. Geologists utilize hardware such as Global Positioning Systems (GPS) connected to mobile computing platforms such as tablets, with software such as ESRI's ArcPad, to produce maps and figures for a final analysis and report. Hand-written field notes contain important information and drawings or sketches of specific areas within the field study. Our goal is to collect and geo-tag final and raw field data into a cyber-infrastructure environment with an ontology that allows for large-scale data processing, visualization, sharing, and searching, aiding in connecting field research with prior research in the same area and/or aiding in experiment replication. Online searches of a specific field area return results such as weather data from NOAA and QuakeML seismic data from USGS. These results can then be saved to a field mobile device and searched while in the field, where there is no Internet connection. To accomplish this we created the GeoField ontology service using the Web Ontology Language (OWL) and Protégé software. Advanced queries on the dataset can be made using reasoning capabilities that go beyond those of a standard database service. These improvements include the automated discovery of data relevant to a specific field site and visualization techniques aimed at enhancing analysis and collaboration while in the field by draping data over mobile views of the site using augmented reality. A case study is being performed at the University of Texas at El Paso's Indio Mountains Research Station located near Van Horn, Texas, an active multi-disciplinary field study site. The user can interactively move the camera around the study site and view their data digitally. Geologists can check their data against the site in real time and improve collaboration, as both parties have the same interactive view of the data.

  15. Applying perceptual and adaptive learning techniques for teaching introductory histopathology

    Directory of Open Access Journals (Sweden)

    Sally Krasne

    2013-01-01

    Background: Medical students are expected to master the ability to interpret histopathologic images, a difficult and time-consuming process. A major problem is the issue of transferring information learned from one example of a particular pathology to a new example. Recent advances in cognitive science have identified new approaches to address this problem. Methods: We adapted a new approach for enhancing pattern recognition of basic pathologic processes in skin histopathology images that utilizes perceptual learning techniques, allowing learners to see relevant structure in novel cases, along with adaptive learning algorithms that space and sequence the different categories (e.g., diagnoses) that appear during a learning session based on each learner's accuracy and response time (RT). We developed a perceptual and adaptive learning module (PALM) that utilized 261 unique images of cell injury, inflammation, neoplasia, or normal histology at low and high magnification. Accuracy and RT were tracked and integrated into a "Score" that reflected students' rapid recognition of the pathologies, and pre- and post-tests were given to assess the effectiveness. Results: Accuracy, RT and Scores significantly improved from the pre- to post-test, with Scores showing much greater improvement than accuracy alone. Delayed post-tests with previously unseen cases, given after 6-7 weeks, showed a decline in accuracy relative to the post-test for 1st-year students, but not significantly so for 2nd-year students. However, the delayed post-test scores maintained a significant and large improvement relative to those of the pre-test for both 1st- and 2nd-year students, suggesting good retention of pattern recognition. Student evaluations were very favorable. Conclusion: A web-based learning module based on the principles of cognitive science showed evidence of improved recognition of histopathology patterns by medical students.

  16. Evaluation of Moving Average Window Technique as Low-pass Filter in Microprocessor-Based Protecting Relays

    Directory of Open Access Journals (Sweden)

    N. Khodabakhshi-Javinani

    2017-12-01

    Over the last decades, with the increase in the use of harmonic-source devices, the filtering process has received more attention than ever before. Digital relays operate according to accurate thresholds and precise setting values. In the signal flow graphs of relays, the low-pass filter plays a crucial role in pre-filtering and purifying the waveforms before estimation techniques are applied to estimate the expected impedances, currents, voltages, etc. The main processing is conducted in the CPU through methods such as Mann and Morrison, Fourier, Walsh-based techniques, least-squares methods, etc. To purify waveforms polluted with low-order harmonics, it is necessary to design and embed a filter with a cutoff frequency in a narrow band, which would be costly. In this article, a technique is presented which is able to eliminate specified harmonics, noise and DC offset, attenuate the whole range of harmonic orders, and hand low-pass-filtered signals to the CPU. The proposed method is evaluated through eight case studies and compared with first- and second-order low-pass filters.
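    The moving-average window's ability to eliminate specified harmonics follows from its frequency response: an N-point window has nulls at every multiple of fs/N, so choosing N to span exactly one power-frequency cycle rejects that frequency and all its integer harmonics while passing DC. A minimal sketch (illustrative, not the paper's relay implementation):

```python
import numpy as np

def moving_average_filter(x, n):
    """Causal N-point moving-average FIR low-pass filter."""
    return np.convolve(x, np.ones(n) / n, mode="full")[: len(x)]
```

With fs = 1000 Hz and n = 20 the window spans one 50 Hz cycle, so a 50 Hz fundamental and its harmonics are nulled exactly once the window fills.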

  17. Quantifying Methane Flux from a Prominent Seafloor Crater with Water Column Imagery Filtering and Bubble Quantification Techniques

    Science.gov (United States)

    Mitchell, G. A.; Gharib, J. J.; Doolittle, D. F.

    2015-12-01

    Methane gas flux from the seafloor to atmosphere is an important variable for global carbon cycle and climate models, yet is poorly constrained. Methodologies used to estimate seafloor gas flux commonly employ a combination of acoustic and optical techniques. These techniques often use hull-mounted multibeam echosounders (MBES) to quickly ensonify large volumes of the water column for acoustic backscatter anomalies indicative of gas bubble plumes. Detection of these water column anomalies with a MBES provides information on the lateral distribution of the plumes, the midwater dimensions of the plumes, and their positions on the seafloor. Seafloor plume locations are targeted for visual investigations using a remotely operated vehicle (ROV) to determine bubble emission rates, venting behaviors, bubble sizes, and ascent velocities. Once these variables are measured in-situ, an extrapolation of gas flux is made over the survey area using the number of remotely-mapped flares. This methodology was applied to a geophysical survey conducted in 2013 over a large seafloor crater that developed in response to an oil well blowout in 1983 offshore Papua New Guinea. The site was investigated by multibeam and sidescan mapping, sub-bottom profiling, 2-D high-resolution multi-channel seismic reflection, and ROV video and coring operations. Numerous water column plumes were detected in the data suggesting vigorously active vents within and near the seafloor crater (Figure 1). This study uses dual-frequency MBES datasets (Reson 7125, 200/400 kHz) and ROV video imagery of the active hydrocarbon seeps to estimate total gas flux from the crater. Plumes of bubbles were extracted from the water column data using threshold filtering techniques. Analysis of video images of the seep emission sites within the crater provided estimates on bubble size, expulsion frequency, and ascent velocity. 
The average gas flux characteristics derived from ROV video observations are extrapolated over the number
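    The extrapolation step described in this record amounts to simple arithmetic: per-vent flux is the bubble emission rate times the single-bubble volume, scaled by the number of acoustically mapped flares. The sketch below is purely illustrative; the function names and the example numbers (10 bubbles/s, 5 mm bubble radius) are assumptions, not values from the survey.

```python
import math

def vent_flux(bubble_rate_hz, bubble_radius_m):
    """Volumetric gas flux of one vent (m^3/s at in-situ pressure):
    bubble emission rate times single-bubble volume."""
    return bubble_rate_hz * (4.0 / 3.0) * math.pi * bubble_radius_m ** 3

def site_flux(flux_per_vent, n_mapped_flares):
    """Extrapolate one vent's flux over the flares mapped by the MBES."""
    return flux_per_vent * n_mapped_flares
```

Converting the in-situ volumetric flux to mass or standard-condition volume would additionally require pressure and temperature at the seep depth.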

  18. GPR image analysis to locate water leaks from buried pipes by applying variance filters

    Science.gov (United States)

    Ocaña-Levario, Silvia J.; Carreño-Alvarado, Elizabeth P.; Ayala-Cabrera, David; Izquierdo, Joaquín

    2018-05-01

    Nowadays, there is growing interest in controlling and reducing the amount of water lost through leakage in water supply systems (WSSs). Leakage is, in fact, one of the biggest problems faced by the managers of these utilities. This work addresses the problem of leakage in WSSs by using GPR (Ground Penetrating Radar) as a non-destructive method. The main objective is to identify and extract features from GPR images, such as leaks and components, under controlled laboratory conditions, by a methodology based on second-order statistical parameters, and, using the obtained features, to create 3D models that allow quick visualization of components and leaks in WSSs from GPR image analysis and subsequent interpretation. This methodology has been used before in other fields and provided promising results. The results obtained with the proposed methodology are presented, analyzed, interpreted and compared with the results obtained by using a well-established multi-agent-based methodology. These results show that the variance filter is capable of highlighting the characteristics of components and anomalies in an intuitive manner, so that they can be identified by non-specialist personnel using the 3D models we develop. This research intends to pave the way towards future intelligent detection systems that enable the automatic detection of leaks in WSSs.
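    A variance filter of the kind described is a second-order statistic computed over a sliding window: homogeneous soil returns near-zero variance, while reflections from pipes or leak plumes stand out as high-variance pixels. A minimal sketch (borders cropped for simplicity; not the authors' implementation):

```python
import numpy as np

def variance_filter(img, k=3):
    """Local variance of each k x k neighborhood of a 2D image."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + k, j:j + k].var()
    return out
```

On a GPR radargram, thresholding this variance map is one simple way to segment anomalies before building the 3D visualization.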

  19. Time-series-analysis techniques applied to nuclear-material accounting

    International Nuclear Information System (INIS)

    Pike, D.H.; Morrison, G.W.; Downing, D.J.

    1982-05-01

    This document is designed to introduce the reader to the applications of time series analysis techniques to nuclear material accountability data. Time series analysis techniques are designed to extract information from a collection of random variables ordered by time, by seeking to identify any trends, patterns, or other structure in the series. Since nuclear material accountability data form a time series, one can extract more information using time series analysis techniques than by using other statistical techniques. Specifically, the objective of this document is to examine the applicability of time series analysis techniques to enhance loss detection of special nuclear materials. An introductory section examines the current industry approach, which utilizes inventory differences. The error structure of inventory differences is presented. The time series analysis techniques discussed include the Shewhart control chart, the cumulative summation of inventory differences statistic (CUSUM), and the Kalman filter and linear smoother.
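    The CUSUM statistic mentioned above accumulates the excess of each (standardized) inventory difference over an allowance k and raises an alarm when the sum exceeds a decision threshold h, which makes it sensitive to small sustained losses that individual inventory differences would not reveal. A generic one-sided tabular sketch (parameter values k = 0.5 and h = 5 are conventional defaults in sigma units, not values from the document):

```python
def cusum_alarms(inventory_diffs, k=0.5, h=5.0):
    """One-sided tabular CUSUM over standardized inventory differences."""
    s, alarms = 0.0, []
    for t, d in enumerate(inventory_diffs):
        s = max(0.0, s + d - k)   # accumulate excess over the allowance k
        if s > h:                 # decision threshold crossed: possible loss
            alarms.append(t)
            s = 0.0               # restart the statistic after an alarm
    return alarms
```

A stream of on-target differences never alarms, while a persistent two-sigma shift triggers an alarm within a few periods.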

  20. Zero order and signal processing spectrophotometric techniques applied for resolving interference of metronidazole with ciprofloxacin in their pharmaceutical dosage form.

    Science.gov (United States)

    Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed

    2016-02-05

    Four rapid, simple, accurate and precise spectrophotometric methods were used for the determination of ciprofloxacin in the presence of metronidazole as an interferent. The methods under study are area under the curve and simultaneous equations, in addition to the smart signal processing techniques of manipulating ratio spectra, namely Savitzky-Golay filters and continuous wavelet transform. All the methods were validated according to the ICH guidelines, where accuracy, precision and repeatability were found to be within the acceptable limits. The selectivity of the proposed methods was tested using laboratory-prepared mixtures and assessed by applying the standard addition technique. They can therefore be used for the routine analysis of ciprofloxacin in quality-control laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.
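    A Savitzky-Golay filter smooths a ratio spectrum by fitting a low-order polynomial to each sliding window by least squares and taking the fitted value at the window center; since the fit is linear in the data, the whole operation reduces to a fixed convolution. A minimal numpy sketch (illustrative; window and order are arbitrary choices, not the paper's settings):

```python
import numpy as np

def savgol_coefficients(window, polyorder):
    """Convolution weights that evaluate a least-squares polynomial fit
    at the center of each window."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, polyorder + 1, increasing=True)  # columns 1, x, x^2, ...
    return np.linalg.pinv(A)[0]   # row 0 evaluates the fit at x = 0

def savgol_smooth(y, window=7, polyorder=2):
    c = savgol_coefficients(window, polyorder)
    return np.convolve(y, c[::-1], mode="same")
```

A defining property, useful as a sanity check, is that the filter reproduces any polynomial up to the chosen order exactly in the interior of the signal.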

  1. Water-filtered infrared-A (wIRA) can act as a penetration enhancer for topically applied substances

    Directory of Open Access Journals (Sweden)

    Sterry, Wolfram

    2008-07-01

    Background: Water-filtered infrared-A (wIRA) irradiation has been shown to enhance the penetration of clinically used topically applied substances in humans, through investigation of functional effects of penetrated substances, such as vasoconstriction by cortisone. Aim of the study: Investigation of the influence of wIRA irradiation on the dermatopharmacokinetics of topically applied substances by use of optical methods, especially to localize penetrating substances, in a prospective randomised controlled study in humans. Methods: The penetration profiles of the hydrophilic dye fluorescein and the lipophilic dye curcumin in separate standard water-in-oil emulsions were determined on the inner forearm of test persons by tape stripping in combination with spectroscopic measurements. Additionally, the penetration was investigated in vivo by laser scanning microscopy. Transepidermal water loss, hydration of the epidermis, and surface temperature were determined. Three different procedures (modes A, B, C) were used in a randomised order on three separate days of investigation in each of 12 test persons. In mode A, the two dyes were applied on different skin areas without water-filtered infrared-A (wIRA) irradiation. In mode B, the skin surface was irradiated with wIRA over 30 min before application of the two dyes (Hydrosun® radiator type 501, 10 mm water cuvette, orange filter OG590, water-filtered spectrum: 590–1400 nm with dominant amount of wIRA). In mode C, the two dyes were applied and immediately afterwards the skin was irradiated with wIRA over 30 min. In all modes, tape stripping started 30 min after application of the formulations. The main variable of interest was the ratio of the amount of the dye in the deeper (second) 10% of the stratum corneum to the amount of the dye in the upper 10% of the stratum corneum. Results: The penetration profiles of the hydrophilic fluorescein showed, in case of pretreatment or treatment with wIRA (modes B and C), an

  2. Applying Cooperative Localization to Swarm UAVs Using an Extended Kalman Filter

    Science.gov (United States)

    2014-09-01

    a few, have dedicated their efforts to understanding and modeling this behavior so that it can be applied to many other diverse tasks and complex...Insect self-organization is robust, adaptive, and persistent, as anyone can attest who has tried to keep ants out of the kitchen or defeat a termite

  3. Flexible Riser Monitoring Using Hybrid Magnetic/Optical Strain Gage Techniques through RLS Adaptive Filtering

    Science.gov (United States)

    Pipa, Daniel; Morikawa, Sérgio; Pires, Gustavo; Camerini, Claudio; Santos, João Márcio

    2010-12-01

    Flexible riser is a class of flexible pipes which is used to connect subsea pipelines to floating offshore installations, such as FPSOs (floating production/storage/off-loading units) and SS (semisubmersible) platforms, in oil and gas production. Flexible risers are multilayered pipes typically comprising an inner flexible metal carcass surrounded by polymer layers and spiral-wound steel ligaments, also referred to as armor wires. Since these armor wires are made of steel, their magnetic properties are sensitive to the stress they are subjected to. By measuring their magnetic properties in a nonintrusive manner, it is possible to compare the stress in the armor wires, thus allowing the identification of damaged ones. However, one encounters several sources of noise when measuring electromagnetic properties contactlessly, such as movement between specimen and probe, and magnetic noise. This paper describes the development of a new technique for automatic monitoring of the armor layers of flexible risers. The proposed approach aims to minimize these uncertainties by combining electromagnetic measurements with optical strain gage data through a recursive least-squares (RLS) adaptive filter.
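    The RLS adaptive filter at the core of this approach can be sketched generically (a textbook FIR form, not the authors' sensor-fusion code; filter order, forgetting factor and initialization are illustrative assumptions): it recursively fits FIR weights that predict a desired signal from a reference input, which is how a clean reference (here, strain gage data) can cancel correlated noise in a measured signal.

```python
import numpy as np

def rls_filter(d, u, order=4, lam=0.99, delta=100.0):
    """Recursive least-squares FIR adaptive filter.

    d: desired signal, u: reference input; returns (filter output, weights).
    """
    w = np.zeros(order)
    P = delta * np.eye(order)   # inverse correlation matrix estimate
    y = np.zeros(len(d))
    for n in range(order - 1, len(d)):
        x = u[n - order + 1: n + 1][::-1]   # regressor, newest sample first
        k = P @ x / (lam + x @ P @ x)       # gain vector
        y[n] = w @ x                        # a priori output
        w = w + k * (d[n] - y[n])           # weight update
        P = (P - np.outer(k, x @ P)) / lam  # inverse-correlation update
    return y, w
```

As a sanity check, driving the filter with the output of a known FIR system makes the weights converge to that system's impulse response.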

  4. Flexible Riser Monitoring Using Hybrid Magnetic/Optical Strain Gage Techniques through RLS Adaptive Filtering

    Directory of Open Access Journals (Sweden)

    Pipa Daniel

    2010-01-01

    Flexible riser is a class of flexible pipes which is used to connect subsea pipelines to floating offshore installations, such as FPSOs (floating production/storage/off-loading units) and SS (semisubmersible) platforms, in oil and gas production. Flexible risers are multilayered pipes typically comprising an inner flexible metal carcass surrounded by polymer layers and spiral-wound steel ligaments, also referred to as armor wires. Since these armor wires are made of steel, their magnetic properties are sensitive to the stress they are subjected to. By measuring their magnetic properties in a nonintrusive manner, it is possible to compare the stress in the armor wires, thus allowing the identification of damaged ones. However, one encounters several sources of noise when measuring electromagnetic properties contactlessly, such as movement between specimen and probe, and magnetic noise. This paper describes the development of a new technique for automatic monitoring of the armor layers of flexible risers. The proposed approach aims to minimize these uncertainties by combining electromagnetic measurements with optical strain gage data through a recursive least-squares (RLS) adaptive filter.

  5. Direct and Inverse Techniques of Guided-Mode Resonance Filters Designs

    Science.gov (United States)

    Tibuleac, Sorin; Magnusson, Robert; Maldonado, Theresa A.; Zuffada, Cinzia

    1997-01-01

    Guided-mode resonances arise in single or multilayer waveguides where one or more homogeneous layers are replaced by diffraction gratings (Fig. 1). The diffractive element enables an electromagnetic wave incident on a waveguide grating to be coupled to the waveguide modes supportable by the structure in the absence of the modulation (i.e., the difference between the high and low dielectric constants of the grating) at specific values of the wavelength and incident angle. The periodic modulation of the guide makes the structure leaky, preventing sustained propagation of modes in the waveguide and coupling the waves out into the substrate and cover. As the wavelength is varied around resonance, a rapid variation in the intensities of the external propagating waves occurs. By selecting a grating period small enough to eliminate the higher-order propagating waves, an increase in the zero-order intensities up to 100% can result. The pronounced frequency selectivity of guided-mode resonances in dielectric waveguide gratings can be applied to design high-efficiency reflection and transmission filters [1-3].

  6. 31 CFR 205.11 - What requirements apply to funding techniques?

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false What requirements apply to funding techniques? 205.11 Section 205.11 Money and Finance: Treasury Regulations Relating to Money and Finance... Treasury-State Agreement § 205.11 What requirements apply to funding techniques? (a) A State and a Federal...

  7. 3D filtering technique in presence of additive noise in color videos implemented on DSP

    Science.gov (United States)

    Ponomaryov, Volodymyr I.; Montenegro-Monroy, Hector; Palacios, Alfredo

    2014-05-01

    A filtering method for color videos contaminated by additive noise is presented. The proposed framework employs three filtering stages: spatial similarity filtering, neighboring frame denoising, and spatial post-processing smoothing. The difference from other state-of-the-art filtering methods is that this approach, based on fuzzy logic, analyzes basic and related gradient values between neighboring pixels within a 7 × 7 sliding window in the vicinity of a central pixel in each of the RGB channels. Then, the similarity measures between the analogous pixels in the color bands are taken into account during the denoising. Next, two neighboring video frames are analyzed together, estimating local motions between the frames using a block matching procedure. In the final stage, the edges and smoothed areas are processed differently in the current frame during the post-processing filtering. Numerous simulation results confirm that this 3D fuzzy filter performs better than other state-of-the-art methods, such as 3D-LLMMSE, WMVCE, RFMDAF, FDARTF G, VBM3D and NLM, in terms of objective criteria (PSNR, MAE, NCD and SSIM) as well as subjective perception via the human vision system on different color videos. An efficiency analysis of the designed filter and the other mentioned filters has been performed on the DSPs TMS320DM642 and TMS320DM648 by Texas Instruments through MATLAB and the Simulink module, showing that the novel 3D fuzzy filter can be used in real-time processing applications.
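The gradient-based fuzzy weighting idea can be illustrated with a heavily simplified single-channel, single-frame sketch. The Gaussian membership function, window size, and parameter values below are assumptions chosen for illustration; this is the general principle, not the authors' 3D filter:

```python
import numpy as np

def fuzzy_gradient_denoise(img, half=3, sigma=20.0):
    """Simplified fuzzy-weighted smoothing for one image channel.

    Each neighbor in a (2*half+1)^2 window is weighted by a Gaussian
    membership of its absolute gradient (intensity difference) to the
    central pixel, so similar pixels dominate the average while sharp
    edges contribute near-zero weight and are preserved.
    """
    h, w = img.shape
    pad = np.pad(img.astype(float), half, mode="reflect")
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * half + 1, x:x + 2 * half + 1]
            grad = np.abs(win - pad[y + half, x + half])
            mu = np.exp(-(grad / sigma) ** 2)   # fuzzy membership
            out[y, x] = np.sum(mu * win) / np.sum(mu)
    return out
```

With additive Gaussian noise on a piecewise-flat image, the weighted average suppresses the noise in flat regions while the large gradient across an edge keeps the two sides from mixing.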

  8. Active Damping Techniques for LCL-Filtered Inverters-Based Microgrids

    DEFF Research Database (Denmark)

    Lorzadeh, Iman; Firoozabadi, Mehdi Savaghebi; Askarian Abyaneh, Hossein

    2015-01-01

    LCL-type filters are widely used in grid-connected voltage source inverters, since they provide switching-ripple reduction at lower cost and weight than the L-type counterpart. However, the inclusion of LCL filters in voltage source inverters complicates the current control design regarding syst...

  9. A CMOS transconductance-C filter technique for very high frequencies

    NARCIS (Netherlands)

    Nauta, Bram

    1992-01-01

    CMOS circuits for integrated analog filters at very high frequencies, based on transconductance-C integrators, are presented. First a differential transconductance element based on CMOS inverters is described. With this circuit a linear, tunable integrator for very-high-frequency integrated filters

  10. An Approach for Synthesis of Modulated M-Channel FIR Filter Banks Utilizing the Frequency-Response Masking Technique

    Directory of Open Access Journals (Sweden)

    Håkan Johansson

    2007-01-01

    Full Text Available The frequency-response masking (FRM) technique was introduced as a means of generating linear-phase FIR filters with narrow transition bands and low arithmetic complexity. This paper proposes an approach for synthesizing modulated maximally decimated FIR filter banks (FBs) utilizing the FRM technique. A new tailored class of FRM filters is introduced and used for synthesizing nonlinear-phase analysis and synthesis filters. Each of the analysis and synthesis FBs is realized with the aid of only three subfilters, one cosine-modulation block, and one sine-modulation block. The overall FB is a near-perfect reconstruction (NPR) FB, which in this case means that the distortion function has a linear-phase response but small magnitude errors. Small aliasing errors are also introduced by the FB. However, by allowing these small errors (which can be made arbitrarily small), the arithmetic complexity can be reduced. Compared to conventional cosine-modulated FBs, the proposed ones significantly lower the overall arithmetic complexity at the expense of a slightly increased overall FB delay in applications requiring narrow transition bands. Compared to other proposals that also combine cosine-modulated FBs with the FRM technique, the arithmetic complexity can typically be reduced by 40% in specifications with narrow transition bands. Finally, a general design procedure is given for the proposed FBs and examples are included to illustrate their benefits.

  11. An Approach for Synthesis of Modulated M-Channel FIR Filter Banks Utilizing the Frequency-Response Masking Technique

    Directory of Open Access Journals (Sweden)

    Rosenbaum Linnéa

    2007-01-01

    Full Text Available The frequency-response masking (FRM) technique was introduced as a means of generating linear-phase FIR filters with narrow transition bands and low arithmetic complexity. This paper proposes an approach for synthesizing modulated maximally decimated FIR filter banks (FBs) utilizing the FRM technique. A new tailored class of FRM filters is introduced and used for synthesizing nonlinear-phase analysis and synthesis filters. Each of the analysis and synthesis FBs is realized with the aid of only three subfilters, one cosine-modulation block, and one sine-modulation block. The overall FB is a near-perfect reconstruction (NPR) FB, which in this case means that the distortion function has a linear-phase response but small magnitude errors. Small aliasing errors are also introduced by the FB. However, by allowing these small errors (which can be made arbitrarily small), the arithmetic complexity can be reduced. Compared to conventional cosine-modulated FBs, the proposed ones significantly lower the overall arithmetic complexity at the expense of a slightly increased overall FB delay in applications requiring narrow transition bands. Compared to other proposals that also combine cosine-modulated FBs with the FRM technique, the arithmetic complexity can typically be reduced by 40% in specifications with narrow transition bands. Finally, a general design procedure is given for the proposed FBs and examples are included to illustrate their benefits.

  12. Effectiveness of Three Decontamination Treatments Against Influenza Virus Applied to Filtering Facepiece Respirators

    Science.gov (United States)

    2010-10-01

    generated steam and moist heat) on two NIOSH-certified N95 FFRs contaminated with H5N1. An aerosol settling chamber was used to apply virus-laden droplets to...use FFRs. Keywords: N95 respirator; decontamination; respirator reuse; influenza virus; healthcare workers; bioaerosol. ...microwave-generated steam, and moist heat] on two National Institute for Occupational Safety and Health-certified N95 FFRs (3M models 1860s and 1870

  13. Improving Skill Development: An Exploratory Study Comparing a Philosophical and an Applied Ethical Analysis Technique

    Science.gov (United States)

    Al-Saggaf, Yeslam; Burmeister, Oliver K.

    2012-01-01

    This exploratory study compares and contrasts two types of critical thinking techniques: one philosophical and the other an applied ethical analysis technique. The two techniques analyse an ethically challenging situation involving ICT that a recent media article raised, to demonstrate their ability to develop the ethical analysis skills of…

  14. SU-F-I-73: Surface Dose from KV Diagnostic Beams From An On-Board Imager On a Linac Machine Using Different Imaging Techniques and Filters

    Energy Technology Data Exchange (ETDEWEB)

    Ali, I; Hossain, S; Syzek, E; Ahmad, S [University of Oklahoma Health Sciences Center, Department of Radiation Oncology, Oklahoma City, OK (United States)

    2016-06-15

    Purpose: To quantitatively investigate the surface dose deposited in patients imaged with a kV on-board imager mounted on a radiotherapy machine using different clinical imaging techniques and filters. Methods: A high-sensitivity photon diode, mounted on top of a phantom setup, is used to measure the surface dose on the central axis and at an off-axis point. The dose is measured for different imaging techniques that include: AP-Pelvis, AP-Head, AP-Abdomen, AP-Thorax, and Extremity. The dose measurements from these imaging techniques are combined with various filtering techniques that include: no filter (open field), half-fan bowtie (HF), full-fan bowtie (FF) and Cu-plate filters. The relative surface dose for the different imaging and filtering techniques is evaluated quantitatively as the ratio of the dose relative to that with the Cu-plate filter. Results: The lowest surface dose is deposited with the Cu-plate filter. The highest surface dose results from open fields without a filter and is nearly a factor of 8–30 larger than the corresponding imaging technique with the Cu-plate filter. The AP-Abdomen technique delivers the largest surface dose, nearly 2.7 times larger than the AP-Head technique. The smallest surface dose is obtained with the Extremity imaging technique. Imaging with bowtie filters decreases the surface dose by nearly 33% in comparison with the open field. The surface doses deposited with the HF- and FF-bowtie filters agree within a few percent. Image quality of the radiographic images obtained with the different filtering techniques is similar because the Cu-plate eliminates low-energy photons. The HF- and FF-bowtie filters generate intensity gradients in the radiographs, which affects image quality in the different imaging techniques. Conclusion: Surface dose from kV imaging decreases significantly with the Cu-plate and bowtie filters compared to imaging without filters using open-field beams. The use of the Cu-plate filter does not affect

  15. A Novel Technique Using a Protection Filter During Fibrin Sheath Removal for Implanted Venous Access Device Dysfunction

    Energy Technology Data Exchange (ETDEWEB)

    Sotiriadis, Charalampos; Hajdu, Steven David [University Hospital of Lausanne, Cardiothoracic and Vascular Unit, Department of Radiology (Switzerland); Degrauwe, Sophie [University Hospital of Lausanne, Department of Cardiology (Switzerland); Barras, Heloise; Qanadli, Salah Dine, E-mail: salah.qanadli@chuv.ch [University Hospital of Lausanne, Cardiothoracic and Vascular Unit, Department of Radiology (Switzerland)

    2016-08-15

    With the increased use of implanted venous access devices (IVADs) for continuous long-term venous access, several techniques such as percutaneous endovascular fibrin sheath removal, have been described, to maintain catheter function. Most standard techniques do not capture the stripped fibrin sheath, which is subsequently released in the pulmonary circulation and may lead to symptomatic pulmonary embolism. The presented case describes an endovascular technique which includes stripping, capture, and removal of fibrin sheath using a novel filter device. A 64-year-old woman presented with IVAD dysfunction. Stripping was performed using a co-axial snare to the filter to capture the fibrin sheath. The captured fragment was subsequently removed for visual and pathological verification. No immediate complication was observed and the patient was discharged the day of the procedure.

  16. Deconvolution of alpha spectra from air filters applied for measurements of the short-lived radon progeny concentration

    Directory of Open Access Journals (Sweden)

    Skubacz Krystian

    2017-09-01

    Full Text Available The paper contains a description of a method for the analysis of the complex alpha spectra generated during the measurement of the activity of filters outside a vacuum chamber, under environmental conditions. The peaks corresponding to the energies of alpha particles emitted by the specific isotopes show pronounced broadening on the low-energy side of the peak maximum, and the energy resolution strongly depends on the applied filters. The analysis was based on nonlinear regression to functions with four, six and eight parameters. Satisfactory results were obtained for each of these functions, and the best fits were achieved with the eight-parameter function. In addition, the uncertainties of the estimated parameters, as well as the signals corresponding to the functions that describe the shape of the energy peak, were evaluated. Examples of the implementation of the method with respect to short-lived radon progeny and thoron decay products are also given.

  17. Data smoothing techniques applied to proton microprobe scans of teleost hard parts

    International Nuclear Information System (INIS)

    West, I.F.; Gauldie, R.W.; Coote, G.E.

    1992-01-01

    We use a proton microprobe to examine the distribution of elements in otoliths and scales of teleost (bony) fish. The elements of principal interest are calcium and strontium in otoliths and calcium and fluorine in scales. Changes in the distribution of these elements across hard structures may allow inferences about the life histories of fish. Otoliths and scales of interest are up to a centimeter in linear dimension, and up to 200 sampling points are required in each dimension to reveal the structures of interest. The time needed to accumulate high X-ray counts at each sampling point can be large, particularly for strontium. To reduce microprobe usage we use data smoothing techniques to reveal changing patterns with modest X-ray count accumulations at individual data points. In this paper we review the performance, at modest levels of X-ray count accumulation, of a selection of digital filters (moving-average smoothers), running-median filters, robust locally weighted regression filters and adaptive spline filters. (author)
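Two of the simplest smoothers in that selection, a moving-average digital filter and a running-median filter, can be sketched generically (window sizes are illustrative; this is not tied to the authors' microprobe data):

```python
import numpy as np

def moving_average(y, window=5):
    """Centered moving-average smoother (a simple digital filter)."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

def running_median(y, window=5):
    """Running-median filter: robust to isolated count spikes."""
    half = window // 2
    pad = np.pad(y, half, mode="edge")
    return np.array([np.median(pad[i:i + window]) for i in range(len(y))])
```

The median variant is the robust choice when a single sampling point accumulates an anomalously high count: the spike is discarded rather than smeared across its neighbors, as a moving average would do.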

  18. Study of 1D complex resistivity inversion using digital linear filter technique; Linear filter ho wo mochiita fukusohi teiko no gyakukaisekiho no kento

    Energy Technology Data Exchange (ETDEWEB)

    Sakurai, K.; Shima, H. [OYO Corp., Tokyo (Japan)

    1996-10-01

    This paper proposes a modeling method of one-dimensional complex resistivity using linear filter technique which has been extended to the complex resistivity. In addition, a numerical test of inversion was conducted using the monitoring results, to discuss the measured frequency band. Linear filter technique is a method by which theoretical potential can be calculated for stratified structures, and it is widely used for the one-dimensional analysis of dc electrical exploration. The modeling can be carried out only using values of complex resistivity without using values of potential. In this study, a bipolar method was employed as a configuration of electrodes. The numerical test of one-dimensional complex resistivity inversion was conducted using the formulated modeling. A three-layered structure model was used as a numerical model. A multi-layer structure with a thickness of 5 m was analyzed on the basis of apparent complex resistivity calculated from the model. From the results of numerical test, it was found that both the chargeability and the time constant agreed well with those of the original model. A trade-off was observed between the chargeability and the time constant at the stage of convergence. 3 refs., 9 figs., 1 tab.

  19. Belavkin filter for mixture of quadrature and photon counting process with some control techniques

    Science.gov (United States)

    Garg, Naman; Parthasarathy, Harish; Upadhyay, D. K.

    2018-03-01

    The Belavkin filter for the H-P Schrödinger equation is derived when the measurement process consists of a mixture of quantum Brownian motions and conservation/Poisson process. Higher-order powers of the measurement noise differentials appear in the Belavkin dynamics. For simulation, we use a second-order truncation. Control of the Belavkin filtered state by infinitesimal unitary operators is achieved in order to reduce the noise effects in the Belavkin filter equation. This is carried out along the lines of Luc Bouten. Various optimization criteria for control are described like state tracking and Lindblad noise removal.

  20. Detection of irradiated poultry products using the direct epifluorescence filter technique

    International Nuclear Information System (INIS)

    Copin, M.P.; Bourgeois, C.

    1992-01-01

    Food irradiation has developed during the last few years. Nevertheless, this development would be larger if there were a recognized method to detect whether a foodstuff had been irradiated. BETTS et al. (1988) suggested a method based on the comparison of an aerobic plate count (APC) with a count obtained using the Direct Epifluorescence Filter Technique (DEFT). They showed that the APC of an irradiated product was considerably lower than that obtained by the DEFT; in this case the DEFT count gave an indication of the size of the viable microbial population in the product before irradiation, whereas the APC of a non-irradiated product was very well correlated with the DEFT count. In the present work both methods were tested on deep-frozen mechanically deboned chicken meat (MDCM) and fresh chicken meat. The fluorochrome used for the DEFT was acridine orange; the mesophilic microflora was counted on 'Plate Count Agar'. According to the results obtained with the deep-frozen MDCM, aerobic plate counts and DEFT counts are very similar during 100 days of storage when the product has not been irradiated; if it has been irradiated, the difference between the two counts is high (about two logarithmic units). With this method it is thus possible to detect an irradiated product and to know the number of viable microbial cells in the irradiated product before the treatment. The method was also tested on fresh chicken meat stored at 4 °C. At the beginning of the storage period it is possible to detect irradiated products, but at the end the method fails: irradiation can still be detected, but it would be impossible to say that a product had not been irradiated. This method is thus applicable to deep-frozen products rather than to fresh products.

  1. Comparison analysis between filtered back projection and algebraic reconstruction technique on microwave imaging

    Science.gov (United States)

    Ramadhan, Rifqi; Prabowo, Rian Gilang; Aprilliyani, Ria; Basari

    2018-02-01

    The number of cancer and tumor victims grows each year, and cancer has become one of the leading causes of death in the world. Cancer or tumor tissue cells are cells that grow abnormally and take over and damage the surrounding tissues. Cancers or tumors do not have definite symptoms in their early stages and can even attack tissues deep inside the body, where they are not identifiable by visual human observation. Therefore, an early detection system that is cheap, quick, simple, and portable is essentially required to anticipate the further development of a cancer or tumor. Among the available modalities, microwave imaging is considered a cheaper, simpler, and more portable method. There are at least two simple image reconstruction algorithms, i.e., Filtered Back Projection (FBP) and the Algebraic Reconstruction Technique (ART), which have been adopted in some common modalities. In this paper, both algorithms are compared by reconstructing the image of an artificial tissue model (i.e., a phantom) which has two different dielectric distributions. We addressed two performance comparisons, namely quantitative and qualitative analysis. Qualitative analysis includes the smoothness of the image and the success in distinguishing dielectric differences by observing the image with the human eye. Quantitative analysis includes histogram, Structural Similarity Index (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR) calculations. As a result, the quantitative parameters of FBP show better values than those of ART. However, ART is more capable of distinguishing two different dielectric values than FBP, due to the higher contrast and wider grayscale distribution in ART.
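Two of the quantitative criteria used in such comparisons, MSE and PSNR, can be computed as follows (a generic sketch assuming 8-bit grayscale reconstructions; SSIM and histogram analysis are omitted for brevity):

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and a reconstruction."""
    return np.mean((ref.astype(float) - img.astype(float)) ** 2)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to ref."""
    err = mse(ref, img)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```

A reconstruction offset from the reference by a constant 10 gray levels, for example, gives an MSE of 100 and a PSNR of about 28.1 dB.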

  2. Complex Retrieval of Embedded IVC Filters: Alternative Techniques and Histologic Tissue Analysis

    International Nuclear Information System (INIS)

    Kuo, William T.; Cupp, John S.; Louie, John D.; Kothary, Nishita; Hofmann, Lawrence V.; Sze, Daniel Y.; Hovsepian, David M.

    2012-01-01

    Purpose: We evaluated the safety and effectiveness of alternative endovascular methods to retrieve embedded optional and permanent filters in order to manage or reduce risk of long-term complications from implantation. Histologic tissue analysis was performed to elucidate the pathologic effects of chronic filter implantation. Methods: We studied the safety and effectiveness of alternative endovascular methods for removing embedded inferior vena cava (IVC) filters in 10 consecutive patients over 12 months. Indications for retrieval were symptomatic chronic IVC occlusion, caval and aortic perforation, and/or acute PE (pulmonary embolism) from filter-related thrombus. Retrieval was also performed to reduce risk of complications from long-term filter implantation and to eliminate the need for lifelong anticoagulation. All retrieved specimens were sent for histologic analysis. Results: Retrieval was successful in all 10 patients. Filter types and implantation times were as follows: one Venatech (1,495 days), one Simon-Nitinol (1,485 days), one Optease (300 days), one G2 (416 days), five Günther-Tulip (GTF; mean 606 days, range 154–1,010 days), and one Celect (124 days). There were no procedural complications or adverse events at a mean follow-up of 304 days after removal (range 196–529 days). Histology revealed scant native intima surrounded by a predominance of neointimal hyperplasia and dense fibrosis in all specimens. Histologic evidence of photothermal tissue ablation was confirmed in three laser-treated specimens. Conclusion: Complex retrieval methods can now be used in select patients to safely remove embedded optional and permanent IVC filters previously considered irretrievable. Neointimal hyperplasia and dense fibrosis are the major components that must be separated to achieve successful retrieval of chronic filter implants.

  3. Effect of application rates and media types on nitrogen and surfactant removal in trickling filters applied to the post-treatment of effluents from UASB reactors

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, P. G. S. de; Taveres, F. v. F.; Chernicharo, C. A. I.

    2009-07-01

    Trickling filters are a very promising alternative for the post-treatment of effluents from UASB reactors treating domestic sewage, especially in developing countries. Although a fair amount of information is already available regarding organic matter removal in this combined system, very little is known about nitrogen and surfactant removal in trickling filters post-UASB reactors. Therefore, the purpose of this study was to evaluate and compare the effect of different application rates and packing media types on trickling filters applied to the post-treatment of effluents from UASB reactors, regarding the removal of ammonia nitrogen and surfactants. (Author)

  4. Effect of application rates and media types on nitrogen and surfactant removal in trickling filters applied to the post-treatment of effluents from UASB reactors

    International Nuclear Information System (INIS)

    Almeida, P. G. S. de; Taveres, F. v. F.; Chernicharo, C. A. I.

    2009-01-01

    Trickling filters are a very promising alternative for the post-treatment of effluents from UASB reactors treating domestic sewage, especially in developing countries. Although a fair amount of information is already available regarding organic matter removal in this combined system, very little is known about nitrogen and surfactant removal in trickling filters post-UASB reactors. Therefore, the purpose of this study was to evaluate and compare the effect of different application rates and packing media types on trickling filters applied to the post-treatment of effluents from UASB reactors, regarding the removal of ammonia nitrogen and surfactants. (Author)

  5. Growth of silicone-immobilized bacteria on polycarbonate membrane filters, a technique to study microcolony formation under anaerobic conditions

    DEFF Research Database (Denmark)

    Højberg, Ole; Binnerup, S. J.; Sørensen, Jan

    1997-01-01

    A technique was developed to study microcolony formation by silicone-immobilized bacteria on polycarbonate membrane filters under anaerobic conditions. A sudden shift to anaerobiosis was obtained by submerging the filters in medium which was depleted of oxygen by a pure culture of bacteria. The technique was used to demonstrate that preinduction of nitrate reductase under low-oxygen conditions was necessary for nonfermenting, nitrate-respiring bacteria, e.g., Pseudomonas spp., to cope with a sudden lack of oxygen. In contrast, nitrate-respiring, fermenting bacteria, e.g., Bacillus and Escherichia spp., formed microcolonies under anaerobic conditions with or without the presence of nitrate and irrespective of aerobic or anaerobic preculture conditions.

  6. Effect of Coil Current on the Properties of Hydrogenated DLC Coatings Fabricated by Filtered Cathodic Vacuum Arc Technique

    Science.gov (United States)

    Liao, Bin; Ouyang, Xiaoping; Zhang, Xu; Wu, Xianying; Bian, Baoan; Ying, Minju; Jianwu, Liu

    2018-01-01

    We successfully prepared hydrogenated DLC (a-C:H) with a thickness higher than 25 μm on stainless steel using a filtered cathodic vacuum arc (FCVA) technique. The structural and mechanical properties of the DLC were systematically analyzed using different methods such as x-ray photoelectron spectroscopy, Raman spectroscopy, scanning electron microscopy, Vickers hardness, nanohardness, and friction and wear tests. The effect of the coil current on the arc voltage, ion current, and mechanical properties of the resultant films was systematically investigated. The novelty of this study is the fabrication of DLC with a Vickers hardness higher than 1500 HV and a thickness higher than 30 μm by varying the coil current in the FCVA technique. The results indicated that the ion current, deposition rate, friction coefficient, and Vickers hardness of the DLC were significantly affected by the magnetic field inside the filtered duct.

  7. Comparative Study of two PWM techniques for Three Phase Shunt Hybrid Active Power Filter to Suppress Line Current Harmonics

    OpenAIRE

    SELVAMUTHUKUMARAN Rajasekar; NATARAJAN Muraly; PERIANAYAGAM Ajay-D-VimalRaj; MAHALINGAM Sudhakaran

    2010-01-01

    This paper investigates the performance and comparison of two pulse-width-modulation (PWM) techniques employing a direct current control strategy applied to a three-phase shunt hybrid active power filter (SHAPF). The objective of the SHAPF is to eliminate line current harmonics and to provide reactive power compensation. The direct current control strategy is implemented using a standard PWM (S-PWM) and a modified PWM (M-PWM), in order to compensate the current harmonics and reactive power generated by different load...

  8. Design and application of finite impulse response digital filters.

    Science.gov (United States)

    Miller, T R; Sampathkumaran, K S

    1982-01-01

    The finite impulse response (FIR) digital filter is a spatial domain filter with a frequency domain representation. The theory of the FIR filter is presented and techniques are described for designing FIR filters with known frequency response characteristics. Rational design principles are emphasized, based on characterization of the imaging system using the modulation transfer function and physical properties of the imaged objects. Bandpass, Wiener, and low-pass filters were designed and applied to 201Tl myocardial images. The bandpass filter eliminates low-frequency image components that represent background activity and high-frequency components due to noise. The Wiener, or minimum mean square error, filter 'sharpens' the image while also reducing noise. The Wiener filter illustrates the power of the FIR technique to design filters with any desired frequency response. The low-pass filter, while of relatively limited use, is presented to compare it with a popular elementary 'smoothing' filter.
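A minimal illustration of designing an FIR filter with a known frequency response is the classic windowed-sinc low-pass (the paper's bandpass and Wiener designs are more elaborate; this generic sketch, with an assumed Hamming window and illustrative parameters, is not the authors' code):

```python
import numpy as np

def lowpass_fir(cutoff, ntaps=21):
    """Windowed-sinc low-pass FIR design.

    cutoff : normalized cutoff frequency (0..0.5, cycles/sample)
    ntaps  : odd tap count gives a symmetric, linear-phase filter
    """
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n)  # ideal impulse response
    h *= np.hamming(ntaps)                    # taper to reduce ripple
    return h / h.sum()                        # unity gain at DC
```

Applying the returned taps with a convolution (e.g., `np.convolve(row, h, mode="same")` along each image row and column) realizes the designed frequency response in the spatial domain, which is exactly the duality the abstract describes.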

  9. New technique of leukocytapheresis by the use of nonwoven polyester fiber filter for inflammatory bowel disease.

    Science.gov (United States)

    Kawamura, A; Saitoh, M; Yonekawa, M; Horie, T; Ohizumi, H; Tamaki, T; Kukita, K; Meguro, J

    1999-11-01

    Leukocytapheresis (LCAP) is widely used for the treatment of immunological diseases. We studied a new LCAP treatment using a nonwoven polyester fiber filter. In a basic study, 30-70% of leukocytes were removed, as were 30-68% of the leukocyte subsets. Sixteen inflammatory bowel disease (IBD) patients, mainly with ulcerative colitis (UC), were treated by this method. Their cytokine activity was normalized in the filter and in the peripheral blood. Eleven of 12 patients with UC were induced into remission, and four patients with Crohn's disease (CD) exhibited improvement. LCAP using a nonwoven polyester fiber filter was very efficient for treating patients with IBD. It may also be a very useful treatment for other immunological diseases and for extracorporeal immunomodulation.

  10. Bioaerosol DNA Extraction Technique from Air Filters Collected from Marine and Freshwater Locations

    Science.gov (United States)

    Beckwith, M.; Crandall, S. G.; Barnes, A.; Paytan, A.

    2015-12-01

    Bioaerosols are composed of microorganisms suspended in air, including bacteria, fungi, viruses, and protists. Microbes introduced into the atmosphere can drift, primarily by wind, into natural environments different from their point of origin. Although bioaerosols can impact atmospheric dynamics as well as the ecology and biogeochemistry of terrestrial systems, very little is known about the composition of bioaerosols collected from marine and freshwater environments. The first step in determining the composition of airborne microbes is to successfully extract environmental DNA from air filters. We asked 1) can DNA be extracted from quartz (SiO2) air filters? and 2) how can we optimize the DNA yield for downstream metagenomic sequencing? Aerosol filters were collected and archived on a weekly basis from aquatic sites (USA, Bermuda, Israel) over the course of 10 years. We successfully extracted DNA from a subsample of ~20 filters. We modified a DNA extraction protocol (Qiagen) by adding a bead-beating step to mechanically shear cell walls in order to optimize our DNA product. We quantified our DNA yield using a spectrophotometer (NanoDrop 1000). Results indicate that DNA can indeed be extracted from quartz filters. The additional bead-beating step helped increase our yield: up to twice as much DNA product was obtained compared to when this step was omitted. Moreover, bioaerosol DNA content varies over time. For instance, the DNA extracted from filters from Lake Tahoe, USA collected near the end of June decreased from 9.9 ng/μL in 2007 to 3.8 ng/μL in 2008. Further next-generation sequencing analysis of our extracted DNA will be performed to determine the composition of these microbes. We will also model the meteorological and chemical factors that are good predictors of microbial composition for our samples over time and space.

  11. A Miniaturized Dual-Mode Bandpass Filter Using Slot Spurline Technique

    Directory of Open Access Journals (Sweden)

    Haiwen Liu

    2013-01-01

    Full Text Available A miniaturized dual-mode bandpass filter (BPF) with elliptic function response using a slot spurline is designed in this paper. The slot spurline can not only split the degenerate modes but also determine the type of filter characteristic (Chebyshev or elliptic). To miniaturize the resonator, four sagittate stubs are proposed. For demonstration purposes, a BPF operating at 5.75 GHz for WLAN applications was designed, fabricated, and measured. The measured results are in good agreement with the full-wave simulation results.

  12. Resonance Damping Techniques for Grid-Connected Voltage Source Converters with LCL filters – A Review

    DEFF Research Database (Denmark)

    Zhang, Chi; Dragicevic, Tomislav; Vasquez, Juan Carlos

    2014-01-01

    LCL filters play an important role in grid-connected converters when trying to reduce the switching-frequency ripple currents injected into the grid. Besides, their small size and low cost make them attractive for many practical applications. However, the LCL filter is a third-order system, which presents a resonance peak frequency. Oscillation will occur in the control loop in high frequency ranges, especially in the current loop of double-loop controlled converters. In order to solve this, many strategies have been proposed to damp the resonance, including passive and active methods. This paper makes…

  13. Determination of boron in aqueous solutions by solid state nuclear track detectors technique, using a filtered neutron beam

    International Nuclear Information System (INIS)

    Moraes, M.A.P.V. de; Pugliesi, R.; Khouri, M.T.F.C.

    1985-11-01

    The solid state nuclear track detector technique has been used for the determination of boron in aqueous solutions, using a filtered neutron beam. The particle tracks from the ¹⁰B(n,α)⁷Li reaction were registered in CR-39 film, chemically etched in a 30% KOH solution at 70 °C for 90 minutes. The results showed the usefulness of this technique for boron determination in the ppm range; the lower detection limit was 9 ppm. The combined track registration efficiency factor K was evaluated in the solutions for the CR-39 detector, and its value is K = (4.60 ± 0.06) × 10⁻⁴ cm. (Author) [pt

  14. Applying BI Techniques To Improve Decision Making And Provide Knowledge Based Management

    Directory of Open Access Journals (Sweden)

    Alexandra Maria Ioana FLOREA

    2015-07-01

    Full Text Available The paper focuses on BI techniques, and especially data mining algorithms, that can support and improve the decision-making process, with applications within the financial sector. We consider data mining techniques to be the more efficient approach, and thus applied several supervised and unsupervised learning algorithms. The case study in which these algorithms have been implemented regards the activity of a banking institution, with a focus on the management of lending activities.

  15. Introducer curving technique for the prevention of tilting of transfemoral Günther Tulip inferior vena cava filter.

    Science.gov (United States)

    Xiao, Liang; Huang, De-sheng; Shen, Jing; Tong, Jia-jie

    2012-01-01

    To determine whether the introducer curving technique is useful in decreasing the degree of tilting of transfemoral Tulip filters. The study sample consisted of 108 patients with deep vein thrombosis who were enrolled, planned to undergo thrombolysis, and accepted the transfemoral Tulip filter insertion procedure. The patients were randomly divided into Group C and Group T; the introducer curving technique was adopted in Group T. The post-implantation filter tilting angle (ACF) was measured in an anteroposterior projection. The retrieval hook adhering to the vascular wall was measured via tangential cavogram during retrieval. The overall average ACF was 5.8 ± 4.14 degrees: 7.1 ± 4.52 degrees in Group C and 4.4 ± 3.20 degrees in Group T, a statistically significant difference (t = 3.573, p = 0.001). Additionally, the difference in ACF between the left and right approaches was statistically significant (7.1 ± 4.59 vs. 5.1 ± 3.82, t = 2.301, p = 0.023). The proportion of severe tilt (ACF ≥ 10°) in Group T was significantly lower than that in Group C (9.3% vs. 24.1%, χ(2) = 4.267, p = 0.039). Between the groups, the difference in the rate of the retrieval hook adhering to the vascular wall was also statistically significant (2.9% vs. 24.2%, χ(2) = 5.030, p = 0.025). The introducer curving technique appears to minimize the incidence and extent of transfemoral Tulip filter tilting.

  16. Filter unit

    International Nuclear Information System (INIS)

    Shiba, Kazuo; Nagao, Koji; Akiyama, Toshio; Tanaka, Fumikazu; Osumi, Akira; Hirao, Yasuhiro.

    1997-01-01

    The filter unit is used by attaching it to a dustproof mask, for use in a radiation controlled area such as a nuclear power plant. The filter unit comprises sheet-like front and back filtering members disposed vertically in parallel, a spacer for keeping the filtering members at a predetermined distance, and front and back covering members for covering the two filtering members respectively. An electrostatic filter, prepared by applying resin-fabrication to a base sheet comprising 100% by weight of organic fibers as fiber components (for example, wool felt, synthetic fiber non-woven fabric, or wool and synthetic fiber blend non-woven fabric) and then electrifying the resin, is used for the filtering members. Residual ash can then be substantially or completely eliminated after the filters are burned. (I.N.)

  17. Spectral Matrix Filtering Applied to VSP Processing (Application du filtrage matriciel au traitement des profils sismiques verticaux)

    Directory of Open Access Journals (Sweden)

    Glangeaud F.

    2006-11-01

    Full Text Available The spectral matrix computed from VSP-trace transfer functions contains information about each wave making up the VSP data set. Using a filter based on the eigenvectors of the spectral matrix leads to a decomposition of the input traces into eigensections. The eigensections associated with the largest eigenvalues contain the contribution of the correlated seismic events; the signal space is defined as the sum of these eigensections. The other eigensections represent noise. When the different waves making up the VSP have very different amplitudes, decomposition of the input traces into eigensections leads to wave separation without any required knowledge of the apparent velocities of the waves. The limitations of wave separation by this multichannel filtering are a function of the scalar-product values of the waves (in the frequency domain) and of the relative wave amplitudes. Spectral matrix filtering can always be used to enhance the signal-to-noise ratio of VSP data, and the eigenvalues of the spectral matrix can be used to estimate the signal-to-noise ratio as a function of frequency. It is also possible to qualify the behavior of a VSP tool in a well and to detect resonant frequencies probably generated by poor coupling. Field data examples are shown. The first example shows data recorded in a vertical well, where converted shear waves are separated from upgoing and downgoing compressional waves using a spectral matrix filter; this field case shows the efficiency of the spectral matrix filter in extracting weak events. The second example shows data recorded in a highly deviated well, where events with very close apparent velocities are successfully separated by spectral matrix filtering.
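    The eigensection decomposition described in this record can be sketched numerically: build the inter-trace covariance matrix (a zero-lag stand-in for the frequency-domain spectral matrix), eigendecompose it, and project the data onto the dominant eigenvector. All data and parameters below are synthetic illustrations, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 8-trace section sharing one coherent wavelet plus incoherent noise.
n_traces, n_samples = 8, 256
t = np.arange(n_samples)
wavelet = np.sin(2 * np.pi * 0.05 * t) * np.exp(-0.5 * ((t - 128) / 20) ** 2)
amplitudes = rng.uniform(0.8, 1.2, n_traces)        # per-trace wave amplitude
data = np.outer(amplitudes, wavelet) + 0.3 * rng.standard_normal((n_traces, n_samples))

# Inter-trace covariance matrix (zero-lag stand-in for the spectral matrix).
C = data @ data.T / n_samples

# Eigendecomposition, sorted by descending eigenvalue.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# "Eigensection" of the largest eigenvalue: the correlated-event estimate.
v1 = eigvecs[:, :1]              # dominant subspace, shape (n_traces, 1)
signal_est = v1 @ (v1.T @ data)  # projection of the data onto that subspace
noise_est = data - signal_est    # remaining eigensections represent noise
```

    The ratio of the first eigenvalue to the rest also gives the frequency-independent analogue of the signal-to-noise estimate mentioned in the abstract.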

  18. Evaluation of exome filtering techniques for the analysis of clinically relevant genes.

    Science.gov (United States)

    Kernohan, Kristin D; Hartley, Taila; Alirezaie, Najmeh; Robinson, Peter N; Dyment, David A; Boycott, Kym M

    2018-02-01

    A significant challenge facing clinical translation of exome sequencing is meaningful and efficient variant interpretation. Each exome contains ∼500 rare coding variants; laboratories must systematically and efficiently identify which variant(s) contribute to the patient's phenotype. In silico filtering is an approach that reduces analysis time while decreasing the chances of incidental findings. We retrospectively assessed 55 solved exomes using available datasets as in silico filters: Online Mendelian Inheritance in Man (OMIM), Orphanet, Human Phenotype Ontology (HPO), and Radboudumc University Medical Center curated panels. We found that personalized panels produced using HPO terms for each patient had the highest success rate (100%), while producing considerably less variants to assess. HPO panels also captured multiple diagnoses in the same individual. We conclude that custom HPO-derived panels are an efficient and effective way to identify clinically relevant exome variants. © 2017 Wiley Periodicals, Inc.
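    The panel-filtering idea above reduces to a set-membership and rarity test per variant. The sketch below uses made-up variants and a hypothetical HPO-derived gene set; a real pipeline would derive the panel from HPO/OMIM phenotype-to-gene annotations.

```python
# Hypothetical variants and gene panel, for illustration only.
variants = [
    {"gene": "FBN1",  "impact": "missense",   "af": 0.00001},
    {"gene": "TTN",   "impact": "synonymous", "af": 0.002},
    {"gene": "BRCA2", "impact": "frameshift", "af": 0.0},
    {"gene": "ABCA4", "impact": "missense",   "af": 0.15},
]

hpo_panel = {"FBN1", "ABCA4"}   # genes linked to the patient's HPO terms
MAX_AF = 0.01                   # rare-variant allele-frequency cutoff

def panel_filter(variants, panel, max_af=MAX_AF):
    """Keep rare variants that fall inside the phenotype-derived panel."""
    return [v for v in variants if v["gene"] in panel and v["af"] <= max_af]

kept = panel_filter(variants, hpo_panel)
print([v["gene"] for v in kept])   # -> ['FBN1'] (ABCA4 is too common)
```

    Restricting analysis to the panel is what shrinks the ~500 rare coding variants per exome to a reviewable shortlist while avoiding off-phenotype incidental findings.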

  19. An auxiliary adaptive Gaussian mixture filter applied to flowrate allocation using real data from a multiphase producer

    Science.gov (United States)

    Lorentzen, Rolf J.; Stordal, Andreas S.; Hewitt, Neal

    2017-05-01

    Flowrate allocation in production wells is a complicated task, especially for multiphase flow combined with several reservoir zones and/or branches. The result depends heavily on the available production data and their accuracy. In the application we show here, downhole pressure and temperature data are available, in addition to the total flowrates at the wellhead. The developed methodology inverts these observations for the fluid flowrates (oil, water and gas) that enter two production branches in a real full-scale producer. A major challenge is accurate estimation of flowrates during rapid variations in the well, e.g. due to choke adjustments. The Auxiliary Sequential Importance Resampling (ASIR) filter was developed to handle such challenges by introducing an auxiliary step, in which the particle weights are recomputed (a second weighting step) based on how well the particles reproduce the observations. However, the ASIR filter suffers from a large computational time when the number of unknown parameters increases. The Gaussian Mixture (GM) filter combines a linear update with the particle filter's ability to capture non-Gaussian behavior, which makes it possible to achieve good performance with fewer model evaluations. In this work we present a new filter which combines the ASIR filter and the Gaussian Mixture filter (denoted ASGM), and demonstrate improved estimation (compared to the ASIR and GM filters) in cases with rapid parameter variations, while maintaining reasonable computational cost.
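    The auxiliary "second weighting" step of the ASIR filter can be illustrated on a toy scalar random-walk model. This is not the authors' well-flow model or their ASGM combination, just a minimal sketch of the auxiliary resampling idea with assumed noise levels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (NOT the paper's well model): a scalar "flowrate" x_k that
# random-walks and is observed with noise, y_k = x_k + v_k.
T, N = 50, 500                    # time steps, particles
q, r = 0.1, 0.5                   # process / observation noise std
x_true = 5.0 + np.cumsum(q * rng.standard_normal(T))
y = x_true + r * rng.standard_normal(T)

particles = 5.0 + rng.standard_normal(N)
est = np.empty(T)
for k in range(T):
    # Auxiliary step: pre-weight by how well each *predicted* particle
    # explains y[k], then resample -- the "second weighting" idea of ASIR.
    pred = particles                  # random-walk prediction mean
    aux = np.exp(-0.5 * ((y[k] - pred) / r) ** 2) + 1e-300
    idx = rng.choice(N, size=N, p=aux / aux.sum())
    # Propagate the selected particles and correct the importance weights.
    particles = pred[idx] + q * rng.standard_normal(N)
    w = np.exp(-0.5 * ((y[k] - particles) / r) ** 2) \
        / (np.exp(-0.5 * ((y[k] - pred[idx]) / r) ** 2) + 1e-300)
    w /= w.sum()
    est[k] = np.sum(w * particles)
    particles = particles[rng.choice(N, size=N, p=w)]

rmse = float(np.sqrt(np.mean((est - x_true) ** 2)))
```

    The pre-selection concentrates particles where the next observation is likely, which is what helps during abrupt changes such as choke adjustments; the GM/ASGM variants replace the pure resampling with a linear (Kalman-like) update to cut the number of model evaluations.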

  20. A Technique for Controlling Matric Suction on Filter Papers: Growth ...

    African Journals Online (AJOL)

    …tion (Figures 3 and 4). Water uptake was 17.9, 8.4 and 6.3% for CSH-9 and 26.6, 18 and 15% for ICSV-112, respectively, at matric suctions of 0, 0.1 and 10 kPa. Discussion: The start of germination was significantly delayed by an increase in filter paper matric suction for all cultivars for both ...

  1. Characterization of Airborne Particles Collected from Car Engine Air Filters Using SEM and EDX Techniques

    Science.gov (United States)

    Heredia Rivera, Birmania; Gerardo Rodriguez, Martín

    2016-01-01

    Particulate matter accumulated on car engine air-filters (CAFs) was examined in order to investigate the potential use of these devices as efficient samplers for collecting street level air that people are exposed to. The morphology, microstructure, and chemical composition of a variety of particles were studied using scanning electron microscopy (SEM) and energy-dispersive X-ray (EDX) spectroscopy. The particulate matter accumulated by the CAFs was studied in two categories; the first consisted of particles removed by friction, and the second of particles retained on the filters. Larger particles with a diameter of 74–10 µm were observed in the first category. In the second one, the detected particles had a diameter between 16 and 0.7 µm. These particles exhibited different morphologies and composition, indicating mostly a soil origin. The elemental composition revealed the presence of three groups: mineral (clay and asphalt), metallic (mainly Fe), and biological particles (vegetal and animal debris). The palynological analysis showed the presence of pollen grains associated with urban plants. These results suggest that CAFs capture a mixture of atmospheric particles, which can be analyzed in order to monitor urban air. Thus, the continuous availability of large numbers of filters and the retroactivity associated with the car routes suggest that these CAFs are very useful for studying the high traffic zones within a city. PMID:27706087

  2. Characterization of Airborne Particles Collected from Car Engine Air Filters Using SEM and EDX Techniques

    Directory of Open Access Journals (Sweden)

    Birmania Heredia Rivera

    2016-10-01

    Full Text Available Particulate matter accumulated on car engine air-filters (CAFs) was examined in order to investigate the potential use of these devices as efficient samplers for collecting street level air that people are exposed to. The morphology, microstructure, and chemical composition of a variety of particles were studied using scanning electron microscopy (SEM) and energy-dispersive X-ray (EDX) spectroscopy. The particulate matter accumulated by the CAFs was studied in two categories; the first consisted of particles removed by friction, and the second of particles retained on the filters. Larger particles with a diameter of 74–10 µm were observed in the first category. In the second one, the detected particles had a diameter between 16 and 0.7 µm. These particles exhibited different morphologies and composition, indicating mostly a soil origin. The elemental composition revealed the presence of three groups: mineral (clay and asphalt), metallic (mainly Fe), and biological particles (vegetal and animal debris). The palynological analysis showed the presence of pollen grains associated with urban plants. These results suggest that CAFs capture a mixture of atmospheric particles, which can be analyzed in order to monitor urban air. Thus, the continuous availability of large numbers of filters and the retroactivity associated with the car routes suggest that these CAFs are very useful for studying the high traffic zones within a city.

  3. Applying Communication Theories toward Designing Compliance-Gaining Techniques in Customer Dissatisfaction

    Directory of Open Access Journals (Sweden)

    Jonathan Matusitz

    2011-01-01

    Full Text Available The purpose of this paper is to apply three communication theories (namely, Argumentation Theory, the Foot-in-the-Door Technique, and the Door-in-the-Face Technique) to the formulation of complaints that communicate effectively to company employees and yield compensation for the consumer. The authors demonstrate that complaining is not a haphazard procedure if communication theories are applied properly. In addition, the importance of self-efficacy as a psychological component is emphasized, to illustrate the necessity for complainers to have sufficient and true self-confidence in order to carry out each of these theories in practice.

  4. Database 'catalogue of techniques applied to materials and products of nuclear engineering'

    International Nuclear Information System (INIS)

    Lebedeva, E.E.; Golovanov, V.N.; Podkopayeva, I.A.; Temnoyeva, T.A.

    2002-01-01

    The database 'Catalogue of techniques applied to materials and products of nuclear engineering' (IS MERI) was developed to provide informational support for SSC RF RIAR and other enterprises in scientific investigations. This database contains information on the techniques used at RF Minatom enterprises for reactor material properties investigation. The main purpose of this system consists in the assessment of the current status of the reactor material science experimental base for the further planning of experimental activities and methodical support improvement. (author)

  5. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    Science.gov (United States)

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded-up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, while taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched, and false matches were removed by the modified RANSAC. Global motion was estimated using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal-to-noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with the filtered motion parameters using modified adjacent frame compensation. The experimental results proved that the target images were stabilized even when the vibration amplitudes of the video became increasingly large.
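    The Kalman filter's role in such a pipeline, smoothing the estimated global-motion parameters so that only intentional motion remains, can be sketched on a 1-D translation track. The constant-velocity model and all noise parameters below are assumptions for illustration, not the paper's model or values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Jittery 1-D camera translation: a smooth pan plus hand-shake noise.
T = 100
intended = np.linspace(0.0, 50.0, T)                  # intentional motion
measured = intended + 2.0 * rng.standard_normal(T)    # estimated global motion

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])               # we observe position only
Q = np.diag([0.01, 0.01])                # process noise covariance
R = np.array([[4.0]])                    # measurement noise (jitter variance)

x = np.array([[measured[0]], [0.0]])     # state: [position, velocity]
P = np.eye(2)
smoothed = np.empty(T)
for k in range(T):
    x, P = F @ x, F @ P @ F.T + Q        # predict
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (measured[k] - H @ x)    # update with the new measurement
    P = (np.eye(2) - K @ H) @ P
    smoothed[k] = x[0, 0]

# Adjacent-frame compensation would shift each frame by (measured - smoothed).
compensation = measured - smoothed
```

    Applying the compensation offsets to the frames cancels the high-frequency jitter while preserving the smooth pan encoded in the filtered trajectory.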

  6. Biochar filters reduced the toxic effects of nickel on tomato (Lycopersicon esculentum L.) grown in nutrient film technique hydroponic system.

    Science.gov (United States)

    Mosa, Ahmed; El-Banna, Mostafa F; Gao, Bin

    2016-04-01

    This work used the nutrient film technique to evaluate the role of biochar filtration in reducing the toxic effects of nickel (Ni(2+)) on tomato growth. Three hydroponic treatments, T1 (control), T2 (with Ni(2+)), and T3 (with Ni(2+) and biochar), were used in the experiments. Scanning electron microscopy equipped with energy-dispersive X-ray spectroscopy, together with Fourier transform spectroscopy, was used to characterize the pre- and post-treatment biochar samples. The results illustrated that precipitation, ion exchange, and complexation with surface functional groups were the potential mechanisms of Ni(2+) removal by biochar. In comparison to the control, the T2 treatment showed severe Ni-stress with alterations in cell wall structure, distortions in the cell nucleus, disturbances in the mitochondrial system, malformations in stomatal structure, and abnormalities in chloroplast structure. The biochar filters in the T3 treatment reduced dysfunctions of cell organelles in root and shoot cells. Total chlorophyll concentration decreased by 41.6% in the T2 treatment; this reduction was only 20.8% under the protective effect of the biochar filters. The presence of Ni(2+) in the systems reduced the tomato fruit yield by 58.5% and 31.9% in T2 and T3, respectively. Nickel concentrations reached the toxic limit in roots, shoots, and fruits in T2, which was not observed in T3. Biochar filters in T3 also minimized the dramatic reductions in nutrient concentrations in roots, shoots, and fruits that occurred in T2 due to the severe Ni-stress. Findings from this work suggested that biochar filters can be used on farms as a safeguard for wastewater irrigation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Strategies and techniques of communication and public relations applied to non-profit sector

    Directory of Open Access Journals (Sweden)

    Ioana – Julieta Josan

    2010-05-01

    Full Text Available The aim of this paper is to summarize the strategies and techniques of communication and public relations applied to the non-profit sector. The approach of the paper is to identify the most appropriate strategies and techniques that the non-profit sector can use to accomplish its objectives, to highlight the specific differences between the strategies and techniques of the profit and non-profit sectors, and to identify potential communication and public relations actions that can increase visibility among the target audience, create brand awareness, and turn target perceptions of the non-profit sector into positive brand sentiment.

  8. Using the filter paper bridge technique for initiation of vitrocultures at maize

    Directory of Open Access Journals (Sweden)

    Cristian Felix BLIDAR

    2012-05-01

    Full Text Available Alongside wheat, maize is one of the most important cereal species, used for food and feed as well as in the bioethanol industry. As a result, maize is today in the spotlight of many researchers constantly trying to increase the productivity of this species, which is particularly important from the economic point of view. The main aim of this article is to investigate the efficiency of Blidar-type filter-paper bridges (BFPB) in initiating maize in vitro cultures on liquid culture media, in comparison with conventional agarized (solid) culture media. In these experiments a modified Murashige-Skoog (1962) culture medium (free of AIA and amino acids), supplemented or not with agar, was used. The inocula consisted of caryopses of Zea mays L. (hybrid Kiskun 4255). Based on the results of these experiments, it can be underlined that growth increased for vitroplants cultivated on liquid culture media provided with filter-paper bridges compared with those conventionally cultivated on agarized culture media: by 5.34% for dry weight and 356.09% for leaf length.

  9. The impact of applying product-modelling techniques in configurator projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Kristjansdottir, Katrin; Shafiee, Sara

    2018-01-01

    This paper aims to increase understanding of the impact of using product-modelling techniques to structure and formalise knowledge in configurator projects. Companies that provide customised products increasingly apply configurators in support of sales and design activities, reaping benefits… though extant literature has shown the importance of formal modelling techniques, the impact of utilising these techniques remains relatively unknown. Therefore, this article studies three main areas: (1) the impact of using modelling techniques based on Unified Modelling Language (UML), in which… ability to reduce the number of product variants. This paper contributes to an increased understanding of what companies can gain from using more formalised modelling techniques in configurator projects, and under what circumstances they should be used…

  10. Ceramic fiber reinforced filter

    Science.gov (United States)

    Stinton, David P.; McLaughlin, Jerry C.; Lowden, Richard A.

    1991-01-01

    A filter for removing particulate matter from high temperature flowing fluids, and in particular gases, that is reinforced with ceramic fibers. The filter has a ceramic base fiber material in the form of a fabric, felt, paper or the like, with the refractory fibers thereof coated with a thin layer of a protective and bonding refractory applied by chemical vapor deposition techniques. This coating causes each fiber to be physically joined to adjoining fibers so as to prevent movement of the fibers during use and to increase the strength and toughness of the composite filter. Further, the coating can be selected to minimize any reactions between the constituents of the fluids and the fibers. A description is given of the formation of a composite filter using a felt preform of commercial silicon carbide fibers together with the coating of these fibers with pure silicon carbide. Filter efficiency approaching 100% has been demonstrated with these filters. The fiber base material is alternately made from aluminosilicate fibers, zirconia fibers or alumina fibers. Coating with Al₂O₃ is also described. Advanced configurations for the composite filter are suggested.

  11. Testing the Feasibility of Using PERM to Apply Scattering-Angle Filtering in the Image-Domain for FWI Applications

    KAUST Repository

    Alzahrani, Hani Ataiq

    2014-09-01

    Full Waveform Inversion (FWI) is a non-linear optimization problem aimed at estimating subsurface parameters by minimizing the misfit between modeled and recorded seismic data using gradient descent methods, which are the only practical choice because of the size of the problem. Due to the high non-linearity of the problem, gradient methods will converge to a local minimum if the starting model is not close to the true one. The accuracy of the long-wavelength components of the initial model controls the level of non-linearity of the inversion. In order for FWI to converge to the global minimum, we have to obtain the long-wavelength components of the model before inverting for the short wavelengths. Ultra-low temporal frequencies are sensitive to the smooth (long wavelength) part of the model, and can be utilized by waveform inversion to resolve that part. Unfortunately, frequencies in this range are normally missing in field data due to data-acquisition limitations. The lack of low frequencies can be compensated for by utilizing wide-aperture data, as they include arrivals that are especially sensitive to the long-wavelength components of the model. The higher the scattering angle of a recorded event, the higher the model wavelength it can resolve. Based on this property, a scattering-angle filtering algorithm is proposed to start the inversion process with events corresponding to the highest scattering angle available in the data, and then include lower scattering angles progressively. The large scattering angles will resolve the smooth part of the model and reduce the non-linearity of the problem, then the lower ones will enhance the resolution of the model. Recorded data is first migrated using Pre-stack Exploding Reflector Migration (PERM), then the resulting pre-stack image is transformed into angle gathers to which

  12. Recommendations for learners are different: Applying memory-based recommender system techniques to lifelong learning

    NARCIS (Netherlands)

    Drachsler, Hendrik; Hummel, Hans; Koper, Rob

    2007-01-01

    Drachsler, H., Hummel, H. G. K., & Koper, R. (2007). Recommendations for learners are different: applying memory-based recommender system techniques to lifelong learning. Paper presented at the SIRTEL workshop at the EC-TEL 2007 Conference. September, 17-20, 2007, Crete, Greece.

  13. English Language Teachers' Perceptions on Knowing and Applying Contemporary Language Teaching Techniques

    Science.gov (United States)

    Sucuoglu, Esen

    2017-01-01

    The aim of this study is to determine the perceptions of English language teachers teaching at a preparatory school in relation to their knowing and applying contemporary language teaching techniques in their lessons. An investigation was conducted of 21 English language teachers at a preparatory school in North Cyprus. The SPSS statistical…

  14. New digital demodulator with matched filters and curve segmentation techniques for BFSK demodulation: Analytical description

    Directory of Open Access Journals (Sweden)

    Jorge Torres Gómez

    2015-09-01

    Full Text Available The present article concerns the digital demodulation of Binary Frequency Shift Keying (BFSK). The objective of this research is to obtain a new processing method for demodulating BFSK signals that reduces hardware complexity in comparison with other reported methods. The solution proposed here makes use of matched filter theory and curve segmentation algorithms. This paper describes the integration and configuration of a sampler-correlator and curve segmentation blocks in order to obtain a digital receiver that properly demodulates the received signal; the proposed solution is shown to strongly reduce hardware complexity. This part presents the analytical description of the proposed solution and covers in detail the elements needed to properly configure the system. A second part presents the FPGA implementation of the system and simulation results that validate the overall performance.
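    A matched-filter BFSK receiver of the kind discussed can be sketched as a pair of correlators, one per tone, with a per-bit energy comparison. The sample rate, baud rate and tone frequencies below are illustrative choices; the paper's sampler-correlator and curve-segmentation blocks are not modeled.

```python
import numpy as np

# Illustrative parameters (not from the paper).
FS = 8000.0                  # sample rate, Hz
BAUD = 100                   # bits per second
F0, F1 = 1000.0, 2000.0      # tone frequencies for bits 0 and 1
SPB = int(FS / BAUD)         # samples per bit (80)

t = np.arange(SPB) / FS
ref0 = np.exp(-2j * np.pi * F0 * t)   # matched filters: correlating against
ref1 = np.exp(-2j * np.pi * F1 * t)   # exp(-j*2*pi*f*t) measures tone energy

def modulate(bits):
    """Continuous-phase BFSK modulator used to build a test signal."""
    phase, out = 0.0, []
    for b in bits:
        f = F1 if b else F0
        out.append(np.sin(2 * np.pi * f * t + phase))
        phase += 2 * np.pi * f * SPB / FS
    return np.concatenate(out)

def demodulate(signal):
    """Per-bit correlator bank: decide for the tone with more energy."""
    bits = []
    for k in range(len(signal) // SPB):
        chunk = signal[k * SPB:(k + 1) * SPB]
        e0 = abs(np.dot(chunk, ref0))
        e1 = abs(np.dot(chunk, ref1))
        bits.append(1 if e1 > e0 else 0)
    return bits

rng = np.random.default_rng(3)
tx = [int(b) for b in rng.integers(0, 2, 64)]
rx = modulate(tx) + 0.3 * rng.standard_normal(64 * SPB)   # noisy channel
```

    With an integer number of cycles per bit (10 and 20 here) the two references are orthogonal over a bit period, so the correlator outputs separate cleanly even in noise.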

  15. Statistical Techniques Used in Three Applied Linguistics Journals: "Language Learning,""Applied Linguistics" and "TESOL Quarterly," 1980-1986: Implications for Readers and Researchers.

    Science.gov (United States)

    Teleni, Vicki; Baldauf, Richard B., Jr.

    A study investigated the statistical techniques used by applied linguists and reported in three journals, "Language Learning,""Applied Linguistics," and "TESOL Quarterly," between 1980 and 1986. It was found that 47% of the published articles used statistical procedures. In these articles, 63% of the techniques used could be called basic, 28%…

  16. Investigation of the shear bond strength to dentin of universal adhesives applied with two different techniques

    Directory of Open Access Journals (Sweden)

    Elif Yaşa

    2017-09-01

    Full Text Available Objective: The aim of this study was to evaluate the shear bond strength to dentin of universal adhesives applied with self-etch and etch&rinse techniques. Materials and Method: Forty-eight sound extracted human third molars were used in this study. Occlusal enamel was removed in order to expose the dentinal surface, and the surface was flattened. Specimens were randomly divided into four groups and were sectioned vestibulo-lingually using a diamond disc. The universal adhesives All Bond Universal (Groups 1a and 1b), Gluma Bond Universal (Groups 2a and 2b) and Single Bond Universal (Groups 3a and 3b) were applied onto the tooth specimens either with the self-etch technique (a) or with the etch&rinse technique (b) according to the manufacturers' instructions. Clearfil SE Bond (Group 4a; self-etch) and Optibond FL (Group 4b; etch&rinse) were used as control groups. The specimens were then restored with a nanohybrid composite resin (Filtek Z550). After thermocycling, the shear bond strength test was performed with a universal test machine at a crosshead speed of 0.5 mm/min. Fracture analysis was done under a stereomicroscope (×40 magnification). Data were analyzed using two-way ANOVA and post-hoc Tukey tests. Results: Statistical analysis showed significant differences in shear bond strength values between the universal adhesives (p<0.05). Significantly higher bond strength values were observed in the self-etch groups (a) in comparison to the etch&rinse groups (b) (p<0.05). Among all groups, Single Bond Universal showed the greatest shear bond strength values, whereas All Bond Universal showed the lowest shear bond strength values with both application techniques. Conclusion: The dentin bond strengths of universal adhesives applied with different techniques may vary depending on the adhesive material. For the universal bonding agents tested in this study, the etch&rinse technique negatively affected the bond strength to dentin.

  17. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    Full Text Available A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of and interaction between normal and sterile E. saccharina moths in a temporally variable, but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models have been formulated in the past describing the sterile insect technique, few of these models describe the technique for Lepidopteran species with more than one life stage and where F1-sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
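    A difference-equation model of sterile-release suppression can be sketched generically: wild-population growth is scaled by the fraction of matings that involve a fertile partner. The Beverton-Holt density dependence and every parameter below are illustrative assumptions, not the E. saccharina model of the paper (which tracks multiple life stages and F1-sterility).

```python
# Illustrative parameters, not fitted to E. saccharina.
R0 = 5.0          # per-generation growth factor
K = 10000.0       # carrying capacity (moths)
S = 20000.0       # sterile males released each generation

def next_generation(N, sterile=0.0):
    """One generation: growth is scaled by the fertile-mating fraction."""
    mating_success = N / (N + sterile) if (N + sterile) > 0 else 0.0
    # Beverton-Holt density dependence keeps the wild population near K.
    return R0 * mating_success * N / (1.0 + (R0 - 1.0) * N / K)

N_no_sit = N_sit = 1000.0
for _ in range(30):
    N_no_sit = next_generation(N_no_sit)
    N_sit = next_generation(N_sit, sterile=S)

print(round(N_no_sit), round(N_sit))   # -> 10000 0
```

    Without releases the population settles at the carrying capacity; with a sufficiently high sterile:wild ratio the effective growth factor drops below one and the infestation collapses, which is the release-ratio question the decision support addresses.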

  18. Renormalization techniques applied to the study of density of states in disordered systems

    International Nuclear Information System (INIS)

    Ramirez Ibanez, J.

    1985-01-01

    A general scheme for real space renormalization of formal scattering theory is presented and applied to the calculation of density of states (DOS) in some finite width systems. This technique is extended in a self-consistent way, to the treatment of disordered and partially ordered chains. Numerical results of moments and DOS are presented in comparison with previous calculations. In addition, a self-consistent theory for the magnetic order problem in a Hubbard chain is derived and a parametric transition is observed. Properties of localization of the electronic states in disordered chains are studied through various decimation averaging techniques and using numerical simulations. (author) [pt

  19. Speech enhancement with multichannel Wiener filter techniques in multimicrophone binaural hearing aids.

    Science.gov (United States)

    Van den Bogaert, Tim; Doclo, Simon; Wouters, Jan; Moonen, Marc

    2009-01-01

    This paper evaluates speech enhancement in binaural multimicrophone hearing aids by noise reduction algorithms based on the multichannel Wiener filter (MWF) and the MWF with partial noise estimate (MWF-N). Both algorithms are specifically developed to combine noise reduction with the preservation of binaural cues. Objective and perceptual evaluations were performed with different speech-in-multitalker-babble configurations in two different acoustic environments. The main conclusions are as follows: (a) A bilateral MWF with perfect voice activity detection equals or outperforms a bilateral adaptive directional microphone in terms of speech enhancement while preserving the binaural cues of the speech component. (b) A significant gain in speech enhancement is found when transmitting one contralateral microphone signal to the MWF active at the ipsilateral hearing aid. Adding a second contralateral microphone showed a significant improvement during the objective evaluations but not in the subset of scenarios tested during the perceptual evaluations. (c) Adding the partial noise estimate to the MWF, done to improve the spatial awareness of the hearing aid user, reduces the amount of speech enhancement in a limited way. In some conditions the MWF-N even outperformed the MWF possibly due to an improved spatial release from masking.
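    For readers unfamiliar with the MWF itself, a toy single-frequency-bin numpy sketch is given below. It uses synthetic data and an assumed rank-1 speech model, not the authors' hearing-aid implementation: the filter is w = Ryy⁻¹(Ryy − Rnn)e₁ applied to the reference microphone.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 5000                          # microphones, time frames
a = rng.normal(size=M) + 1j * rng.normal(size=M)   # acoustic transfer (unknown in practice)
s = rng.normal(size=N)                  # speech component in this bin (simplified)
v = 0.5 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))  # noise
y = np.outer(a, s) + v                  # microphone signals

Ryy = y @ y.conj().T / N                # speech-plus-noise covariance
Rnn = v @ v.conj().T / N                # noise covariance (noise-only frames in practice)
e1 = np.zeros(M); e1[0] = 1.0           # selects the reference microphone
w = np.linalg.solve(Ryy, (Ryy - Rnn) @ e1)   # MWF: w = Ryy^-1 (Ryy - Rnn) e1
s_hat = w.conj() @ y                    # enhanced reference-channel speech estimate

def snr(sig, noise):
    return 10 * np.log10(np.mean(np.abs(sig) ** 2) / np.mean(np.abs(noise) ** 2))

print("SNR gain (dB):",
      round(snr(w.conj() @ np.outer(a, s), w.conj() @ v) - snr(a[0] * s, v[0]), 1))
```

The MWF-N variant discussed in the abstract mixes a scaled noise reference back into s_hat to preserve interaural cues; that step is omitted here.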

  20. Image super-resolution reconstruction based on regularization technique and guided filter

    Science.gov (United States)

    Huang, De-tian; Huang, Wei-qin; Gu, Pei-ting; Liu, Pei-zhong; Luo, Yan-min

    2017-06-01

    In order to improve the accuracy of sparse representation coefficients and the quality of reconstructed images, an improved image super-resolution algorithm based on sparse representation is presented. In the sparse coding stage, the autoregressive (AR) regularization and the non-local (NL) similarity regularization are introduced to improve the sparse coding objective function. A group of AR models which describe the image local structures are pre-learned from the training samples, and one or several suitable AR models can be adaptively selected for each image patch to regularize the solution space. Then, the image non-local redundancy is obtained by the NL similarity regularization to preserve edges. In the process of computing the sparse representation coefficients, the feature-sign search algorithm is utilized instead of the conventional orthogonal matching pursuit algorithm to improve the accuracy of the sparse coefficients. To restore image details further, a global error compensation model based on weighted guided filter is proposed to realize error compensation for the reconstructed images. Experimental results demonstrate that compared with Bicubic, L1SR, SISR, GR, ANR, NE + LS, NE + NNLS, NE + LLE and A + (16 atoms) methods, the proposed approach has remarkable improvement in peak signal-to-noise ratio, structural similarity and subjective visual perception.
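    As background for the guided-filter step, a minimal 1-D guided filter (after He et al.) is sketched below; the paper's weighted 2-D variant and its error-compensation model are more elaborate, so this shows only the edge-preserving smoothing primitive on synthetic data.

```python
import numpy as np

def box(x, r):
    # mean over a window of radius r (edge-padded), via cumulative sums
    xp = np.pad(x, r, mode="edge")
    c = np.cumsum(np.concatenate(([0.0], xp)))
    return (c[2 * r + 1:] - c[:-(2 * r + 1)]) / (2 * r + 1)

def guided_filter(I, p, r=4, eps=1e-2):
    """Filter signal p using guidance I: locally q = a*I + b."""
    mI, mp = box(I, r), box(p, r)
    cov = box(I * p, r) - mI * mp
    var = box(I * I, r) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

edge = np.repeat([0.0, 1.0], 50)                       # clean step edge as guidance
noisy = edge + 0.1 * np.random.default_rng(2).normal(size=100)
out = guided_filter(edge, noisy)
print(np.abs(out - edge).mean() < np.abs(noisy - edge).mean())  # noise reduced
```

In the flat regions a ≈ 0 and the filter averages; across the step var is large, a ≈ 1, and the edge passes through — the property that makes it useful for compensating reconstruction error without blurring edges.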

  1. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    OpenAIRE

    Amany AlShawi

    2016-01-01

    Presently, the popularity of cloud computing is gradually increasing day by day. The purpose of this research was to enhance the security of the cloud using techniques such as data mining with specific reference to the single cache system. From the findings of the research, it was observed that the security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers...

  2. Best Available Technique (BAT) assessment applied to ACR-1000 waste and heavy water management systems

    International Nuclear Information System (INIS)

    Sachar, M.; Julien, S.; Hau, K.

    2010-01-01

    The ACR-1000 design is the next evolution of the proven CANDU reactor design. One of the key objectives for this project was to systematically apply the As Low As Reasonably Achievable (ALARA) principle to the reactor design. The ACR design team selected the Best Available Technique (BAT) assessment for this purpose, to document decisions made during the design of each of the ACR-1000 waste and heavy water management systems. This paper describes the steps in the BAT assessment that have been applied to the ACR-1000 design. (author)

  3. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    Science.gov (United States)

    Hernandez, Wilmar

    2007-01-01

    In this paper a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. A comparison between classical filters and optimal filters for automotive sensors is presented, and the current state of the art of applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is illustrated through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight, because there are some open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.

  4. An heuristic filtering tool to identify phenotype-associated genetic variants applied to human intellectual disability and canine coat colors.

    Science.gov (United States)

    Broeckx, Bart J G; Coopman, Frank; Verhoeven, Geert; Bosmans, Tim; Gielen, Ingrid; Dingemanse, Walter; Saunders, Jimmy H; Deforce, Dieter; Van Nieuwerburgh, Filip

    2015-11-19

    Identification of one or several disease causing variant(s) from the large collection of variants present in an individual is often achieved by the sequential use of heuristic filters. The recent development of whole exome sequencing enrichment designs for several non-model species created the need for a species-independent, fast and versatile analysis tool, capable of tackling a wide variety of standard and more complex inheritance models. With this aim, we developed "Mendelian", an R-package that can be used for heuristic variant filtering. The R-package Mendelian offers fast and convenient filters to analyze putative variants for both recessive and dominant models of inheritance, with variable degrees of penetrance and detectance. Analysis of trios is supported. Filtering against variant databases and annotation of variants is also included. This package is not species specific and supports parallel computation. We validated this package by reanalyzing data from a whole exome sequencing experiment on intellectual disability in humans. In a second example, we identified the mutations responsible for coat color in the dog. This is the first example of whole exome sequencing without prior mapping in the dog. We developed an R-package that enables the identification of disease-causing variants from the long list of variants called in sequencing experiments. The software and a detailed manual are available at https://github.com/BartBroeckx/Mendelian.
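    The heuristic-filter idea is easy to see in miniature. The sketch below is a hypothetical Python analogue (toy genotype records, not the Mendelian package's API) of a recessive-model filter: keep variants where every case is homozygous for the alternate allele and no control is.

```python
# Toy recessive-model heuristic filter; record layout and sample names
# are invented for illustration.
variants = [
    {"id": "chr1:1001", "genotypes": {"case1": "1/1", "case2": "1/1", "ctrl1": "0/1"}},
    {"id": "chr2:2002", "genotypes": {"case1": "0/1", "case2": "1/1", "ctrl1": "0/0"}},
    {"id": "chr3:3003", "genotypes": {"case1": "1/1", "case2": "1/1", "ctrl1": "1/1"}},
]
cases, controls = ["case1", "case2"], ["ctrl1"]

def recessive_filter(vs, cases, controls):
    hom_alt = lambda g: g == "1/1"     # homozygous for the alternate allele
    return [v["id"] for v in vs
            if all(hom_alt(v["genotypes"][c]) for c in cases)
            and not any(hom_alt(v["genotypes"][c]) for c in controls)]

print(recessive_filter(variants, cases, controls))  # → ['chr1:1001']
```

A dominant model, incomplete penetrance, or trio analysis amounts to swapping the two predicates, which is essentially the flexibility the package advertises.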

  5. Just-in-Time techniques as applied to hazardous materials management

    OpenAIRE

    Spicer, John S.

    1996-01-01

    Approved for public release; distribution is unlimited This study investigates the feasibility of integrating JIT techniques in the context of hazardous materials management. This study provides a description of JIT, a description of environmental compliance issues and the outgrowth of related HAZMAT policies, and a broad perspective on strategies for applying JIT to HAZMAT management. http://archive.org/details/justintimetechn00spic Lieutenant Commander, United States Navy

  6. Compact Liquid Crystal Based Tunable Band-Stop Filter with an Ultra-Wide Stopband by Using Wave Interference Technique

    Directory of Open Access Journals (Sweden)

    Longzhu Cai

    2017-01-01

    Full Text Available A wave interference filtering section that consists of three stubs of different lengths, each with an individual stopband of its own central frequency, is reported here for the design of band-stop filters (BSFs) with ultra-wide and sharp stopbands as well as large attenuation characteristics. The superposition of the individual stopbands provides coverage over an ultra-wide frequency range. Equations and guidelines are presented for the application of a new wave interference technique to adjust the rejection level and width of the stopband. Based on that, an electrically tunable ultra-wide stopband BSF using a liquid crystal (LC) material for ultra-wideband (UWB) applications is designed. Careful treatment of the bent stubs, including impedance matching of the main microstrip line and bent stubs together with that of the SMA connectors and impedance adaptors, was carried out for compactness and minimum insertion and reflection losses. The experimental results of the fabricated device agree very well with those of the simulation. The measured centre rejection frequency can be tuned between 4.434 and 4.814 GHz when a bias voltage of 0–20 Vrms is used. The 3 dB and 25 dB stopband bandwidths were 4.86 GHz and 2.51 GHz, respectively, which are larger than those of other recently reported LC based tunable BSFs.
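    The stub lengths behind such a design follow from the standard quarter-wave null condition f₀ = c/(4L√εeff). The numbers below are illustrative only — the paper does not give its stub dimensions or substrate permittivity, so both are assumptions here.

```python
C0 = 3.0e8        # speed of light in vacuum, m/s

def stub_null_ghz(length_mm, eps_eff=2.6):
    """Transmission-null frequency of an open quarter-wave stub, in GHz.
    eps_eff (effective microstrip permittivity) is an assumed value."""
    return C0 / (4.0 * length_mm * 1e-3 * eps_eff ** 0.5) / 1e9

for L_mm in (9.0, 10.0, 11.0):    # three hypothetical staggered stub lengths
    print(f"{L_mm} mm stub -> null near {stub_null_ghz(L_mm):.2f} GHz")
```

Three slightly different lengths give three staggered nulls; their superposition is what widens the composite stopband, and tuning εeff (here, via the LC bias voltage) shifts all the nulls together.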

  7. Applying Squeezing Technique to Clayrocks: Lessons Learned from Experiments at Mont Terri Rock Laboratory

    International Nuclear Information System (INIS)

    Fernandez, A. M.; Sanchez-Ledesma, D. M.; Tournassat, C.; Melon, A.; Gaucher, E.; Astudillo, E.; Vinsot, A.

    2013-01-01

    Knowledge of the pore water chemistry in clay rock formations plays an important role in determining radionuclide migration in the context of nuclear waste disposal. Among the different in situ and ex situ techniques for pore water sampling in clay sediments and soils, the squeezing technique dates back 115 years. Although different studies have been performed on the reliability and representativeness of squeezed pore waters, most of them were carried out on high porosity, high water content and unconsolidated clay sediments. Very few tackled the analysis of squeezed pore water from low-porosity, low water content and highly consolidated clay rocks. In this work, a specially designed and fabricated one-dimensional compression cell with two-directional fluid flow was used to extract and analyse the pore water composition of Opalinus Clay core samples from Mont Terri (Switzerland). The reproducibility of the technique is good and no ionic ultrafiltration, chemical fractionation or anion exclusion was found in the range of pressures analysed: 70-200 MPa. Pore waters extracted in this range of pressures show no decrease in concentration, such as would indicate dilution of the free pore water by mixing with the outer layers of double layer water (Donnan water). A threshold (safety) squeezing pressure of 175 MPa was established for avoiding membrane effects (ion filtering, anion exclusion, etc.) from clay particles induced by increasing pressures. Besides, the pore waters extracted at these pressures are representative of the Opalinus Clay formation, as shown by direct comparison against in situ collected borehole waters. (Author)

  9. Treatment of breast cancer with simultaneous integrated boost in hybrid plan technique. Influence of flattening filter-free beams

    Energy Technology Data Exchange (ETDEWEB)

    Bahrainy, Marzieh; Kretschmer, Matthias; Joest, Vincent; Kasch, Astrid; Wuerschmidt, Florian; Dahle, Joerg; Lorenzen, Joern [Radiologische Allianz, Hamburg (Germany)

    2016-05-15

    The present study compares in silico treatment plans using the hybrid plan technique for hypofractionated radiation of mammary carcinoma with simultaneous integrated boost (SIB). The influence of 6 MV photon radiation in flattening filter free (FFF) mode against the clinical standard flattening filter (FF) mode is examined. RT planning took place with FF and FFF radiation plans for 10 left-sided breast cancer patients. Hybrid plans were realised with two tangential IMRT fields and one VMAT field. The dose prescription was in line with the guidelines in the ARO-2010-01 study. The dosimetric verification took place with a manufacturer-independent measurement system. Required dose prescriptions for the planning target volumes (PTV) were achieved for both groups. The average dose values of the ipsi- and contralateral lung and the heart did not differ significantly. Overall average incidental doses to the left anterior descending artery (LAD) of 8.24 ± 3.9 Gy in the FFF group and 9.05 ± 3.7 Gy in the FF group (p < 0.05) were found. The dosimetric verifications corresponded to the clinical requirements. FFF-based RT plans reduced the average treatment time by 17 s/fraction. In comparison to the FF-based hybrid plan technique, the FFF mode allows further reduction of the average LAD dose for comparable target volume coverage without adverse low-dose exposure of contralateral structures. The combination of hybrid plan technique and 6 MV photon radiation in the FFF mode is suitable for use with hypofractionated dose schemes. The increased dose rate allows a substantial reduction of treatment time and thus beneficial application of the deep inspiration breath hold technique. (orig.)

  10. Comparison between different techniques applied to quartz CPO determination in granitoid mylonites

    Science.gov (United States)

    Fazio, Eugenio; Punturo, Rosalda; Cirrincione, Rosolino; Kern, Hartmut; Wenk, Hans-Rudolph; Pezzino, Antonino; Goswami, Shalini; Mamtani, Manish

    2016-04-01

    Since the second half of the last century, several techniques have been adopted to resolve the crystallographic preferred orientation (CPO) of major minerals constituting crustal and mantle rocks. To this aim, many efforts have been made to increase the accuracy of such analytical devices as well as to progressively reduce the time needed to perform microstructural analysis. It is worth noting that many of these microstructural studies deal with quartz CPO because of the wide occurrence of this mineral phase in crustal rocks as well as its quite simple chemical composition. In the present work, four different techniques were applied to define CPOs of dynamically recrystallized quartz domains from naturally deformed rocks collected from a ductile crustal-scale shear zone, in order to compare their advantages and limitations. The selected Alpine shear zone is located in the Aspromonte Massif (Calabrian Peloritani Orogen, southern Italy) and comprises granitoid lithotypes. The adopted methods span from the "classical" universal stage (US) to the image analysis technique (CIP), electron back-scattered diffraction (EBSD), and time-of-flight neutron diffraction (TOF). When compared, bulk texture pole figures obtained by means of these different techniques show a good correlation. Advances in analytical techniques used for microstructural investigations are outlined by discussing the results of quartz CPO presented in this study.

  11. Wire-mesh and ultrasound techniques applied for the characterization of gas-liquid slug flow

    Energy Technology Data Exchange (ETDEWEB)

    Ofuchi, Cesar Y.; Sieczkowski, Wytila Chagas; Neves Junior, Flavio; Arruda, Lucia V.R.; Morales, Rigoberto E.M.; Amaral, Carlos E.F.; Silva, Marco J. da [Federal University of Technology of Parana, Curitiba, PR (Brazil)], e-mails: ofuchi@utfpr.edu.br, wytila@utfpr.edu.br, neves@utfpr.edu.br, lvrarruda@utfpr.edu.br, rmorales@utfpr.edu.br, camaral@utfpr.edu.br, mdasilva@utfpr.edu.br

    2010-07-01

    Gas-liquid two-phase flows are found in a broad range of industrial applications, such as chemical, petrochemical and nuclear industries, and quite often determine the efficiency and safety of processes and plants. Several experimental techniques have been proposed and applied to measure and quantify two-phase flows so far. In this experimental study the wire-mesh sensor and an ultrasound technique are used and comparatively evaluated to study two-phase slug flows in horizontal pipes. The wire-mesh sensor is an imaging technique and thus appropriate for scientific studies, while the ultrasound-based technique is robust and non-intrusive and hence well suited for industrial applications. Based on the measured raw data it is possible to extract some specific slug flow parameters of interest, such as mean void fraction and characteristic frequency. The experiments were performed in the Thermal Sciences Laboratory (LACIT) at UTFPR, Brazil, in which an experimental two-phase flow loop is available. The experimental flow loop comprises a horizontal acrylic pipe of 26 mm diameter and 9 m length. Water and air were used to produce the two-phase flow under controlled conditions. The results show good agreement between the techniques. (author)
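    The two slug-flow parameters named above can be extracted from wire-mesh-style raw data in a few lines. The sketch below runs on a synthetic 8×8 void-fraction movie with an imposed 1.5 Hz slugging signal; the frame rate and geometry are assumed, not taken from the rig described here.

```python
import numpy as np

fs = 100.0                                   # frame rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)
slug = 0.5 + 0.4 * np.sign(np.sin(2 * np.pi * 1.5 * t))      # 1.5 Hz slugging
rng = np.random.default_rng(3)
frames = slug[:, None, None] + 0.05 * rng.normal(size=(t.size, 8, 8))

alpha = frames.mean()                        # mean void fraction
series = frames.mean(axis=(1, 2))            # area-averaged void-fraction series
spec = np.abs(np.fft.rfft(series - series.mean()))
freq = np.fft.rfftfreq(series.size, 1 / fs)
f_slug = freq[spec.argmax()]                 # characteristic slug frequency

print(round(float(alpha), 2), float(f_slug))  # ~0.5 and 1.5 Hz
```

The same two reductions (time-space mean; spectral peak of the area-averaged signal) apply equally to the ultrasound signal, which is what makes the two instruments directly comparable.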

  12. Machine learning techniques applied to the determination of road suitability for the transportation of dangerous substances.

    Science.gov (United States)

    Matías, J M; Taboada, J; Ordóñez, C; Nieto, P G

    2007-08-17

    This article describes a methodology to model the degree of remedial action required to make short stretches of a roadway suitable for dangerous goods transport (DGT), particularly pollutant substances, using different variables associated with the characteristics of each segment. Thirty-one factors determining the impact of an accident on a particular stretch of road were identified and subdivided into two major groups: accident probability factors and accident severity factors. Given the number of factors determining the state of a particular road segment, the only viable statistical methods for implementing the model were machine learning techniques, such as multilayer perceptron networks (MLPs), classification trees (CARTs) and support vector machines (SVMs). The results produced by these techniques on a test sample were more favourable than those produced by traditional discriminant analysis, irrespective of whether dimensionality reduction techniques were applied. The best results were obtained using SVMs specifically adapted to ordinal data. This technique takes advantage of the ordinal information contained in the data without penalising the computational load. Furthermore, the technique permits the estimation of the utility function that is latent in expert knowledge.
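    The "SVMs specifically adapted to ordinal data" rely on a standard reduction (often attributed to Frank and Hall) from one ordinal target to K−1 binary "is y > k" problems. Whether the authors used exactly this reduction is not stated, but the encoding itself is easy to show:

```python
labels = [0, 2, 1, 2, 0, 1]    # ordinal classes: degree of remedial action

def to_binary_targets(y, n_classes=3):
    """Encode each ordinal label as K-1 binary answers to 'is y > k?'."""
    return [[int(v > k) for k in range(n_classes - 1)] for v in y]

def from_binary_preds(rows):
    """Decode: the predicted class is the number of thresholds exceeded."""
    return [sum(r) for r in rows]

enc = to_binary_targets(labels)
print(enc)                     # [[0, 0], [1, 1], [1, 0], [1, 1], [0, 0], [1, 0]]
print(from_binary_preds(enc))  # [0, 2, 1, 2, 0, 1] -- round-trips exactly
```

Training one binary classifier per threshold keeps the ordering information without inflating the computational load, which is the advantage the abstract attributes to the ordinal SVM.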

  13. Super-ensemble techniques applied to wave forecast: performance and limitations

    Directory of Open Access Journals (Sweden)

    F. Lenartz

    2010-06-01

    Full Text Available Nowadays, several operational ocean wave forecasts are available for a same region. These predictions may considerably differ, and to choose the best one is generally a difficult task. The super-ensemble approach, which consists in merging different forecasts and past observations into a single multi-model prediction system, is evaluated in this study. During the DART06 campaigns organized by the NATO Undersea Research Centre, four wave forecasting systems were simultaneously run in the Adriatic Sea, and significant wave height was measured at six stations as well as along the tracks of two remote sensors. This effort provided the necessary data set to compare the skills of various multi-model combination techniques. Our results indicate that a super-ensemble based on the Kalman Filter improves the forecast skills: The bias during both the hindcast and forecast periods is reduced, and the correlation coefficient is similar to that of the best individual model. The spatial extrapolation of local results is not straightforward and requires further investigation to be properly implemented.
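    A Kalman-filter super-ensemble of the kind evaluated here can be sketched by treating the combination weights as the state vector and each wave-height observation as a measurement. The sketch below uses synthetic forecasts and assumed noise covariances, not the DART06 data.

```python
import numpy as np

rng = np.random.default_rng(1)
T, M = 200, 3
true_w = np.array([0.7, 0.2, 0.1])                     # hidden "best" combination
forecasts = rng.normal(2.0, 0.5, size=(T, M))          # three models' wave heights
obs = forecasts @ true_w + rng.normal(0.0, 0.05, size=T)

w = np.full(M, 1.0 / M)           # start from equal weights
P = np.eye(M)                     # weight-estimate covariance
Q = 1e-6 * np.eye(M)              # process noise (assumed: near-static weights)
R = 0.05 ** 2                     # observation-noise variance (assumed)

for h, y in zip(forecasts, obs):
    P = P + Q                     # predict (weights drift slowly, if at all)
    S = h @ P @ h + R             # innovation variance
    K = P @ h / S                 # Kalman gain
    w = w + K * (y - h @ w)       # correct weights with the new observation
    P = P - np.outer(K, h) @ P

print(np.round(w, 2))             # close to the true combination [0.7 0.2 0.1]
```

The forecast step then applies the latest weights to the individual model outputs; the bias reduction reported in the abstract comes from exactly this running recalibration against observations.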

  14. Leukodepletion blood filters: filter design and mechanisms of leukocyte removal.

    Science.gov (United States)

    Dzik, S

    1993-04-01

    Modern leukocyte removal filters have been developed after years of refinement in design. Current filters are composite filters in which synthetic microfiber material is prepared as a nonwoven web. The filter material may be surface modified to alter surface tension or charge to improve performance. The housing design promotes effective contact of blood with the filter material and decreases shear forces. The exact mechanisms by which these filters remove leukocytes from blood components are uncertain, but likely represent a combination of both physical and biological processes whose contributions to leukocyte removal are interdependent. Small-pore microfiber webs result in barrier phenomena that permit retention of individual cells and increase the total adsorptive area of the filter. Modifications in surface charge can increase or decrease cell attraction to the fibers. Optimum interfacial surface tensions between blood cells, plasma, and filter fibers not only permit effective blood flow through small fiber pores, but also facilitate cell contact with the material. Barrier retention is a common mechanism for all modern leukocyte-removal filters and applies to all leukocyte subtypes. Because barrier retention does not depend on cell viability, it is operative for cells of any age and will retain any nondeformable cell, including whole nuclei from lymphocytes or monocytes. Barrier retention is supplemented by retention by adhesion. RBCs, lymphocytes, monocytes, granulocytes, and platelets differ in their relative adhesiveness to filter fibers. Different adhesive mechanisms are used in filters designed for RBCs compared with filters designed for platelets. Although lymphocytes, monocytes, and granulocytes can adhere directly to filter fibers, the biological mechanisms underlying cell adhesion may differ for these cell types. These differences may depend on expression of cell adhesion molecules. In the case of filtration of fresh RBCs, platelet-leukocyte interaction

  15. Nuclear analytical techniques applied to the large scale measurements of atmospheric aerosols in the amazon region

    International Nuclear Information System (INIS)

    Gerab, Fabio

    1996-03-01

    This work presents the characterization of the atmospheric aerosol collected in different places of the Amazon Basin. We studied both the biogenic emission from the forest and the particulate material which is emitted to the atmosphere by the large scale man-made burning during the dry season. The samples were collected during a three year period at two different locations in the Amazon, namely the Alta Floresta (MT) and Serra do Navio (AP) regions, using stacked filter units. These regions represent two different atmospheric compositions: the aerosol is dominated by the forest's natural biogenic emission at Serra do Navio, while at Alta Floresta it presents an important contribution from man-made burning during the dry season. At Alta Floresta we took samples in gold shops in order to characterize mercury emission to the atmosphere related to the gold prospecting activity in the Amazon. Airplanes were used for aerosol sampling during the 1992 and 1993 dry seasons to characterize the atmospheric aerosol from man-made burning over large Amazonian areas. The samples were analyzed using several nuclear analytical techniques: Particle Induced X-ray Emission for the quantitative analysis of trace elements with atomic number above 11; Particle Induced Gamma-ray Emission for the quantitative analysis of Na; and Proton Microprobe for the characterization of individual aerosol particles. Reflectance measurements were used for black carbon quantification, gravimetric analysis to determine the total atmospheric aerosol concentration, and Cold Vapor Atomic Absorption Spectroscopy for quantitative analysis of mercury in the particulate matter from the Alta Floresta gold shops. Ion chromatography was used to quantify the ionic content of fine mode particulate samples from Serra do Navio. Multivariate statistical analysis was used to identify and characterize the sources of the atmospheric aerosol present in the sampled regions. (author)

  16. District-level hospital trauma care audit filters: Delphi technique for defining context-appropriate indicators for quality improvement initiative evaluation in developing countries.

    Science.gov (United States)

    Stewart, Barclay T; Gyedu, Adam; Quansah, Robert; Addo, Wilfred Larbi; Afoko, Akis; Agbenorku, Pius; Amponsah-Manu, Forster; Ankomah, James; Appiah-Denkyira, Ebenezer; Baffoe, Peter; Debrah, Sam; Donkor, Peter; Dorvlo, Theodor; Japiong, Kennedy; Kushner, Adam L; Morna, Martin; Ofosu, Anthony; Oppong-Nketia, Victor; Tabiri, Stephen; Mock, Charles

    2016-01-01

    Prospective clinical audit of trauma care improves outcomes for the injured in high-income countries (HICs). However, equivalent, context-appropriate audit filters for use in low- and middle-income country (LMIC) district-level hospitals have not been well established. We aimed to develop context-appropriate trauma care audit filters for district-level hospitals in Ghana, as well as other LMICs more broadly. Consensus on trauma care audit filters was built among twenty panellists using a Delphi technique with four anonymous, iterative surveys designed to elicit: (i) trauma care processes to be measured; (ii) important features of audit filters for the district-level hospital setting; and (iii) potentially useful filters. Filters were ranked on a scale from 0 to 10 (10 being very useful). Consensus was measured with the average percent majority opinion (APMO) cut-off rate. Target consensus was defined a priori as: a median rank of ≥9 for each filter and an APMO cut-off rate of ≥0.8. Panellists agreed on trauma care processes to target (e.g. triage, phases of trauma assessment, early referral if needed) and specific features of filters for district-level hospital use (e.g. simplicity, unassuming of resource capacity). The APMO cut-off rate increased successively: Round 1--0.58; Round 2--0.66; Round 3--0.76; and Round 4--0.82. After Round 4, target consensus on 22 trauma care and referral-specific filters was reached. Example filters include: triage--vital signs are recorded within 15 min of arrival (must include breathing assessment, heart rate, blood pressure, oxygen saturation if available); circulation--a large bore IV was placed within 15 min of patient arrival; referral--if referral is activated, the referring clinician and receiving facility communicate by phone or radio prior to transfer. This study proposes trauma care audit filters appropriate for LMIC district-level hospitals. Given the successes of similar filters in HICs and obstetric care filters in LMICs
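    The two consensus criteria (median rank ≥ 9 and APMO ≥ 0.8) can be computed mechanically. The sketch below uses hypothetical panellist scores and a simplified APMO formulation (share of opinions falling in the "agree" majority category), since the paper's exact formula is not reproduced in the abstract.

```python
import statistics

ranks = {   # hypothetical panellist scores (0 = useless .. 10 = very useful)
    "triage_vitals_within_15min": [9, 10, 9, 9, 8, 10, 9, 9],
    "xray_within_1hr":            [5, 9, 4, 6, 7, 5, 6, 5],
}

def consensus(scores, agree_cut=9):
    """Median rank plus a simplified APMO: the share of panellists whose
    score falls in the 'agree' category (>= agree_cut)."""
    med = statistics.median(scores)
    apmo = sum(1 for s in scores if s >= agree_cut) / len(scores)
    return med, apmo

for name, scores in ranks.items():
    med, apmo = consensus(scores)
    verdict = "consensus" if med >= 9 and apmo >= 0.8 else "no consensus"
    print(f"{name}: median={med}, APMO={apmo:.2f} -> {verdict}")
```

Filters failing either criterion would be reworded or dropped before the next Delphi round, which is how the reported APMO rate climbed from 0.58 to 0.82 across four rounds.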

  17. The correlated k-distribution technique as applied to the AVHRR channels

    Science.gov (United States)

    Kratz, David P.

    1995-01-01

    Correlated k-distributions have been created to account for the molecular absorption found in the spectral ranges of the five Advanced Very High Resolution Radiometer (AVHRR) satellite channels. The production of the k-distributions was based upon an exponential-sum fitting of transmissions (ESFT) technique which was applied to reference line-by-line absorptance calculations. To account for the overlap of spectral features from different molecular species, the present routines made use of the multiplication transmissivity property, which allows for considerable flexibility, especially when altering relative mixing ratios of the various molecular species. To determine the accuracy of the correlated k-distribution technique as compared to the line-by-line procedure, atmospheric flux and heating rate calculations were run for a wide variety of atmospheric conditions. For the atmospheric conditions taken into consideration, the correlated k-distribution technique has yielded results within about 0.5% for both the cases where the satellite spectral response functions were applied and where they were not. The correlated k-distribution's principal advantage is that it can be incorporated directly into multiple scattering routines that consider scattering as well as absorption by clouds and aerosol particles.
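    The idea behind the k-distribution (which the ESFT approximates with a sum Σ wᵢ exp(−kᵢu)) can be demonstrated on a synthetic band: sorting the line-by-line absorption coefficients into cumulative g-space lets a handful of quadrature terms reproduce the full spectral integral. Everything below is synthetic, not AVHRR-specific.

```python
import numpy as np

nu = np.linspace(0.0, 100.0, 20001)                 # wavenumber grid
k = 0.1 + sum(4.0 / (1.0 + ((nu - c) / 0.5) ** 2)   # three Lorentz-like lines
              for c in (20.0, 45.0, 70.0))

def T_lbl(u):
    """Band-mean transmission computed 'line by line'."""
    return float(np.mean(np.exp(-k * u)))

# k-distribution: sort k into cumulative g-space; a midpoint rule with
# ten nodes stands in for the full 20001-point spectral integral.
ks = np.sort(k)
g_nodes = np.linspace(0.05, 0.95, 10)
k_nodes = np.interp(g_nodes, np.linspace(0.0, 1.0, ks.size), ks)

def T_kdist(u):
    return float(np.mean(np.exp(-k_nodes * u)))

for u in (0.1, 1.0, 5.0):
    print(round(T_lbl(u), 3), round(T_kdist(u), 3))  # agree to a few 1e-2
```

Because the sorted k(g) is smooth while k(ν) is wildly oscillatory, the few-term sum is cheap enough to embed inside multiple-scattering routines — the advantage noted at the end of the abstract.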

  18. Efficient combination of acceleration techniques applied to high frequency methods for solving radiation and scattering problems

    Science.gov (United States)

    Lozano, Lorena; Algar, Ma Jesús; García, Eliseo; González, Iván; Cátedra, Felipe

    2017-12-01

    An improved ray-tracing method applied to high-frequency techniques such as the Uniform Theory of Diffraction (UTD) is presented. The main goal is to increase the speed of the analysis of complex structures while considering a vast number of observation directions and taking into account multiple bounces. The method is based on a combination of the Angular Z-Buffer (AZB), the Space Volumetric Partitioning (SVP) algorithm and the A∗ heuristic search method to treat multiple bounces. In addition, a Master Point strategy was developed to efficiently analyze a large number of near-field points or far-field directions. This technique can be applied to electromagnetic radiation problems, scattering analysis, propagation in urban or indoor environments and the mutual coupling between antennas. Due to its efficiency, it is suitable for studying the radiation patterns of large antennas and even their interactions with complex environments, including satellites, ships, aircraft, cities or other electrically large bodies. The new technique proves extremely efficient in these applications, even when considering multiple bounces.

  19. Pusher curving technique for preventing tilt of femoral Günther Tulip inferior vena cava filter: in vitro study

    International Nuclear Information System (INIS)

    Xiao Liang; Shen Jing; Huang Desheng; Xu Ke

    2011-01-01

    Objective: To determine whether adjustment of the pusher of the GTF was useful to decrease the degree of tilting of the femoral Günther Tulip filter (GTF) in an in vitro caval model. Methods: The caval model was constructed by placing a 25 mm × 100 mm and two 10 mm × 200 mm Dacron grafts inside a transparent bifurcate glass tube. The study consisted of two groups: a left straight group (G_LS) (n = 100) and a left curved group (G_LC) (n = 100). In the G_LC, a 10° to 20° angle was curved on the introducer. The distance (D_CH) between the caval right wall and the hook was measured. The degree of tilting (DT) was classified into 5 grades and recorded. Before and after the GTF was released, the angle (A_CM1,2) between the axis of the IVC and the metal mount, the distance (D_CM1) between the caval right wall and the metal mount, the angle (A_CF) between the axis of the IVC and the axis of the filter, and the diameter of the IVC (D_IVC) were measured. The data were analyzed with the chi-square test, t test, rank sum test and Pearson correlation test. Results: The degree of GTF tilting in each group revealed a divergent tendency. In group LC, the apex of the filter tended toward grade III compared with group LS (χ² value 37.491, P<0.01). The differences between G_LS and G_LC were considered statistically significant (16.60° vs. 3.05°, 20.60° vs. 3.50°, -3.90° vs. -0.40°, 2.98 mm vs. 10.40 mm, -10.95° vs. -0.485°, 13.17 mm vs. 10.06 mm, -1.70° vs. 0.70°; t or Z values -12.187, -12.188, -8.545, -51.834, -11.395, 9.562, -3.596, P<0.01). There were significant correlations between D_CM1 and A_CF, and between A_CM1 - A_CM2 and D_CH1 - D_CH2, in each group (r values 0.978, 0.344, 0.879, 0.627, P<0.01), and negative correlations between D_CH1 and A_CF in each group, and between A_CP and A_CF in group LC (r values -0.974, -0.322, -0.702, P<0.01). Conclusion: The technique of adjusting the orientation of filter

  20. Bi Input-extended Kalman filter based estimation technique for speed-sensorless control of induction motors

    Energy Technology Data Exchange (ETDEWEB)

    Barut, Murat, E-mail: muratbarut27@yahoo.co [Nigde University, Department of Electrical and Electronics Engineering, 51245 Nigde (Turkey)

    2010-10-15

    This study offers a novel extended Kalman filter (EKF) based estimation technique for the on-line estimation of uncertainties in the stator and rotor resistances inherent to speed-sensorless, high-efficiency control of induction motors (IMs) over a wide speed range, while also extending the limited number of state and parameter estimates possible with a conventional single EKF algorithm. To this end, the introduced technique utilizes a single EKF algorithm with the consecutive execution of two inputs derived from two individual extended IM models based on stator resistance and rotor resistance estimation, unlike previous approaches, which require two separate EKF algorithms operating in a switching or braided manner; it thus has an advantage over earlier EKF schemes in this regard. The proposed EKF based estimation technique, which performs on-line estimation of the stator currents, the rotor flux, the rotor angular velocity, and the load torque (including the viscous friction term) together with the rotor and stator resistances, is also used in combination with speed-sensorless direct vector control of the IM. It is tested in simulations under 12 challenging scenarios generated via step and/or linear variations of the velocity reference, the load torque, the stator resistance, and the rotor resistance in the high- and zero-speed ranges, assuming that the measured stator phase currents and voltages are available. Even under those variations, the performance of the speed-sensorless direct vector control system built on the novel EKF based estimation technique is observed to be quite good.
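
    The joint state-and-parameter estimation the abstract builds on can be sketched with a minimal EKF for a toy scalar plant (not the paper's induction motor model): the unknown parameter theta is appended to the state as a random walk, in the same spirit as augmenting the IM model with a resistance. All dynamics, noise levels and names below are illustrative assumptions.

```python
import numpy as np

# Toy plant: x_{k+1} = x_k + dt*(-theta*x_k + u_k), theta unknown.
# Augmented state s = [x, theta]; theta modeled as a random walk.
dt = 0.01
Q = np.diag([1e-6, 1e-6])    # process noise covariance (assumed)
R = np.array([[1e-4]])       # measurement noise covariance (assumed)
H = np.array([[1.0, 0.0]])   # only x is measured

def f(s, u):
    x, th = s
    return np.array([x + dt * (-th * x + u), th])

def F_jac(s, u):
    x, th = s
    return np.array([[1.0 - dt * th, -dt * x],
                     [0.0, 1.0]])

def ekf_step(s, P, u, z):
    # predict
    s_pred = f(s, u)
    F = F_jac(s, u)
    P_pred = F @ P @ F.T + Q
    # update
    y = z - H @ s_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    s_new = s_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return s_new, P_new

# Simulate the true plant with theta = 2.0 and run the filter.
rng = np.random.default_rng(0)
theta_true, x_true = 2.0, 1.0
s, P = np.array([0.5, 0.0]), np.eye(2)
for k in range(5000):
    u = np.sin(0.01 * k)                 # persistently exciting input
    x_true = x_true + dt * (-theta_true * x_true + u)
    z = np.array([x_true + 1e-2 * rng.standard_normal()])
    s, P = ekf_step(s, P, u, z)
print(s)  # theta estimate should approach 2.0
```

    The paper's contribution is running one such EKF with two alternating model inputs; the sketch shows only the single-model building block.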

  1. Software engineering techniques applied to agricultural systems an object-oriented and UML approach

    CERN Document Server

    Papajorgji, Petraq J

    2014-01-01

    Software Engineering Techniques Applied to Agricultural Systems presents cutting-edge software engineering techniques for designing and implementing better agricultural software systems based on the object-oriented paradigm and the Unified Modeling Language (UML). The focus is on the presentation of rigorous step-by-step approaches for modeling flexible agricultural and environmental systems, starting with a conceptual diagram representing elements of the system and their relationships. Furthermore, diagrams such as sequence and collaboration diagrams are used to explain the dynamic and static aspects of the software system. This second edition includes: a new chapter on Object Constraint Language (OCL), a new section dedicated to the Model-View-Controller (MVC) design pattern, new chapters presenting details of two MDA-based tools – the Virtual Enterprise and Olivia Nova, and a new chapter with exercises on conceptual modeling. It may be highly useful to undergraduate and graduate students as t...

  2. Advanced gamma spectrum processing technique applied to the analysis of scattering spectra for determining material thickness

    International Nuclear Information System (INIS)

    Hoang Duc Tam; VNUHCM-University of Science, Ho Chi Minh City; Huynh Dinh Chuong; Tran Thien Thanh; Vo Hoang Nguyen; Hoang Thi Kieu Trang; Chau Van Tao

    2015-01-01

    In this work, an advanced gamma spectrum processing technique is applied to analyze experimental scattering spectra for determining the thickness of C45 heat-resistant steel plates. The single scattering peak of the scattering spectra is exploited to measure the intensity of singly scattered photons. Based on these results, the thickness of the steel plates is determined with a maximum deviation between real and measured thickness of about 4 %. Monte Carlo simulation using the MCNP5 code is also performed to cross-check the results, yielding a maximum deviation of 2 %. These results strongly confirm the capability of this technique for analyzing gamma scattering spectra, making it a simple, effective and convenient method for determining material thickness. (author)
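
    The inversion step behind such a gauge can be sketched under a common assumption (not stated in the abstract): the single-scatter intensity saturates with plate thickness as I(t) = I_sat·(1 - exp(-mu_eff·t)), so thickness follows by inverting that relation. I_sat and mu_eff below are illustrative placeholders, not the paper's calibration.

```python
import numpy as np

# Assumed saturating single-scatter model: I(t) = I_sat * (1 - exp(-mu_eff*t)).
mu_eff = 0.5     # effective attenuation coefficient, 1/cm (assumed)
I_sat = 1000.0   # saturation intensity (assumed)

def thickness(I):
    """Invert the saturating model for plate thickness in cm."""
    return -np.log(1.0 - I / I_sat) / mu_eff

# Round trip: simulate a measurement for a 1.2 cm plate, then invert it.
t_true = 1.2
I_meas = I_sat * (1.0 - np.exp(-mu_eff * t_true))
print(thickness(I_meas))   # recovers 1.2 cm
```

    In practice I_sat and mu_eff would be fitted from reference plates of known thickness, and the peak intensity I would come from the processed single-scattering peak.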

  3. Applied methods and techniques for mechatronic systems modelling, identification and control

    CERN Document Server

    Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya

    2014-01-01

    Applied Methods and Techniques for Mechatronic Systems brings together the relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. Each paper follows a clear structure: background, motivation, quantitative development (equations) and case studies/illustrations/tutorials (curves, tables, etc.). The book is mainly aimed at graduate students, professors and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...

  4. Correlation peak analysis applied to a sequence of images using two different filters for eye tracking model

    Science.gov (United States)

    Patrón, Verónica A.; Álvarez Borrego, Josué; Coronel Beltrán, Ángel

    2015-09-01

    Eye tracking has many useful applications that range from biometrics to face recognition and human-computer interaction. The analysis of the characteristics of the eyes has become one of the methods to accomplish the location of the eyes and the tracking of the point of gaze. Characteristics such as the contrast between the iris and the sclera, the shape, and the distribution of colors and dark/light zones in the area are the starting point for these analyses. In this work, the focus is on the contrast between the iris and the sclera, performing a correlation in the frequency domain. The images were acquired with an ordinary camera, which was used to capture images of thirty-one volunteers. The reference image is an image of the subject looking at a point in front of them at a 0° angle. Then sequences of images are taken with the subject looking at different angles. These images are processed in MATLAB, obtaining the maximum correlation peak for each image, using two different filters. Each filter was analyzed, and the one giving the best performance in terms of data utility was selected; the results are displayed in graphs showing the decay of the correlation peak as the eye moves progressively through different angles. These data are used to obtain a mathematical model or function that establishes a relationship between the angle of vision (AOV) and the maximum correlation peak (MCP). The model is tested using input images from subjects not contained in the initial database, making it possible to predict the angle of vision from the maximum correlation peak data.
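
    The frequency-domain correlation at the heart of this approach can be sketched with NumPy instead of MATLAB: the correlation surface is the inverse FFT of the product of one spectrum with the conjugate of the other, and its maximum is the MCP. The random patch below stands in for the 0° reference image; it is illustrative, not the authors' data or filters.

```python
import numpy as np

# Frequency-domain correlation: MCP between a reference patch and a
# shifted copy of it.
rng = np.random.default_rng(1)
ref = rng.standard_normal((64, 64))              # stand-in for the 0-degree image
scene = np.roll(ref, shift=(3, 5), axis=(0, 1))  # same patch, displaced

# Classical correlation: IFFT( F(scene) * conj(F(ref)) )
corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(ref))).real
peak = corr.max()
dy, dx = np.unravel_index(corr.argmax(), corr.shape)
print(peak, dy, dx)   # peak location recovers the (3, 5) displacement
```

    As the eye rotates, the gaze image resembles the reference less and less, so this peak value decays with angle; that decay is what the authors fit their AOV-vs-MCP model to.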

  5. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Full Text Available The popularity of cloud computing is increasing day by day. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. From the findings of the research, it was observed that security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.

  6. D-stem: a parallel electron diffraction technique applied to nanomaterials.

    Science.gov (United States)

    Ganesh, K J; Kawasaki, M; Zhou, J P; Ferreira, P J

    2010-10-01

    An electron diffraction technique called D-STEM has been developed in a transmission electron microscopy/scanning transmission electron microscopy (TEM/STEM) instrument to obtain spot electron diffraction patterns from nanostructures as small as ∼3 nm. The electron ray path achieved by configuring the pre- and postspecimen illumination lenses enables the formation of a 1-2 nm near-parallel probe, which is used to obtain bright-field/dark-field STEM images. Under these conditions, the beam can be controlled and accurately positioned on the STEM image, at the nanostructure of interest, while sharp spot diffraction patterns are simultaneously recorded on the charge-coupled device camera. When integrated with software such as GatanTM STEM diffraction imaging and Automated Crystallography for TEM or DigistarTM, NanoMEGAS, the D-STEM technique is very powerful for obtaining automated orientation and phase maps based on diffraction information acquired on a pixel by pixel basis. The versatility of the D-STEM technique is demonstrated by applying it to nanoparticles, nanowires, and nano interconnect structures.

  7. Low-complexity DOA estimation from short data snapshots for ULA systems using the annihilating filter technique

    Science.gov (United States)

    Bellili, Faouzi; Amor, Souheib Ben; Affes, Sofiène; Ghrayeb, Ali

    2017-12-01

    This paper addresses the problem of DOA estimation using uniform linear array (ULA) antenna configurations. We propose a new low-cost method of multiple DOA estimation from very short data snapshots. The new estimator is based on the annihilating filter (AF) technique. It is non-data-aided (NDA) and therefore does not impinge on the overall throughput of the system. The noise components are assumed temporally and spatially white across the receiving antenna elements. The transmitted signals are also assumed temporally and spatially white across the transmitting sources. The new method is compared in performance to the Cramér-Rao lower bound (CRLB), the root-MUSIC algorithm, the deterministic maximum likelihood estimator and another Bayesian method developed precisely for the single-snapshot case. Simulations show that the new estimator performs well over a wide SNR range. Prominently, the main advantage of the new AF-based method is that it succeeds in accurately estimating the DOAs from short data snapshots, and even from a single snapshot, outperforming by far the state-of-the-art techniques in both DOA estimation accuracy and computational cost.
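
    The annihilating-filter idea can be sketched for the noiseless single-snapshot case (a simplification of the paper's estimator, with illustrative array parameters): a ULA snapshot is a sum of complex exponentials, the AF is the null vector of a small convolution matrix, and the roots of that filter give the spatial frequencies and hence the DOAs.

```python
import numpy as np

# ULA with half-wavelength spacing: snapshot x[n] = sum_k exp(j*pi*sin(theta_k)*n)
M, K = 16, 2                       # antennas, sources (assumed)
theta_true = np.deg2rad([10.0, -25.0])
n = np.arange(M)
x = sum(np.exp(1j * np.pi * np.sin(t) * n) for t in theta_true)

# Annihilating filter h of length K+1 satisfies (h * x)[n] = 0 for all valid n:
# build the convolution (Toeplitz) system and take its null vector via SVD.
A = np.array([x[i:i + K + 1][::-1] for i in range(M - K)])
_, _, Vh = np.linalg.svd(A)
h = Vh[-1].conj()

# Roots of h are the exponentials z_k = exp(j*pi*sin(theta_k)).
z = np.roots(h)
theta_est = np.rad2deg(np.arcsin(np.angle(z) / np.pi))
print(np.sort(theta_est))   # approximately [-25, 10]
```

    With noise, the paper's method replaces this exact null space with a least-squares/denoised version, but the root-finding step that maps the filter to DOAs is the same.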

  8. MR angiography with a matched filter

    International Nuclear Information System (INIS)

    De Castro, J.B.; Riederer, S.J.; Lee, J.N.

    1987-01-01

    The technique of matched filtering was applied to a series of cine MR images. The filter was devised to yield a subtraction angiographic image in which direct current components present in the cine series are removed and the signal-to-noise ratio (S/N) of the vascular structures is optimized. The S/N of a matched filter was compared with that of a simple subtraction, in which an image with high flow is subtracted from one with low flow. Experimentally, a range of results from minimal improvement to significant (60%) improvement in S/N was seen in the comparisons of matched filtered subtraction with simple subtraction
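
    The SNR argument behind matched filtering can be illustrated with a generic 1-D sketch (not the MR cine-series filter itself): correlating a noisy record against a unit-norm copy of the known signal maximizes the output peak SNR for white noise. The signal shape and noise level below are illustrative assumptions.

```python
import numpy as np

# Generic matched filter: known signal in white Gaussian noise.
rng = np.random.default_rng(2)
N = 512
s = np.zeros(N)
s[200:232] = np.hanning(32)            # known "vascular" signal shape (assumed)
sigma = 0.5
x = s + sigma * rng.standard_normal(N)  # noisy measurement

template = s / np.linalg.norm(s)        # matched filter = scaled known signal
y = np.correlate(x, template, mode="same")

# The output peaks at the alignment lag with value ~ ||s|| (about 3.4 here),
# well above the per-sample noise floor sigma.
print(y.max(), int(np.argmax(y)))
```

    The cine-series matched filter in the abstract applies the same principle across the temporal dimension, weighting frames so vascular (flow-varying) signal adds coherently while static tissue cancels.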

  9. The evaluation of properties for radiation therapy techniques with flattening filter-free beam and usefulness of time and economy to a patient with the radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Goo, Jang Hyeon; Won, Hui Su; Hong, Joo Wan; Chang, Nam Jun; Park, Jin Hong [Dept. of Radiation Oncology, Seoul national university Bundang hospital, Sungnam (Korea, Republic of)

    2014-12-15

    The aim of this study was to appraise the properties of radiation therapy techniques and the temporal and economic benefit to the patient when applying flattening filter-free (3F) and flattening filter (2F) beams in radiation therapy. An Alderson Rando phantom was scanned for computed tomography images. Treatment plans for intensity modulated radiation therapy (IMRT), volumetric modulated arc therapy (VMAT) and stereotactic body radiation therapy (SBRT) with 3F and 2F beams were designed for prostate cancer. To evaluate the differences between the 3F and 2F beams, total monitor units (MUs), beam-on time (BOT) and gantry rotation time (GRT) were measured with a TrueBeam{sup TM} STx, and a Surveillance And Measurement (SAM) 940 detector was used for the photoneutrons emitted when using 3F and 2F. To assess the temporal and economic aspects for a patient, total treatment periods and medical fees were estimated. When using the 3F beam, total MUs in the IMRT plan increased the most, by up to 34.0%, while the BOT, GRT and photoneutron values in the SBRT plan decreased the most: 39.8, 38.6 and 48.1%, respectively. In the temporal and economic aspects, there were no differences between the 3F and 2F beams in any of the plans, and the results showed that the SBRT plan had the lowest values: 10 days and 169,560 won. According to the results, total MUs increased when using the 3F beam rather than the 2F beam, but BOT, GRT and photoneutrons decreased. Therefore, using the 3F beam can decrease intra-fraction setup error and the risk of radiation-induced secondary malignancy. However, using the 3F beam did not provide temporal or economic benefits for a patient undergoing radiation therapy.

  10. Case study: how to apply data mining techniques in a healthcare data warehouse.

    Science.gov (United States)

    Silver, M; Sakata, T; Su, H C; Herman, C; Dolins, S B; O'Shea, M J

    2001-01-01

    Healthcare provider organizations are faced with a rising number of financial pressures. Both administrators and physicians need help analyzing large numbers of clinical and financial data when making decisions. To assist them, Rush-Presbyterian-St. Luke's Medical Center and Hitachi America, Ltd. (HAL), Inc., have partnered to build an enterprise data warehouse and perform a series of case study analyses. This article focuses on one analysis, which was performed by a team of physicians and computer science researchers, using a commercially available on-line analytical processing (OLAP) tool in conjunction with proprietary data mining techniques developed by HAL researchers. The initial objective of the analysis was to discover how to use data mining techniques to make business decisions that can influence cost, revenue, and operational efficiency while maintaining a high level of care. Another objective was to understand how to apply these techniques appropriately and to find a repeatable method for analyzing data and finding business insights. The process used to identify opportunities and effect changes is described.

  11. Characterization and error analysis of an N×N unfolding procedure applied to filtered, photoelectric x-ray detector arrays. I. Formulation and testing

    Science.gov (United States)

    Fehl, D. L.; Chandler, G. A.; Stygar, W. A.; Olson, R. E.; Ruiz, C. L.; Hohlfelder, J. J.; Mix, L. P.; Biggs, F.; Berninger, M.; Frederickson, P. O.; Frederickson, R.

    2010-12-01

    test and unfolded spectra increasingly diverged as larger fractions of S_bb(E,T) fell below the detection threshold (∼137 eV) of the diagnostic. (c) Comparison with other analyses and diagnostics.—The results of the histogram algorithm are compared with other analyses, including a test with data acquired by the DANTE filtered-XRD array at the NOVA laser facility. Overall, the histogram algorithm is found to be most useful for x-ray flux estimates, as opposed to spectral details. The following companion paper [D. L. Fehl et al., Phys. Rev. ST Accel. Beams 13, 120403 (2010)] considers (a) uncertainties in S_unfold and F_unfold induced by both data noise and calibrational errors in the response functions; and (b) generalization of the algorithm to arbitrary spectra. These techniques apply to other diagnostics with analogous channel responses and supported by unfold algorithms of invertible matrix form.
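
    The "unfold algorithm of invertible matrix form" the record closes with can be sketched in miniature: if an N-channel response matrix R maps a histogram spectrum s to the channel data d = R s, the unfold is s = R⁻¹ d. The 3×3 response values below are toy numbers, not the filtered-XRD responses.

```python
import numpy as np

# Toy invertible-matrix unfold for an N-channel filtered detector array.
R = np.array([[1.00, 0.40, 0.10],
              [0.20, 1.00, 0.30],
              [0.05, 0.30, 1.00]])    # channel response matrix (illustrative)
s_true = np.array([5.0, 2.0, 1.0])   # "true" histogram spectrum per energy bin
d = R @ s_true                       # simulated channel data
s_unfold = np.linalg.solve(R, d)     # histogram unfold
print(s_unfold)                      # recovers s_true
```

    The companion-paper questions (noise and calibration error propagation) amount to asking how perturbations of d and R propagate through this inverse, which is governed by the conditioning of R.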

  12. Characterization and error analysis of an N×N unfolding procedure applied to filtered, photoelectric x-ray detector arrays. I. Formulation and testing

    Directory of Open Access Journals (Sweden)

    D. L. Fehl

    2010-12-01

    -ray flux over the wider range, 75≤T≤250 eV. For lower T, the test and unfolded spectra increasingly diverged as larger fractions of S_{bb}(E,T) fell below the detection threshold (∼137 eV) of the diagnostic. (c) Comparison with other analyses and diagnostics.—The results of the histogram algorithm are compared with other analyses, including a test with data acquired by the DANTE filtered-XRD array at the NOVA laser facility. Overall, the histogram algorithm is found to be most useful for x-ray flux estimates, as opposed to spectral details. The following companion paper [D. L. Fehl et al., Phys. Rev. ST Accel. Beams 13, 120403 (2010)] considers (a) uncertainties in S_{unfold} and F_{unfold} induced by both data noise and calibrational errors in the response functions; and (b) generalization of the algorithm to arbitrary spectra. These techniques apply to other diagnostics with analogous channel responses and supported by unfold algorithms of invertible matrix form.

  13. A Discussion on Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    Science.gov (United States)

    Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.

  14. Spatial filtering technique to image and measure two-dimensional near-forward scattering from single particles.

    Science.gov (United States)

    Berg, Matthew J; Hill, Steven C; Videen, Gorden; Gurton, Kristan P

    2010-04-26

    This work describes the design and use of an optical apparatus to measure the far-field elastic light-scattering pattern for a single particle over two angular dimensions. A spatial filter composed of a mirror with a small through-hole is used to enable collection of the pattern uncommonly close to the forward direction; to within tenths of a degree. Minor modifications of the design allow for the simultaneous measurement of a particle's image along with its two-dimensional scattering pattern. Example measurements are presented involving single micrometer-sized glass spherical particles confined in an electrodynamic trap and a dilute suspension of polystyrene latex particles in water. A small forward-angle technique, called Guinier analysis, is used to determine a particle-size estimate directly from the measured pattern without a priori knowledge of the particle refractive index. Comparison of these size estimates to those obtained by fitting the measurements to Mie theory reveals relative errors as low as 2%.
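
    Guinier analysis rests on the small-angle approximation I(q) ≈ I(0)·exp(-q²R_g²/3), so the radius of gyration R_g follows from the slope of ln I versus q², with no refractive index needed. The sketch below generates synthetic data from the Guinier law itself with illustrative values, then recovers the size by a linear fit.

```python
import numpy as np

# Guinier analysis: particle size from the near-forward scattering pattern.
Rg_true = 0.8                              # radius of gyration (assumed units)
q = np.linspace(0.05, 1.0 / Rg_true, 40)   # stay in the Guinier regime, q*Rg <~ 1
I = 100.0 * np.exp(-(q * Rg_true) ** 2 / 3.0)

# ln I = ln I(0) - (Rg^2 / 3) * q^2  ->  straight line in q^2
slope, intercept = np.polyfit(q ** 2, np.log(I), 1)
Rg_est = np.sqrt(-3.0 * slope)
print(Rg_est)   # recovers Rg_true
```

    On real data the fit would be restricted to the small-q region where the approximation holds, which is exactly why the mirror-with-hole spatial filter matters: it makes those near-forward angles measurable.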

  15. The electrocatalytic oxidation of carbohydrates at a nickel/carbon paper electrode fabricated by the filtered cathodic vacuum arc technique

    International Nuclear Information System (INIS)

    Fu, Yingyi; Wang, Tong; Su, Wen; Yu, Yanan; Hu, Jingbo

    2015-01-01

    The direct electrochemical behaviour of carbohydrates at a nickel/carbon paper electrode fabricated by a novel method is investigated. The investigation is used to verify the feasibility of using monosaccharides and disaccharides in fuel cell applications. The selected monosaccharides are glucose, fructose and galactose; the disaccharides are sucrose, maltose and lactose. The modified nickel/carbon paper electrode was prepared using a filtered cathodic vacuum arc technique. The morphology of the nickel thin film on the carbon paper surface was characterized by scanning electron microscopy (SEM). The existence of nickel was verified by X-ray photoelectron spectroscopy (XPS). Contact angle measurements were also used to characterize the modified electrode. Cyclic voltammetry (CV) was employed to evaluate the electrochemical behaviour of the monosaccharides and disaccharides in an alkaline aqueous solution. The modified electrode exhibits good electrocatalytic activity towards carbohydrates. In addition, the stability of the nickel/carbon paper electrode with the six sugars was also investigated. The good catalytic effect of the nickel/carbon paper electrode allows for the use of carbohydrates as fuels in fuel cell applications.

  16. Adaptive filtering techniques for gravitational wave interferometric data: Removing long-term sinusoidal disturbances and oscillatory transients

    Science.gov (United States)

    Chassande-Mottin, E.; Dhurandhar, S. V.

    2001-02-01

    It is known by the experience gained from the gravitational wave detector prototypes that the interferometric output signal will be corrupted by a significant amount of non-Gaussian noise, a large part of it being essentially composed of long-term sinusoids with a slowly varying envelope (such as violin resonances in the suspensions, or main power harmonics) and short-term ringdown noise (which may emanate from servo control systems, electronics in a nonlinear state, etc.). Since non-Gaussian noise components make the detection and estimation of the gravitational wave signature more difficult, a denoising algorithm based on adaptive filtering techniques (LMS methods) is proposed to separate and extract them from the stationary and Gaussian background noise. The strength of the method is that it does not require any precise model on the observed data: the signals are distinguished on the basis of their autocorrelation time. We believe that the robustness and simplicity of this method make it useful for data preparation and for the understanding of the first interferometric data. We present the detailed structure of the algorithm and its application to both simulated data and real data from the LIGO 40 m prototype.
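
    The LMS approach described above can be sketched for the long-term-sinusoid case: a two-tap adaptive filter driven by sin/cos references at the known line frequency learns the disturbance's amplitude and phase, and the error signal is the cleaned record. The frequencies, amplitudes and step size below are illustrative, not the authors' detector settings.

```python
import numpy as np

# LMS cancellation of a sinusoidal disturbance (e.g. a power harmonic)
# buried in Gaussian background noise.
rng = np.random.default_rng(3)
fs, f0, N = 1024.0, 60.0, 8192
t = np.arange(N) / fs
background = 0.1 * rng.standard_normal(N)            # stationary Gaussian noise
disturbance = 2.0 * np.sin(2 * np.pi * f0 * t + 0.7)  # line with unknown phase
x = background + disturbance                          # corrupted record

w = np.zeros(2)   # adaptive weights (amplitude of sin and cos components)
mu = 0.01         # LMS step size (assumed)
e = np.empty(N)
for n in range(N):
    ref = np.array([np.sin(2 * np.pi * f0 * t[n]),
                    np.cos(2 * np.pi * f0 * t[n])])
    y = w @ ref                  # current estimate of the disturbance
    e[n] = x[n] - y              # cleaned output
    w += 2 * mu * e[n] * ref     # LMS weight update

# After convergence the residual power approaches the background power.
print(np.var(e[-1024:]), np.var(background))
```

    A slowly varying envelope, as in violin resonances, is tracked automatically because the weights keep adapting; no precise signal model is needed beyond the reference frequency, matching the abstract's point about distinguishing components by their autocorrelation time.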

  17. Automatic segmentation of airway tree based on local intensity filter and machine learning technique in 3D chest CT volume.

    Science.gov (United States)

    Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Ueno, Junji; Mori, Kensaku

    2017-02-01

    Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for computerized lung cancer detection, emphysema diagnosis and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3D airway tree structure from a CT volume is quite a challenging task. Several researchers have proposed automated airway segmentation algorithms based on region growing and machine learning techniques. However, these methods fail to detect the peripheral bronchial branches, which results in a large amount of leakage. This paper presents a novel approach for more accurate extraction of the complex airway tree. The proposed segmentation method is composed of three steps. First, Hessian analysis is utilized to enhance the tube-like structures in CT volumes; then, an adaptive multiscale cavity enhancement filter is employed to detect cavity-like structures with different radii. In the second step, support vector machine learning is utilized to remove the false positive (FP) regions from the result obtained in the previous step. Finally, the graph-cut algorithm is used to refine the candidate voxels to form an integrated airway tree. A test dataset including 50 standard-dose chest CT volumes was used to evaluate our proposed method. The average extraction rate was about 79.1 %, with a significantly decreased FP rate. A new method of airway segmentation based on local intensity structure and machine learning techniques was developed. The method was shown to be feasible for airway segmentation in a computer-aided diagnosis system for a lung and bronchoscope guidance system.

  18. Determination of hydrogen diffusivity and permeability in W near room temperature applying a tritium tracer technique

    International Nuclear Information System (INIS)

    Ikeda, T.; Otsuka, T.; Tanabe, T.

    2011-01-01

    Tungsten is a primary candidate for plasma facing material in ITER and beyond, owing to its good thermal properties and low erosion. However, hydrogen solubility and diffusivity near ITER operation temperatures (below 500 K) have scarcely been studied, mainly because tungsten's low hydrogen solubility and diffusivity at lower temperatures make the detection of hydrogen quite difficult. We have tried to observe hydrogen plasma driven permeation (PDP) through nickel and tungsten near room temperature by applying a tritium tracer technique, which is extremely sensitive in detecting tritium diluted in hydrogen. The apparent diffusion coefficients for PDP were determined from permeation lag times for the first time; those for nickel and tungsten were similar to or a few times larger than those for gas driven permeation (GDP). The permeation rates for PDP in nickel and tungsten were about 20 and 5 times larger, respectively, than those for GDP normalized to the same gas pressure.

  19. Vibration monitoring/diagnostic techniques, as applied to reactor coolant pumps

    International Nuclear Information System (INIS)

    Sculthorpe, B.R.; Johnson, K.M.

    1986-01-01

    With the increased awareness of reactor coolant pump (RCP) cracked shafts brought about by the catastrophic shaft failure at Crystal River Unit 3, Florida Power and Light Company, in conjunction with Bently Nevada Corporation, undertook a test program at St. Lucie Nuclear Unit 2 to confirm the integrity of all four RCP pump shafts. Reactor coolant pumps play a major role in the operation of nuclear-powered generation facilities. The time required to disassemble and physically inspect a single RCP shaft would be lengthy, monetarily costly to the utility and its customers, and a cause of possibly unnecessary man-rem exposure to plant personnel. When properly applied, vibration instrumentation can increase unit availability/reliability, as well as provide enhanced diagnostic capability. This paper reviews monitoring benefits and diagnostic techniques applicable to RCPs/motor drives.

  20. A systematic review of applying modern software engineering techniques to developing robotic systems

    Directory of Open Access Journals (Sweden)

    Claudia Pons

    2012-01-01

    Full Text Available Robots have become collaborators in our daily life. While robotic systems become more and more complex, the need to engineer their software development grows as well. The traditional approaches used in developing these software systems are reaching their limits; currently used methodologies and tools fall short of addressing the needs of such complex software development. Separating robotics knowledge from short-cycled implementation technologies is essential to foster reuse and maintenance. This paper presents a systematic literature review (SLR) of the current use of modern software engineering techniques for developing robotic software systems and their actual automation level. The survey was aimed at summarizing existing evidence concerning the application of such technologies to the field of robotic systems, identifying gaps in current research, suggesting areas for further investigation and providing a background for positioning new research activities.

  1. Clutter filter design for ultrasound color flow imaging.

    Science.gov (United States)

    Bjaerum, Steinar; Torp, Hans; Kristoffersen, Kjell

    2002-02-01

    For ultrasound color flow images with high quality, it is important to suppress the clutter signals originating from stationary and slowly moving tissue sufficiently. Without sufficient clutter rejection, low velocity blood flow cannot be measured, and estimates of higher velocities will have a large bias. The small number of samples available (8 to 16) makes clutter filtering in color flow imaging a challenging problem. In this paper, we review and analyze three classes of filters: finite impulse response (FIR), infinite impulse response (IIR), and regression filters. The quality of the filters was assessed based on the frequency response, as well as on the bias and variance of a mean blood velocity estimator using an autocorrelation technique. For FIR filters, the frequency response was improved by allowing a non-linear phase response. By estimating the mean blood flow velocity from two vectors filtered in the forward and backward direction, respectively, the standard deviation was significantly lower with a minimum phase filter than with a linear phase filter. For IIR filters applied to short signals, the transient part of the output signal is important. We analyzed zero, step, and projection initialization, and found that projection initialization gave the best filters. For regression filters, polynomial basis functions provide effective clutter suppression. The best filters from each of the three classes gave comparable bias and variance of the mean blood velocity estimates. However, polynomial regression filters and projection-initialized IIR filters had a slightly better frequency response than could be obtained with FIR filters.
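
    The regression-filter idea described above — projecting the short slow-time ensemble onto a low-order polynomial basis and subtracting that clutter estimate — can be sketched as follows. This is an illustrative pure-Python sketch under assumed ensemble length and polynomial order, not the authors' implementation:

```python
def polynomial_clutter_filter(x, order=2):
    """Remove the projection of the slow-time ensemble x onto polynomials
    up to the given order (the clutter estimate); return the residual."""
    n = len(x)
    t = [2.0 * i / (n - 1) - 1.0 for i in range(n)]  # slow-time axis in [-1, 1]
    # Build an orthonormal polynomial basis via Gram-Schmidt on monomials
    basis = []
    for p in range(order + 1):
        v = [ti ** p for ti in t]
        for b in basis:
            c = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - c * bi for vi, bi in zip(v, b)]
        norm = sum(vi * vi for vi in v) ** 0.5
        basis.append([vi / norm for vi in v])
    # Subtract the clutter subspace component from the ensemble
    y = list(x)
    for b in basis:
        c = sum(yi * bi for yi, bi in zip(y, b))
        y = [yi - c * bi for yi, bi in zip(y, b)]
    return y
```

    With order 0 the filter removes only the stationary (DC) clutter; order 1 additionally rejects linearly drifting tissue, at the cost of attenuating more of the low-velocity blood signal.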

  2. Imaging and pattern recognition techniques applied to particulate solids material characterization in mineral processing

    International Nuclear Information System (INIS)

    Bonifazi, G.; La Marca, F.; Massacci, P.

    1999-01-01

    The characterization of particulate solids can be carried out by chemical and mineralogical analysis or, in some cases, by following a new approach based on the combined use of: i) imaging techniques, to detect the surface features of the particles, and ii) pattern recognition procedures, to identify and classify the mineralogical composition on the basis of the previously detected 'pictorial' features. The aim of this methodology is to establish a correlation between image parameters (texture and color) and the physical-chemical parameters characterizing the set of particles to be evaluated. The technique was applied to characterize raw ore coming from a deposit of mineral sands comprising three different lithotypes. An appropriate number of samples of each lithotype was collected, and a vector of attributes (pattern vector), composed of both texture and color parameters, was associated with each sample. Image analysis demonstrated that the selected parameters are quite sensitive to the conditions of image acquisition: optical properties may be strongly influenced by the physical condition of the material, in terms of moisture content, as well as by the optics set-up and lighting conditions. Standard acquisition conditions were therefore selected according to the in situ conditions during sampling. To verify the reliability of the proposed methodology, images were acquired under different conditions of humidity, focusing and illumination, and textural analysis procedures were applied to the images acquired from different samples in order to evaluate the influence of these parameters on the images' pictorial properties. Data resulting from the processing have been used for remote control of the material fed to the mineral processing plant. (author)

  3. Automated method for simultaneous lead and strontium isotopic analysis applied to rainwater samples and airborne particulate filters (PM10).

    Science.gov (United States)

    Beltrán, Blanca; Avivar, Jessica; Mola, Montserrat; Ferrer, Laura; Cerdà, Víctor; Leal, Luz O

    2013-09-03

    A new automated, sensitive, and fast system for the simultaneous online isolation and preconcentration of lead and strontium by sorption on a microcolumn packed with Sr-resin using an inductively coupled plasma mass spectrometry (ICP-MS) detector was developed, hyphenating lab-on-valve (LOV) and multisyringe flow injection analysis (MSFIA). Pb and Sr are directly retained on the sorbent column and eluted with a solution of 0.05 mol L⁻¹ ammonium oxalate. The detection limits achieved were 0.04 ng for lead and 0.03 ng for strontium. Mass calibration curves were used since the proposed system allows the use of different sample volumes for preconcentration. Mass linear working ranges were between 0.13 and 50 ng and 0.1 and 50 ng for lead and strontium, respectively. The repeatability of the method, expressed as RSD, was 2.1% and 2.7% for Pb and Sr, respectively. Environmental samples such as rainwater and airborne particulate (PM10) filters, as well as a certified reference material, SLRS-4 (river water), were satisfactorily analyzed, obtaining recoveries between 90 and 110% for both elements. The main features of the proposed LOV-MSFIA-ICP-MS system are the capability to renew the solid phase extraction at will in a fully automated way, the remarkable stability of the column, which can be reused up to 160 times, and the potential to perform isotopic analysis.

  4. How Can Synchrotron Radiation Techniques Be Applied for Detecting Microstructures in Amorphous Alloys?

    Directory of Open Access Journals (Sweden)

    Gu-Qing Guo

    2015-11-01

    Full Text Available In this work, how synchrotron radiation techniques can be applied for detecting the microstructure in metallic glass (MG) is studied. The unit cell is the basic structural unit in crystals, whereas it has been suggested that the co-existence of various clusters may be the universal structural feature in MG. It is therefore a challenge to detect the microstructure of MG, even at the short-range scale, by directly using synchrotron radiation techniques such as X-ray diffraction and X-ray absorption methods. Here, a feasible scheme is developed in which state-of-the-art synchrotron radiation-based experiments are combined with simulations to investigate the microstructure in MG. By studying a typical MG composition (Zr70Pd30), it is found that various clusters do co-exist in its microstructure, with icosahedral-like clusters being the prevalent structural units. This is the structural origin of the precipitation of an icosahedral quasicrystalline phase prior to the glass-to-crystal phase transformation when Zr70Pd30 MG is heated.

  5. Creep lifing methodologies applied to a single crystal superalloy by use of small scale test techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jeffs, S.P., E-mail: s.p.jeffs@swansea.ac.uk [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Lancaster, R.J. [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Garcia, T.E. [IUTA (University Institute of Industrial Technology of Asturias), University of Oviedo, Edificio Departamental Oeste 7.1.17, Campus Universitario, 33203 Gijón (Spain)

    2015-06-11

    In recent years, advances in creep data interpretation have been achieved either through modified Monkman–Grant relationships or through the more contemporary Wilshire equations, which offer the opportunity of predicting long-term behaviour extrapolated from short-term results. Long-term lifing techniques prove extremely useful in creep-dominated applications, such as in the power generation industry and in particular the nuclear sector, where large static loads are applied; equally, a reduction in lead time for new alloy implementation within the industry is critical. The latter requirement brings about the utilisation of the small punch (SP) creep test, a widely recognised approach for obtaining useful mechanical property information from limited material volumes, as is typically the case with novel alloy development and for any in-situ mechanical testing that may be required. The ability to correlate SP creep results with uniaxial data is vital when considering the benefits of the technique. As such, an equation has been developed, known as the k{sub SP} method, which has proven to be an effective tool across several material systems. The current work explores the application of the aforementioned empirical approaches to correlate small punch creep data obtained on a single crystal superalloy over a range of elevated temperatures. Finite element modelling in the ABAQUS software, based on the uniaxial creep data, has also been implemented to characterise the SP deformation and help corroborate the experimental results.

  6. A numerical study of different projection-based model reduction techniques applied to computational homogenisation

    Science.gov (United States)

    Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia

    2017-10-01

    Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. Within this study, reduced-order modelling (ROM) for the geometrically nonlinear case using hyperelastic materials is applied to the boundary value problem on the micro-scale. This involves the Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Therein, three methods for hyper-reduction, differing in how the nonlinearity is approximated and in the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or Gappy-POD based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors, GNAT) is favoured to obtain an optimal projection and a robust reduced model.
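
    As an illustration of the POD ingredient above, the dominant POD mode of a set of snapshots can be computed with the method of snapshots (an eigen-decomposition of the small snapshot correlation matrix). This is a hedged, self-contained sketch using plain power iteration on a toy snapshot set, not the study's actual ROM pipeline:

```python
def dominant_pod_mode(snapshots, iters=500):
    """Method of snapshots: find the leading eigenvector of the m x m
    snapshot correlation matrix, then lift it to a spatial POD mode."""
    m, n = len(snapshots), len(snapshots[0])
    # Correlation matrix C[i][j] = <u_i, u_j>
    C = [[sum(a * b for a, b in zip(snapshots[i], snapshots[j]))
          for j in range(m)] for i in range(m)]
    # Power iteration for the dominant eigenvector of C
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(m)]
        s = sum(x * x for x in w) ** 0.5
        v = [x / s for x in w]
    # Lift the eigenvector to a normalized spatial mode
    mode = [sum(v[i] * snapshots[i][k] for i in range(m)) for k in range(n)]
    s = sum(x * x for x in mode) ** 0.5
    return [x / s for x in mode]
```

    In a reduced model, the micro-scale displacement field is then approximated in the span of a few such modes, so each new boundary value problem is solved in a low-dimensional subspace.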

  7. Investigation of finite difference recession computation techniques applied to a nonlinear recession problem

    Energy Technology Data Exchange (ETDEWEB)

    Randall, J D

    1978-03-01

    This report presents comparisons of the results of five implicit and explicit finite difference recession computation techniques with results from a more accurate "benchmark" solution applied to a simple one-dimensional nonlinear ablation problem. In the comparison problem, a semi-infinite solid is subjected to a constant heat flux at its surface, and the rate of recession is controlled by the solid material's latent heat of fusion. All thermal properties are assumed constant. The five finite difference methods include three front node dropping schemes, a back node dropping scheme, and a method in which the ablation problem is embedded in an inverse heat conduction problem and no nodes are dropped. Constancy of thermal properties and the semi-infinite and one-dimensional nature of the problem at hand are not necessary assumptions in applying the methods studied to more general problems. The best of the methods studied will be incorporated into APL's Standard Heat Transfer Program.

  8. MULTIVARIATE TECHNIQUES APPLIED TO EVALUATION OF LIGNOCELLULOSIC RESIDUES FOR BIOENERGY PRODUCTION

    Directory of Open Access Journals (Sweden)

    Thiago de Paula Protásio

    2013-12-01

    Full Text Available http://dx.doi.org/10.5902/1980509812361 The evaluation of lignocellulosic wastes for bioenergy production demands the consideration of several characteristics and properties that may be correlated. This fact demands the use of multivariate analysis techniques that allow the evaluation of the relevant energetic factors. This work aimed to apply cluster analysis and principal components analysis to the selection and evaluation of lignocellulosic wastes for bioenergy production. Eight types of residual biomass were used, for which the elemental composition (C, H, O, N, S), the lignin, total extractives and ash contents, the basic density, and the higher and lower heating values were determined. Both multivariate techniques applied to the evaluation and selection of lignocellulosic wastes were efficient, and similarities were observed between the biomass groups formed by them. Through the interpretation of the first principal component it was possible to create a global development index for evaluating the viability of energetic uses of the biomass. The interpretation of the second principal component provided a contrast between the nitrogen and sulfur contents and the oxygen content.
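
    A minimal sketch of the cluster-analysis step — grouping biomass samples by similarity of their property vectors — could look like the following naive single-linkage agglomerative clustering. This is an illustrative toy under assumed feature vectors and linkage choice, not the authors' procedure (which would also standardize the variables first):

```python
def single_linkage_clusters(points, n_clusters):
    """Naive agglomerative clustering: repeatedly merge the two clusters
    whose closest members (single linkage) are nearest, until n_clusters
    remain. `points` is a list of equal-length feature vectors."""
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(points[a], points[b])) ** 0.5

    while len(clusters) > n_clusters:
        best = None  # (distance, i, j) of the closest cluster pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters
```

    With, say, eight samples described by heating value, ash content and density, stopping at two or three clusters reproduces the kind of biomass groupings the abstract describes.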

  9. Bioclimatic and vegetation mapping of a topographically complex oceanic island applying different interpolation techniques

    Science.gov (United States)

    Garzón-Machado, Víctor; Otto, Rüdiger; del Arco Aguilar, Marcelino José

    2014-07-01

    Different spatial interpolation techniques have been applied to construct objective bioclimatic maps of La Palma, Canary Islands. Interpolation of climatic data on this topographically complex island, with strong elevation and climatic gradients, represents a challenge. Furthermore, meteorological stations are not evenly distributed over the island, with few stations at high elevations. We carried out spatial interpolations of the compensated thermicity index (Itc) and the annual ombrothermic index (Io) in order to obtain appropriate bioclimatic maps using automatic interpolation procedures, and to establish their relation to potential vegetation units for constructing a climatophilous potential natural vegetation map (CPNV). For this purpose, we used five interpolation techniques implemented in a GIS: inverse distance weighting (IDW), ordinary kriging (OK), ordinary cokriging (OCK), multiple linear regression (MLR) and MLR followed by ordinary kriging of the regression residuals. Two topographic variables (elevation and aspect), derived from a high-resolution digital elevation model (DEM), were included in OCK and MLR. The accuracy of the interpolation techniques was examined through the error statistics of test data derived from comparison of the predicted and measured values. The best results for both bioclimatic indices were obtained with the MLR method with interpolation of the residuals, which showed the highest R² of the regression between observed and predicted values and the lowest root mean square errors. MLR with correction of interpolated residuals is an attractive interpolation method for bioclimatic mapping on this oceanic island, since it not only fully exploits easily available geographic information but also takes into account local variation of the climatic data.
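
    Of the five interpolators compared above, inverse distance weighting is the simplest to state: the value at an unsampled point is a weighted average of station values, with weights decaying as a power of distance. A hedged sketch (station coordinates and the power parameter are illustrative assumptions; real implementations add search radii and elevation covariates):

```python
def idw(stations, values, x, y, power=2.0):
    """Inverse distance weighting: estimate the value at (x, y) from
    station coordinates and their measured values."""
    num = den = 0.0
    for (sx, sy), v in zip(stations, values):
        d2 = (sx - x) ** 2 + (sy - y) ** 2
        if d2 == 0.0:
            return v  # exactly at a station: return its value
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den
```

    Because IDW uses distance alone, it cannot exploit the elevation gradient that dominates this island's climate, which is why the MLR-plus-residual-kriging approach outperformed it.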

  10. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    Directory of Open Access Journals (Sweden)

    Wilmar Hernandez

    2007-01-01

    Full Text Available In this paper, a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is presented. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart sensors that today’s cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight, because there are some open research issues that have to be solved. This paper draws attention to one of the open research issues and tries to arouse researchers’ interest in the fusion of intelligent sensors and optimal signal processing techniques.
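
    As a concrete instance of the optimal-filtering theme (compare the Kalman-filter record elsewhere in this collection), here is a minimal scalar Kalman filter smoothing a noisy sensor reading. It is an illustrative sketch under an assumed random-walk state model with assumed noise variances q and r, not one of the filters used in the surveyed work:

```python
def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed in noise.
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                 # predict: state uncertainty grows by q
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update estimate toward the measurement
        p *= (1.0 - k)         # update (shrink) the error variance
        estimates.append(x)
    return estimates
```

    Unlike a fixed moving average, the gain k adapts automatically: it is large while the estimate is uncertain and shrinks as confidence grows, which is the sense in which the filter is "optimal" for the assumed noise model.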

  11. Filters in topology optimization based on Helmholtz‐type differential equations

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Sigmund, Ole

    2011-01-01

    The aim of this paper is to apply a Helmholtz‐type partial differential equation as an alternative to standard density filtering in topology optimization problems. Previously, this approach has been successfully applied as a sensitivity filter. The usual filtering techniques in topology … from the neighbor subdomains is an expensive operation. The proposed filter technique requires only mesh information necessary for the finite element discretization of the problem. The main idea is to define the filtered variable implicitly as a solution of a Helmholtz‐type differential equation …
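
    In one dimension the filtered field ρ̃ solves −r²ρ̃″ + ρ̃ = ρ, which after finite-difference discretization is a tridiagonal system that needs no neighbor-search data. A hedged sketch (uniform mesh, zero-flux boundaries, Thomas algorithm; the paper's actual finite element formulation will differ):

```python
def helmholtz_filter_1d(rho, r, h=1.0):
    """Solve -r^2 * d2(rho_f)/dx2 + rho_f = rho on a uniform 1-D mesh
    with zero-flux (Neumann) boundaries, via the Thomas algorithm."""
    n = len(rho)
    a = r * r / (h * h)
    lower = [-a] * n
    diag = [1.0 + 2.0 * a] * n
    upper = [-a] * n
    diag[0] = diag[-1] = 1.0 + a  # mirror ghost node at each boundary
    # Forward sweep
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = upper[0] / diag[0]
    dp[0] = rho[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * cp[i - 1]
        cp[i] = upper[i] / m
        dp[i] = (rho[i] - lower[i] * dp[i - 1]) / m
    # Back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

    The parameter r plays the role of the filter radius: a uniform density passes through unchanged, while a single-element spike is spread over a neighborhood of width ~r with its total "mass" preserved.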

  12. Virtual analysis of influence of a filter on mould filling

    Directory of Open Access Journals (Sweden)

    Zhian Xu

    2011-11-01

    Full Text Available Ceramic filters are used to keep slag and impurities out of castings in foundry applications. When not properly applied, the presence of these filters may have a significant influence on mould filling. 3-D casting simulation has been applied to study the effects of the use of a ceramic filter on the metal flow in a gating system. Instead of using a pressure drop model to represent the behaviour of the liquid metal flow passing through a filter, the exact filter geometry, created by a high-resolution CT scan and a non-destructive imaging technique, is applied in the gating system in the simulation. In this research, nodular cast iron is poured into a block casting. A depressurized gating system is used. After a choke, a filter with different orientations is placed in the system. Mould filling coupled with temperature is simulated. Geometries with different orientations of the filter, and without the filter, have been investigated. The simulated results show that the filter has no influence on the pouring time of the casting if the choke section is small enough compared to the effective section of the filter. Although the filter has no significant influence on the flow patterns in the block casting itself, the flow patterns in the filter zone are different. When the liquid metal passes a horizontal filter, it is broken into many small streams and shows a shower effect. After the part under the filter is full, the shower effect disappears. When the filter is located in the vertical position, the shower effect is reduced due to gravity. If no filter is present in the system, the liquid metal passes through the filter zone at high speed and causes surface turbulence.

  13. Edge-filter technique and dominant frequency analysis for high-speed railway monitoring with fiber Bragg gratings

    Science.gov (United States)

    Kouroussis, Georges; Kinet, Damien; Mendoza, Edgar; Dupuy, Julien; Moeyaert, Véronique; Caucheteur, Christophe

    2016-07-01

    Structural health and operation monitoring are of growing interest in the development of railway networks. Conventional systems of infrastructure monitoring already exist (e.g. axle counters, track circuits) but present some drawbacks. Alternative solutions are therefore studied and developed. In this field, optical fiber sensors, and more particularly fiber Bragg grating (FBG) sensors, are particularly relevant due to their immunity to electromagnetic fields and simple wavelength-division-multiplexing capability. Field trials conducted up to now have demonstrated that FBG sensors provide useful information about train composition, positioning, speed, acceleration and weigh-in-motion estimations. Nevertheless, for practical deployment, cost-effectiveness should be ensured, specifically at the interrogator side, which also has to be fast (>1 kHz repetition rate), accurate (∼1 pm wavelength shift) and reliable. To reach this objective, we propose in this paper to associate a low-cost, high-speed interrogator with an adequate signal-processing algorithm to dynamically monitor cascaded wavelength-multiplexed FBGs and to accurately capture the parameters of interest for railway traffic monitoring. This method has been field-tested with a Redondo Optics Inc. interrogator based on the well-known edge-filter demodulation technique. To determine the train speed from the raw data, a dominant frequency analysis has been implemented. Using this original method, we show that we can retrieve the speed of the trains even when the time history strain signature is strongly affected by measurement noise. The results are assessed against complementary data obtained from a spectrometer-based FBG interrogator.
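
    The dominant-frequency step can be illustrated with a brute-force DFT magnitude scan: pick the non-DC bin with the largest magnitude and convert its index to Hz. A simplified sketch (a real deployment would use an FFT with windowing; the sampling rate below is an assumption, not the interrogator's actual rate):

```python
import math

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest non-DC DFT bin.
    fs is the sampling rate in Hz."""
    n = len(signal)
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2 + 1):  # skip k=0 (DC / static strain offset)
        re = sum(s * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(signal))
        im = -sum(s * math.sin(2 * math.pi * k * i / n)
                  for i, s in enumerate(signal))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n
```

    Because the axle-passage strain signature is periodic at (train speed)/(axle spacing), the dominant frequency scales directly with speed, which is why this estimate survives heavy measurement noise.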

  14. Modern structure of methods and techniques of marketing research, applied by the world and Ukrainian research companies

    Directory of Open Access Journals (Sweden)

    Bezkrovnaya Yulia

    2015-08-01

    Full Text Available The article presents the results of an empirical justification of the structure of methods and techniques of marketing research into consumer decisions applied by international and Ukrainian research companies.

  15. Learning mediastinoscopy: the need for education, experience and modern techniques--interdependency of the applied technique and surgeon's training level.

    Science.gov (United States)

    Walles, Thorsten; Friedel, Godehard; Stegherr, Tobias; Steger, Volker

    2013-04-01

    Mediastinoscopy represents the gold standard for invasive mediastinal staging. While learning and teaching the surgical technique are challenging due to the limited accessibility of the operation field, both have benefited from the implementation of video-assisted techniques. However, it has not yet been established whether video-assisted mediastinoscopy in itself improves mediastinal staging. Retrospective single-centre cohort analysis of 657 mediastinoscopies performed at a specialized tertiary care thoracic surgery unit from 1994 to 2006. The number of specimens obtained per procedure and per lymph node station (2, 4, 7, 8 for mediastinoscopy and 2-9 for open lymphadenectomy), the number of lymph node stations examined, and the sensitivity and negative predictive value were calculated, with a focus on the technique employed (video-assisted vs standard technique) and the surgeon's experience. Overall sensitivity was 60%, accuracy was 90% and the negative predictive value was 88%. With the conventional technique, experience alone improved sensitivity from 49 to 57%, most markedly at the right paratracheal region (from 62 to 82%). With the video-assisted technique, experienced surgeons raised sensitivity from 57 to 79%, in contrast to inexperienced surgeons, who lowered sensitivity from 49 to 33%. We found significant differences concerning (i) the total number of specimens taken, (ii) the number of lymph node stations examined, (iii) the number of specimens taken per lymph node station and (iv) true positive mediastinoscopies. The video-assisted technique can significantly improve the results of mediastinoscopy. A thorough education in the modern video-assisted technique is mandatory for thoracic surgeons until they can fully exhaust its potential.

  16. A single source microwave photonic filter using a novel single-mode fiber to multimode fiber coupling technique.

    Science.gov (United States)

    Chang, John; Fok, Mable P; Meister, James; Prucnal, Paul R

    2013-03-11

    In this paper we present a fully tunable and reconfigurable single-laser multi-tap microwave photonic FIR filter that utilizes a special single-mode-to-multimode (SM-to-MM) fiber combiner to sum the taps. The filter requires only a single laser source for all the taps and a passive component, the SM-to-MM combiner, for incoherent summing of the signal. The SM-to-MM combiner does not produce optical interference during signal merging and is phase-insensitive. We experimentally demonstrate an eight-tap filter with both positive and negative programmable coefficients, with excellent correspondence between predicted and measured values. The magnitude response shows a clean and accurate function across the entire bandwidth and proves successful operation of the FIR filter using a SM-to-MM combiner.
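
    The predicted magnitude response of such a multi-tap filter follows directly from the tap coefficients, since an N-tap FIR filter has H(e^{jΩ}) = Σ c_k e^{-jΩk}. A hedged sketch of evaluating |H| for an assumed eight-tap bipolar coefficient set (not the paper's measured taps):

```python
import math

def fir_magnitude(taps, omega):
    """|H(e^{j*omega})| for an FIR filter y[n] = sum_k taps[k] * x[n-k],
    with omega in radians/sample."""
    re = sum(c * math.cos(omega * k) for k, c in enumerate(taps))
    im = -sum(c * math.sin(omega * k) for k, c in enumerate(taps))
    return math.hypot(re, im)

# Example: alternating positive/negative taps give a highpass-like response
taps = [+1, -1, +1, -1, +1, -1, +1, -1]
```

    Negative coefficients are what make notch and highpass shapes possible, which is why the phase-insensitive incoherent summing of bipolar taps is the key enabling property of the combiner.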

  17. Construction Techniques for LC Highpass and Lowpass Filters Used in the 1 MHz to 1 GHz Frequency Range

    National Research Council Canada - National Science Library

    Martinsen, W

    2003-01-01

    .... It examines limitations in the frequency domain of the two basic components, inductors and capacitors, used to build these filters, the unwanted effects of the distributed reactance of opposite sign...

  18. Situational Awareness Applied to Geology Field Mapping using Integration of Semantic Data and Visualization Techniques

    Science.gov (United States)

    Houser, P. I. Q.

    2017-12-01

    21st century earth science is data-intensive, characterized by heterogeneous, sometimes voluminous collections representing phenomena at different scales collected for different purposes and managed in disparate ways. However, much of the earth's surface still requires boots-on-the-ground, in-person fieldwork in order to detect the subtle variations from which humans can infer complex structures and patterns. Nevertheless, field experiences can and should be enabled and enhanced by a variety of emerging technologies. The goal of the proposed research project is to pilot test emerging data integration, semantic and visualization technologies for evaluation of their potential usefulness in the field sciences, particularly in the context of field geology. The proposed project will investigate new techniques for data management and integration enabled by semantic web technologies, along with new techniques for augmented reality that can operate on such integrated data to enable in situ visualization in the field. The research objectives include: Develop new technical infrastructure that applies target technologies to field geology; Test, evaluate, and assess the technical infrastructure in a pilot field site; Evaluate the capabilities of the systems for supporting and augmenting field science; and Assess the generality of the system for implementation in new and different types of field sites. Our hypothesis is that these technologies will enable what we call "field science situational awareness" - a cognitive state formerly attained only through long experience in the field - that is highly desirable but difficult to achieve in time- and resource-limited settings. 
Expected outcomes include elucidation of how, and in what ways, these technologies are beneficial in the field; enumeration of the steps and requirements to implement these systems; and cost/benefit analyses that evaluate under what conditions the investments of time and resources are advisable to construct

  19. Electrochemical microfluidic chip based on molecular imprinting technique applied for therapeutic drug monitoring.

    Science.gov (United States)

    Liu, Jiang; Zhang, Yu; Jiang, Min; Tian, Liping; Sun, Shiguo; Zhao, Na; Zhao, Feilang; Li, Yingchun

    2017-05-15

    In this work, a novel electrochemical detection platform was established by integrating the molecular imprinting technique with a microfluidic chip, and applied for trace measurement of three therapeutic drugs. The chip foundation is an acrylic panel with designed grooves. In the detection cell of the chip, a Pt wire is used as the counter and reference electrode, and a Au-Ag alloy microwire (NPAMW) with a 3D nanoporous surface, modified with an electro-polymerized molecularly imprinted polymer (MIP) film, serves as the working electrode. Detailed characterization of the chip and the working electrode was performed, and the properties were explored by cyclic voltammetry and electrochemical impedance spectroscopy. Two methods, based respectively on electrochemical catalysis and the MIP gate effect, were employed for detecting warfarin sodium with the prepared chip. The linear range of the electrochemical catalysis method was 5×10⁻⁶-4×10⁻⁴ M, which fails to meet clinical testing demands. By contrast, the linear range of the gate-effect method was 2×10⁻¹¹-4×10⁻⁹ M, with a remarkably low detection limit of 8×10⁻¹² M (S/N = 3), which is able to satisfy clinical assays. The system was then applied for 24-h monitoring of the drug concentration in plasma after administration of warfarin sodium in rabbit, and the corresponding pharmacokinetic parameters were obtained. In addition, the microfluidic chip was successfully adopted to analyze cyclophosphamide and carbamazepine, implying its good versatility. It is expected that this novel electrochemical microfluidic chip can act as a promising format for point-of-care testing via monitoring different analytes sensitively and conveniently. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Analysis of an effective optical filtering technique to enhance microwave phase shifts based on slow and fast light effects

    DEFF Research Database (Denmark)

    Chen, Yaohui; Öhman, Filip; Xue, Weiqi

    2008-01-01

    We theoretically analyze and interpret an effective mechanism, which employs optical filtering to enhance the microwave phase shift that can be achieved in semiconductor optical amplifiers based on slow and fast light effects.

  1. The simulation of Typhoon-induced coastal inundation in Busan, South Korea applying the downscaling technique

    Science.gov (United States)

    Jang, Dongmin; Park, Junghyun; Yuk, Jin-Hee; Joh, MinSu

    2017-04-01

    Due to typhoons, the south coastal cities of South Korea, including Busan, are very vulnerable to surge, waves and the corresponding coastal inundation, and are affected every year. In 2016, South Korea suffered tremendous damage from typhoon 'Chaba', which developed north-east of Guam on Sep. 28 and had a maximum 10-minute sustained wind speed of about 50 m/s, a 1-minute sustained wind speed of 75 m/s and a minimum central pressure of 905 hPa. As 'Chaba', the strongest typhoon since 'Maemi' in 2003, hit South Korea on Oct. 5, it caused massive economic and casualty damage to Ulsan, Gyeongju and Busan. In particular, the damage from typhoon-induced coastal inundation in Busan, where many high-rise buildings and residential areas are concentrated near the coast, was serious. The coastal inundation could be affected more by the strong wind-induced waves than by the surge. In fact, it was observed that the surge height was about 1 m on average and the significant wave height was about 8 m in the coastal sea near Busan on Oct. 5 due to 'Chaba'. Even though the typhoon-induced surge elevated the sea level, the typhoon-induced long-period waves, with wave periods of more than 15 s, could play a more important role in the inundation. The present work simulated the coastal inundation induced by 'Chaba' in Busan, South Korea, considering the effects of typhoon-induced surge and waves. For the 'Chaba' hindcast, the high-resolution Weather Research and Forecasting model (WRF) was applied using reanalysis data produced by NCEP (FNL, 0.25 degree) for the boundary and initial conditions, and was validated against observations of wind speed, direction and pressure. The typhoon-induced coastal inundation was simulated with an unstructured-grid model, the Finite Volume Community Ocean Model (FVCOM), which is a fully current-wave coupled model. To simulate the wave-induced inundation, a one-way multi-domain downscaling technique was applied. Firstly, a mother domain including the Korean peninsula was

  2. Nonbinary quantification technique accounting for myocardial infarct heterogeneity: Feasibility of applying percent infarct mapping in patients.

    Science.gov (United States)

    Mastrodicasa, Domenico; Elgavish, Gabriel A; Schoepf, U Joseph; Suranyi, Pal; van Assen, Marly; Albrecht, Moritz H; De Cecco, Carlo N; van der Geest, Rob J; Hardy, Rayphael; Mantini, Cesare; Griffith, L Parkwood; Ruzsics, Balazs; Varga-Szemes, Akos

    2018-02-15

    Binary threshold-based quantification techniques ignore myocardial infarct (MI) heterogeneity, yielding substantial misquantification of MI. The aim was to assess the technical feasibility of MI quantification using percent infarct mapping (PIM), a prototype nonbinary algorithm, in patients with suspected MI. Study type: prospective cohort. Population: patients (n = 171) with suspected MI referred for cardiac MRI. Inversion recovery balanced steady-state free-precession for late gadolinium enhancement (LGE) and modified Look-Locker inversion recovery (MOLLI) T1-mapping on a 1.5T system. Infarct volume (IV) and infarct fraction (IF) were quantified by two observers based on manual delineation, binary approaches (2-5 standard deviations [SD] and full-width at half-maximum [FWHM] thresholds) in LGE images, and by applying the PIM algorithm to T1 and LGE images (PIM-T1; PIM-LGE). IV and IF were analyzed using repeated-measures analysis of variance (ANOVA). Agreement between the approaches was determined with Bland-Altman analysis. Interobserver agreement was assessed by intraclass correlation coefficient (ICC) analysis. MI was observed in 89 (54.9%) patients and 185 (38%) short-axis slices. IF with the 2, 3, 4, 5SD and FWHM techniques was 15.7 ± 6.6, 13.4 ± 5.6, 11.6 ± 5.0, 10.8 ± 5.2, and 10.0 ± 5.2%, respectively. The 5SD and FWHM techniques had the best agreement with manual IF (9.9 ± 4.8%) determination (bias 1.0 and 0.2%; P = 0.1426 and P = 0.8094, respectively). The 2SD and 3SD algorithms significantly overestimated manual IF (9.9 ± 4.8%; both P < 0.0001). PIM-LGE measured significantly lower IF (7.8 ± 3.7%) compared to manual values (P < 0.0001). PIM-LGE, however, showed the best agreement with the PIM-T1 reference (7.6 ± 3.6%, P = 0.3156). Interobserver agreement was rated good to excellent for IV (ICCs 0.727-0.820) and fair to good for IF (0.589-0.736). The application of the PIM-LGE technique for MI
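
The binary approaches compared above are simple to state. The following is a minimal sketch (not the authors' implementation) of n-SD and FWHM threshold quantification on a synthetic image; the array names and the synthetic geometry are assumptions for illustration:

```python
import numpy as np

def infarct_fraction_sd(img, myo_mask, remote_mask, n_sd):
    """n-SD threshold: voxels brighter than the remote-myocardium mean
    plus n_sd standard deviations count as infarct."""
    remote = img[remote_mask]
    thr = remote.mean() + n_sd * remote.std()
    return ((img > thr) & myo_mask).sum() / myo_mask.sum()

def infarct_fraction_fwhm(img, myo_mask):
    """Full-width at half-maximum: threshold at 50% of the maximum
    myocardial signal intensity."""
    thr = 0.5 * img[myo_mask].max()
    return ((img > thr) & myo_mask).sum() / myo_mask.sum()

# Synthetic LGE-like image: dim myocardium with a bright "infarct" patch
rng = np.random.default_rng(0)
img = rng.normal(10.0, 1.0, size=(64, 64))
myo = np.zeros((64, 64), dtype=bool)
myo[20:44, 20:44] = True                  # 576-voxel myocardium
remote = np.zeros_like(myo)
remote[20:44, 20:30] = True               # remote (non-infarcted) region
img[30:40, 34:44] += 90.0                 # 100-voxel hyperenhanced patch

if_5sd = infarct_fraction_sd(img, myo, remote, 5)
if_fwhm = infarct_fraction_fwhm(img, myo)
```

On this clean synthetic case both estimates land near the true 100/576 ≈ 17.4%; on real data with heterogeneous gray-zone voxels, the 2SD and 3SD variants admit more normal-looking voxels, which is the overestimation the study reports.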

  3. An iterative ensemble Kalman filter for reservoir engineering applications

    NARCIS (Netherlands)

    Krymskaya, M.V.; Hanea, R.G.; Verlaan, M.

    2009-01-01

    The study focused on examining the usage and applicability of ensemble Kalman filtering techniques in history matching procedures. The ensemble Kalman filter (EnKF) is nowadays often applied to solving such a problem. Meanwhile, the traditional EnKF requires assumption of the
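
As a concrete illustration of the analysis step the abstract refers to, here is a minimal stochastic (perturbed-observation) EnKF update in NumPy. This is a generic textbook sketch, not the authors' history-matching code; the toy two-component state and all names are assumptions:

```python
import numpy as np

def enkf_analysis(ens, obs, H, R, rng):
    """One stochastic EnKF analysis step: update every ensemble member
    toward perturbed observations using sample covariances."""
    n_ens = ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)    # ensemble anomalies
    HA = H @ A                                   # anomalies in observation space
    P_HT = A @ HA.T / (n_ens - 1)                # sample P H^T
    HP_HT = HA @ HA.T / (n_ens - 1)              # sample H P H^T
    K = P_HT @ np.linalg.inv(HP_HT + R)          # Kalman gain
    # Perturb the observation for each member (keeps analysis spread consistent)
    obs_pert = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), R, size=n_ens).T
    return ens + K @ (obs_pert - H @ ens)

# Toy two-component state; only the first component is observed
rng = np.random.default_rng(0)
truth = np.array([1.0, -2.0])
H = np.array([[1.0, 0.0]])
R = np.array([[0.05]])
ens = truth[:, None] + rng.normal(size=(2, 200))
obs = H @ truth + rng.normal(0.0, np.sqrt(0.05), size=1)
analysis = enkf_analysis(ens, obs, H, R, rng)
```

After the update, the spread of the observed component collapses toward the observation error level, which is exactly the behavior history matching exploits when conditioning reservoir parameters on production data.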

  4. Adaptive prediction applied to seismic event detection

    International Nuclear Information System (INIS)

    Clark, G.A.; Rodgers, P.W.

    1981-01-01

    Adaptive prediction was applied to the problem of detecting small seismic events in microseismic background noise. The Widrow-Hoff LMS adaptive filter used in a prediction configuration is compared with two standard seismic filters as an onset indicator. Examples demonstrate the technique's usefulness with both synthetic and actual seismic data

  5. Adaptive prediction applied to seismic event detection

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G.A.; Rodgers, P.W.

    1981-09-01

    Adaptive prediction was applied to the problem of detecting small seismic events in microseismic background noise. The Widrow-Hoff LMS adaptive filter used in a prediction configuration is compared with two standard seismic filters as an onset indicator. Examples demonstrate the technique's usefulness with both synthetic and actual seismic data.
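
The Widrow-Hoff LMS filter in a prediction configuration can be sketched in a few lines. The following toy example uses a normalized step size for stability (a choice of this sketch, not necessarily the paper's) and synthetic "microseismic" background with a small event; all names and signal parameters are assumptions:

```python
import numpy as np

def lms_prediction_error(x, order=8, mu=0.5, eps=1e-8):
    """LMS adaptive filter in a prediction configuration: predict x[n]
    from the previous `order` samples.  The prediction error is large
    where the signal departs from the learned background, so it can
    serve as an onset indicator."""
    w = np.zeros(order)
    err = np.zeros(len(x))
    for n in range(order, len(x)):
        past = x[n - order:n][::-1]       # most recent sample first
        e = x[n] - w @ past               # one-step-ahead prediction error
        w += mu * e * past / (eps + past @ past)   # normalized LMS update
        err[n] = e
    return err

# Synthetic background (predictable oscillation plus noise) with a
# small "event" starting at sample 1500
rng = np.random.default_rng(1)
x = np.sin(0.2 * np.arange(2000)) + 0.1 * rng.normal(size=2000)
x[1500:1520] += 2.0
err = lms_prediction_error(x)
```

Once the predictor has adapted to the background, the error stays near the noise floor and then spikes at the event onset, which is the detection behavior the abstract describes.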

  6. Advanced examination techniques applied to the qualification of critical welds for the ITER correction coils

    CERN Document Server

    Sgobba, Stefano; Libeyre, Paul; Marcinek, Dawid Jaroslaw; Piguiet, Aline; Cécillon, Alexandre

    2015-01-01

    The ITER correction coils (CCs) consist of three sets of six coils located between the toroidal (TF) and poloidal field (PF) magnets. The CCs rely on a Cable-in-Conduit Conductor (CICC), whose supercritical cooling at 4.5 K is provided by helium inlets and outlets. The assembly of the nozzles to the stainless steel conductor conduit includes fillet welds requiring full penetration through the thickness of the nozzle. Static and cyclic stresses have to be sustained by the inlet welds during operation. The entire volume of the helium inlet and outlet welds, which are subject to the most stringent quality levels for imperfections according to the standards in force, is virtually uninspectable with sufficient resolution by conventional or computed radiography or by ultrasonic testing. On the other hand, X-ray computed tomography (CT) was successfully applied to inspect the full weld volume of several dozen helium inlet qualification samples. The extensive use of CT techniques allowed significant progress in the ...

  7. The Study of Mining Activities and their Influences in the Almaden Region Applying Remote Sensing Techniques

    International Nuclear Information System (INIS)

    Rico, C.; Schmid, T.; Millan, R.; Gumuzzio, J.

    2010-01-01

    This scientific-technical report is part of ongoing research work carried out by Celia Rico Fraile in order to obtain the Diploma of Advanced Studies as part of her PhD studies. The work has been developed in collaboration with the Faculty of Science at the Universidad Autonoma de Madrid and the Department of Environment at CIEMAT. The main objective of this work was the characterization and classification of land use in Almaden (Ciudad Real) during cinnabar mineral exploitation and after mining activities ceased in 2002, developing a methodology focused on the integration of remote sensing techniques applying multispectral and hyperspectral satellite data. By means of preprocessing and processing of the satellite images as well as data obtained from field campaigns, a spectral library was compiled in order to obtain representative land surfaces within the study area. Monitoring results show that the extent of areas affected by mining activities has diminished rapidly in recent years. (Author) 130 refs

  8. Spatial analysis techniques applied to uranium prospecting in Chihuahua State, Mexico

    Science.gov (United States)

    Hinojosa de la Garza, Octavio R.; Montero Cabrera, María Elena; Sanín, Luz H.; Reyes Cortés, Manuel; Martínez Meyer, Enrique

    2014-07-01

    To estimate the distribution of uranium minerals in Chihuahua, the advanced statistical model "Maximum Entropy Method" (MaxEnt) was applied. A distinguishing feature of this method is that it can fit complex models from small datasets (x and y data), such as the locations of uranium ores in the State of Chihuahua. For georeferencing uranium ores, a database from the United States Geological Survey and a workgroup of experts in Mexico was used. The main contribution of this paper is the proposal of maximum entropy techniques to obtain the mineral's potential distribution. The model used 24 environmental layers, such as topography, gravimetry, climate (WorldClim), and soil properties, to project the uranium distribution across the study area. For validation of the places predicted by the model, comparisons were made with other research of the Mexican Geological Survey, with direct exploration of specific areas, and through talks with former exploration workers of the enterprise "Uranio de Mexico". Results: new uranium areas predicted by the model were validated, and some relationship was found between the model predictions and geological faults. Conclusions: modeling by spatial analysis provides additional information to the energy and mineral resources sectors.
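
The core of MaxEnt can be illustrated compactly: choose cell weights of the form p ∝ exp(w·features) so that the model's expected feature values match the means observed at presence locations. The sketch below uses plain gradient ascent on synthetic data; every name and the data are assumptions, and real MaxEnt software adds regularization and richer feature classes:

```python
import numpy as np

def maxent_fit(features, presence_idx, lr=0.1, n_iter=5000):
    """Fit w so that p(cell) ∝ exp(features @ w) reproduces the mean
    feature vector of the presence cells (the MaxEnt constraint)."""
    target = features[presence_idx].mean(axis=0)
    w = np.zeros(features.shape[1])
    for _ in range(n_iter):
        z = features @ w
        p = np.exp(z - z.max())
        p /= p.sum()                       # distribution over grid cells
        w += lr * (target - p @ features)  # log-likelihood gradient
    return w, p

# Synthetic grid: 3 environmental layers; "ore" sites favor high layer 0
rng = np.random.default_rng(4)
layers = rng.normal(size=(500, 3))
presence = np.where(layers[:, 0] > 0.5)[0]
w, p = maxent_fit(layers, presence)
```

After fitting, cells with high values of the favored layer receive high probability, which is how the fitted surface is read as a "potential distribution" map of the mineral.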

  9. OBIC technique applied to wide bandgap semiconductors from 100 K up to 450 K

    Science.gov (United States)

    Hamad, H.; Planson, D.; Raynaud, C.; Bevilacqua, P.

    2017-05-01

    Wide bandgap semiconductors have recently become more widely used in the power electronics domain. They are predicted to replace traditional silicon, especially for high voltage and/or high frequency devices. Device design has made a lot of progress in the last two decades. Substrates up to six inches in diameter, with very low defect densities, have now been commercialized. Such development is due to continuous studies. Among these, techniques that generate excess charge carriers in the space-charge region (like OBIC, optical beam induced current, and EBIC, electron beam induced current) are useful to analyze the variation of the electric field as a function of the voltage and the beam position. This paper shows the OBIC technique applied to wide bandgap semiconductor-based devices. OBIC cartography gives an image of the electric field in the device, and analysis of the OBIC signal helps one to determine some characteristics of the semiconductors, like minority carrier lifetime and ionization rates. These are key parameters for predicting device switching behavior and breakdown voltage.

  10. Applying Toyota production system techniques for medication delivery: improving hospital safety and efficiency.

    Science.gov (United States)

    Newell, Terry L; Steinmetz-Malato, Laura L; Van Dyke, Deborah L

    2011-01-01

    The inpatient medication delivery system used at a large regional acute care hospital in the Midwest had become antiquated and inefficient. The existing 24-hr medication cart-fill exchange process with delivery to the patients' bedside did not always provide ordered medications to the nursing units when they were needed. In 2007 the principles of the Toyota Production System (TPS) were applied to the system. Project objectives were to improve medication safety and reduce the time needed for nurses to retrieve patient medications. A multidisciplinary team was formed that included representatives from nursing, pharmacy, informatics, quality, and various operational support departments. Team members were educated and trained in the tools and techniques of TPS, and then designed and implemented a new pull system benchmarking the TPS Ideal State model. The newly installed process, providing just-in-time medication availability, has measurably improved delivery processes as well as patient safety and satisfaction. Other positive outcomes have included improved nursing satisfaction, reduced nursing wait time for delivered medications, and improved efficiency in the pharmacy. After a successful pilot on two nursing units, the system is being extended to the rest of the hospital. © 2010 National Association for Healthcare Quality.

  11. Statistical Techniques Applied to Aerial Radiometric Surveys (STAARS): cluster analysis. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Pirkle, F.L.; Stablein, N.K.; Howell, J.A.; Wecksung, G.W.; Duran, B.S.

    1982-11-01

    One objective of the aerial radiometric surveys flown as part of the US Department of Energy's National Uranium Resource Evaluation (NURE) program was to ascertain the regional distribution of near-surface radioelement abundances. Some method for identifying groups of observations with similar radioelement values was therefore required. It is shown in this report that cluster analysis can identify such groups even when no a priori knowledge of the geology of an area exists. A method of convergent k-means cluster analysis coupled with a hierarchical cluster analysis is used to classify 6991 observations (three radiometric variables at each observation location) from the Precambrian rocks of the Copper Mountain, Wyoming, area. Another method, one that combines a principal components analysis with a convergent k-means analysis, is applied to the same data. These two methods are compared with a convergent k-means analysis that utilizes available geologic knowledge. All three methods identify four clusters. Three of the clusters represent background values for the Precambrian rocks of the area, and one represents outliers (anomalously high ²¹⁴Bi). A segmentation of the data corresponding to geologic reality as discovered by other methods has been achieved based solely on analysis of aerial radiometric data. The techniques employed are composites of classical clustering methods designed to handle the special problems presented by large data sets. 20 figures, 7 tables
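
A minimal version of the "principal components + convergent k-means" combination can be sketched as follows. This is generic NumPy, with farthest-point seeding standing in for the report's convergent initialization; the three synthetic radioelement clusters and all names are assumptions:

```python
import numpy as np

def kmeans(X, k, seed=0, n_iter=100):
    """Lloyd's k-means with farthest-point seeding, iterated until the
    cluster assignments stop changing (a simple convergence criterion)."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):                # farthest-point initialization
        d = ((X[:, None] - np.array(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    labels = np.full(len(X), -1)
    for _ in range(n_iter):
        d = ((X[:, None] - centers[None]) ** 2).sum(-1)
        new = d.argmin(1)
        if np.array_equal(new, labels):
            break
        labels = new
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

# Three synthetic "radioelement" groups (e.g. K, U, Th channels)
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 0.1, size=(100, 3))
               for m in ([1, 2, 8], [4, 1, 2], [8, 6, 1])])
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                    # first two principal components
labels = kmeans(scores, 3)
```

Clustering the PCA scores rather than the raw channels is the second method the report describes: the projection concentrates the between-group variance into a few components before the k-means pass.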

  12. Formulation of Indomethacin Colon Targeted Delivery Systems Using Polysaccharides as Carriers by Applying Liquisolid Technique

    Directory of Open Access Journals (Sweden)

    Kadria A. Elkhodairy

    2014-01-01

    Full Text Available The present study aimed at the formulation of matrix tablets for a colon-specific drug delivery (CSDD) system of indomethacin (IDM) by applying the liquisolid (LS) technique. A CSDD system based on time-dependent polymethacrylates and enzyme-degradable polysaccharides was established. Eudragit RL 100 (E-RL 100) was employed as the time-dependent polymer, whereas bacterially degradable polysaccharides were presented as LS systems loaded with the drug. Indomethacin-loaded LS systems were prepared using different polysaccharides, namely guar gum (GG), pectin (PEC), and chitosan (CH), as carriers separately or in mixtures of different ratios of 1:3, 1:1, and 3:1. Liquisolid systems that displayed promising results concerning drug release rate in both pH 1.2 and pH 6.8 were compressed into tablets after the addition of the calculated amount of E-RL 100 and lubrication with magnesium stearate and talc in the ratio of 1:9. It was found that E-RL 100 improved the flowability and compressibility of all LS formulations. The release data revealed that all formulations succeeded in sustaining drug release over a period of 24 hours. Stability study indicated that the PEC-based LS system as well as its matrix tablets was stable over the period of storage (one year) and could provide a minimum shelf life of two years.

  13. Do trained practice nurses apply motivational interviewing techniques in primary care consultations?

    Science.gov (United States)

    Noordman, Janneke; van Lee, Inge; Nielen, Mark; Vlek, Hans; van Weijden, Trudy; van Dulmen, Sandra

    2012-12-01

    Reducing the prevalence of unhealthy lifestyle behaviour could positively influence health. Motivational interviewing (MI) is used to promote change in unhealthy lifestyle behaviour as part of primary or secondary prevention. Whether MI is actually applied as taught is unknown. Practice nurses' application of motivational interviewing in real-life primary care consultations was examined. Furthermore, we explored whether (and to what extent) practice nurses adjust their motivational interviewing skills to primary versus secondary prevention. Thirteen Dutch practice nurses from four general practices, all trained in motivational interviewing, participated; 117 adult patients visiting the practice nurses participated, and 117 practice nurse-patient consultations between June and December 2010 were videotaped. Motivational interviewing skills were rated by two observers using the Behaviour Change Counselling Index (BECCI). Data were analyzed using multilevel regression. Practice nurses use motivational interviewing techniques to some extent. Substantial variation was found between motivational interviewing items. No significant differences in the use of motivational interviewing between primary and secondary prevention were found. Motivational interviewing skills are not easily applicable in routine practice. Health care providers who want to acquire motivational interviewing skills should follow booster sessions after the first training. The training could be strengthened by video feedback and by feedback based on participating observation. A possible explanation for the lack of differences between the two types of prevention consultations may be that the gain from helping patients in primary prevention (preventing complications) equals the necessity of keeping the disease from aggravating in secondary prevention.

  14. Advanced Signal Processing Techniques Applied to Terahertz Inspections on Aerospace Foams

    Science.gov (United States)

    Trinh, Long Buu

    2009-01-01

    The space shuttle's external fuel tank is thermally insulated by the closed cell foams. However, natural voids composed of air and trapped gas are found as by-products when the foams are cured. Detection of foam voids and foam de-bonding is a formidable task owing to the small index of refraction contrast between foam and air (1.04:1). In the presence of a denser binding matrix agent that bonds two different foam materials, time-differentiation of filtered terahertz signals can be employed to magnify information prior to the main substrate reflections. In the absence of a matrix binder, de-convolution of the filtered time differential terahertz signals is performed to reduce the masking effects of antenna ringing. The goal is simply to increase probability of void detection through image enhancement and to determine the depth of the void.

  15. State and parameter estimation based on a nonlinear filter applied to an industrial process control of ethanol production

    Directory of Open Access Journals (Sweden)

    Meleiro L.A.C.

    2000-01-01

    Full Text Available Most advanced computer-aided control applications rely on good dynamic process models. The performance of the control system depends on the accuracy of the model used. Typically, such models are developed by conducting off-line identification experiments on the process. These identification experiments often result in input-output data with a small output signal-to-noise ratio, and using these data results in inaccurate model parameter estimates [1]. In this work, a multivariable adaptive self-tuning controller (STC) was developed for a biotechnological process application. Due to the difficulties involved in the measurements, or the excessive number of variables normally found in industrial processes, it is proposed to develop "soft sensors" based fundamentally on artificial neural networks (ANN). A second proposed approach used hybrid models, which result from the association of deterministic models (incorporating the available prior knowledge about the process being modeled) with artificial neural networks. In this case, kinetic parameters, which are very hard to determine accurately in real-time industrial plant operation, were obtained using ANN predictions. These methods are especially suitable for the identification of time-varying and nonlinear models. This advanced control strategy was applied to a fermentation process producing ethyl alcohol (ethanol) at industrial scale. The reaction rates considered for substrate consumption, cell growth, and ethanol production were validated with industrial data for typical operating conditions. The results obtained show that the procedure proposed in this work has great potential for application.

  16. Comparison of advanced DSP techniques for spectrally efficient Nyquist-WDM signal generation using digital FIR filters at transmitters based on higher-order modulation formats

    Science.gov (United States)

    Weng, Yi; Wang, Junyi; Pan, Zhongqi

    2016-02-01

    To support the ever-increasing demand for high-speed optical communications, Nyquist spectral shaping serves as a promising technique to improve spectral efficiency (SE) by generating near-rectangular spectra with negligible crosstalk and inter-symbol interference in wavelength-division-multiplexed (WDM) systems. Compared with specially-designed optical methods, DSP-based electrical filters are more flexible as they can generate different filter shapes and modulation formats. However, such transmitter-side pre-filtering approach is sensitive to the limited taps of finite-impulse-response (FIR) filter, for the complexity of the required DSP and digital-to-analog converter (DAC) is limited by the cost and power consumption of optical transponder. In this paper, we investigate the performance and complexity of transmitter-side FIR-based DSP with polarization-division-multiplexing (PDM) high-order quadrature-amplitude-modulation (QAM) formats. Our results show that Nyquist 64-QAM, 16-QAM and QPSK WDM signals can be sufficiently generated by digital FIR filters with 57, 37, and 17 taps respectively. Then we explore the effects of the required spectral pre-emphasis, bandwidth and resolution on the performance of Nyquist-WDM systems. To obtain negligible OSNR penalty with a roll-off factor of 0.1, two-channel-interleaved DAC requires a Gaussian electrical filter with the bandwidth of 0.4-0.6 times of the symbol rate for PDM-64QAM, 0.35-0.65 times for PDM-16QAM, and 0.3-0.8 times for PDM-QPSK, with required DAC resolutions as 8, 7, 6 bits correspondingly. As a tradeoff, PDM-64QAM can be a promising candidate for SE improvement in next-generation optical metro networks.
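
For reference, the kind of Nyquist pulse behind such near-rectangular spectra can be generated in a few lines. Below is a generic raised-cosine FIR design with 57 taps and roll-off 0.1, matching the numbers quoted for 64-QAM; the oversampling factor and all names are assumptions of this sketch, not taken from the paper:

```python
import numpy as np

def raised_cosine_taps(n_taps, sps, beta):
    """FIR taps of a raised-cosine (Nyquist) pulse.
    n_taps: odd filter length, sps: samples per symbol, beta: roll-off."""
    t = (np.arange(n_taps) - (n_taps - 1) / 2) / sps  # time in symbol periods
    num = np.sinc(t) * np.cos(np.pi * beta * t)
    den = 1.0 - (2.0 * beta * t) ** 2
    h = np.empty(n_taps)
    sing = np.isclose(den, 0.0)                       # points t = ±1/(2*beta)
    h[~sing] = num[~sing] / den[~sing]
    h[sing] = (beta / 2.0) * np.sin(np.pi / (2.0 * beta))  # L'Hopital limit
    return h

taps = raised_cosine_taps(57, 4, 0.1)                 # roll-off 0.1, 4 samples/symbol
```

At symbol-spaced instants the response is 1 at the center tap and 0 elsewhere, the zero-ISI property that lets adjacent Nyquist-WDM channels pack tightly; truncating to fewer taps degrades this ideal response, which is the complexity-versus-penalty trade-off the paper quantifies.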

  17. Investigation about the efficiency of the bioaugmentation technique when applied to diesel oil contaminated soils

    Directory of Open Access Journals (Sweden)

    Adriano Pinto Mariano

    2009-10-01

    Full Text Available This work investigated the efficiency of the bioaugmentation technique when applied to diesel oil contaminated soils collected at three service stations. Batch biodegradation experiments were carried out in Bartha biometer flasks (250 mL), used to measure the microbial CO2 production. Biodegradation efficiency was also measured by quantifying the concentration of hydrocarbons. In addition to the biodegradation experiments, the capability of the studied cultures and of the native microorganisms to biodegrade diesel oil purchased from a local service station was verified using a technique based on the redox indicator 2,6-dichlorophenol indophenol (DCPIP). Results obtained with this test showed that the inocula used in the biodegradation experiments were able to degrade the diesel oil, and the tests carried out with the native microorganisms indicated that these soils had a microbiota adapted to degrade the hydrocarbons. In general, no gain was obtained with the addition of microorganisms, and in some cases negative effects were observed in the biodegradation experiments.

  18. Influence of applied corneal endothelium image segmentation techniques on the clinical parameters.

    Science.gov (United States)

    Piorkowski, Adam; Nurzynska, Karolina; Gronkowska-Serafin, Jolanta; Selig, Bettina; Boldak, Cezary; Reska, Daniel

    2017-01-01

    The corneal endothelium state is verified on the basis of an in vivo specular microscope image, from which the shape and density of cells are exploited for data description. Due to the relatively low image quality resulting from the high magnification of living, non-stained tissue, both manual and automatic analysis of the data is a challenging task. Although many automatic or semi-automatic solutions have already been introduced, all of them are prone to inaccuracy. This work presents a comparison of four methods (fully automated or semi-automated) for endothelial cell segmentation, each of which represents a different approach to cell segmentation: fast robust stochastic watershed (FRSW), the KH method, an active contours solution (SNAKE), and TOPCON ImageNET. Moreover, an improvement framework is introduced which aims to unify precise cell border location in images pre-processed with differing techniques. Finally, the influence of the selected methods on clinical parameters is examined, both with and without application of the improvement framework. The experiments revealed that although the image segmentation approaches differ, the measures calculated for clinical parameters are in high accordance when CV (coefficient of variation) and CVSL (coefficient of variation of cell side length) are considered. Higher variation was noticed for the H (hexagonality) metric. Utilisation of the improvement framework assured better repeatability of precise endothelial cell border location between the methods while diminishing the dispersion of clinical parameter values calculated for such images. Finally, it was proven statistically that the image processing method applied for endothelial cell analysis does not influence the ability to differentiate between the images using medical parameters. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Guenter Tulip Filter Retrieval Experience: Predictors of Successful Retrieval

    International Nuclear Information System (INIS)

    Turba, Ulku Cenk; Arslan, Bulent; Meuse, Michael; Sabri, Saher; Macik, Barbara Gail; Hagspiel, Klaus D.; Matsumoto, Alan H.; Angle, John F.

    2010-01-01

    We report our experience with Guenter Tulip filter placement indications, retrievals, and procedural problems, with emphasis on alternative retrieval techniques. We identified 92 consecutive patients in whom a Guenter Tulip filter was placed and filter removal was attempted. We recorded patient demographic information, filter placement and retrieval indications, procedures, standard and nonstandard filter retrieval techniques, complications, and clinical outcomes. The mean time to retrieval differed significantly for patients who experienced filter strut penetration [F(1,90) = 8.55, p = 0.004]. The association between filter strut penetration of the IVC and successful retrieval was statistically significant (p = 0.043). The filter hook-IVC relationship correlated with successful retrieval. A modified guidewire loop technique was applied in 8 of 10 cases in which the hook appeared to penetrate the IVC wall and could not be engaged with a loop snare catheter, providing additional technical success in 6 of 8 (75%). The total filter retrieval success rate therefore increased from 88 to 95%. In conclusion, the Guenter Tulip filter has a high retrieval success rate with a low rate of complications. Additional maneuvers such as the guidewire loop method can be used to improve retrieval success rates when the filter hook is endothelialized.

  20. Shopping For Danger: E-commerce techniques applied to collaboration in cyber security

    Energy Technology Data Exchange (ETDEWEB)

    Bruce, Joseph R.; Fink, Glenn A.

    2012-05-24

    Collaboration among cyber security analysts is essential to a successful protection strategy on the Internet today, but it is uncommonly practiced or encouraged in operating environments. Barriers to productive collaboration often include data sensitivity, the time and effort to communicate, institutional policy, and protection of domain knowledge. We propose an ambient collaboration framework, Vulcan, designed to remove the barriers of time and effort and to mitigate the others. Vulcan automates data collection, collaborative filtering, and asynchronous dissemination, eliminating the effort implied by explicit collaboration among peers. We instrumented two analytic applications and performed a mock analysis session to build a dataset and test the output of the system.
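
The collaborative-filtering core of such a framework can be illustrated with a toy item-based scheme, cosine similarity over a usage matrix. The "analyst × data source" framing and all names are assumptions of this sketch, not details of Vulcan:

```python
import numpy as np

def item_similarity(usage):
    """Cosine similarity between the item columns of a user-item matrix."""
    norms = np.linalg.norm(usage, axis=0)
    norms[norms == 0] = 1.0               # guard against unused items
    unit = usage / norms
    return unit.T @ unit

def recommend(usage, user, k=1):
    """Score unseen items for `user` by similarity to items they used."""
    sim = item_similarity(usage)
    scores = sim @ usage[user]
    scores[usage[user] > 0] = -np.inf     # mask items already seen
    return np.argsort(scores)[::-1][:k]

# Toy analyst-by-data-source usage matrix (assumed data): analysts 0-1
# work one incident class, analysts 2-3 another
R = np.array([
    [5, 4, 0, 0],
    [4, 5, 0, 1],
    [0, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)
top = recommend(R, user=0, k=1)
```

Here analyst 0's usage overlaps slightly with analyst 1, whose history points at source 3, so source 3 is surfaced, the "shopping recommendation" dynamic the title alludes to, applied ambiently instead of through explicit peer communication.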

  1. Trends in analytical techniques applied to particulate matter characterization: A critical review of fundaments and applications.

    Science.gov (United States)

    Galvão, Elson Silva; Santos, Jane Meri; Lima, Ana Teresa; Reis, Neyval Costa; Orlando, Marcos Tadeu D'Azeredo; Stuetz, Richard Michael

    2018-05-01

    Epidemiological studies have shown the association of airborne particulate matter (PM) size and chemical composition with health problems affecting the cardiorespiratory and central nervous systems. PM also acts as cloud condensation nuclei (CCN) or ice nuclei (IN), taking part in the cloud formation process, and can therefore impact the climate. There are several works using different analytical techniques for PM chemical and physical characterization to supply information to source apportionment models that help environmental agencies assess accountability for damages. Despite the numerous analytical techniques described in the literature for PM characterization, laboratories are normally limited to their in-house techniques, which raises the question of whether a given technique is suitable for the purpose of a specific experimental work. The aim of this work is to summarize the main available technologies for PM characterization, serving as a guide for readers to find the most appropriate technique(s) for their investigation. Elemental analysis techniques (atomic-spectrometry-based and X-ray-based), organic and carbonaceous techniques, and surface analysis techniques are discussed, illustrating their main features as well as their advantages and drawbacks. We also discuss trends in the analytical techniques used over the last two decades. The choice among techniques is a function of a number of parameters, such as the relevant particle physical properties, sampling and measuring time, access to available facilities, and the costs associated with equipment acquisition, among other considerations. An analytical guide map is presented as a guideline for choosing the most appropriate technique for the analytical information required. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Photothermal ablation with the excimer laser sheath technique for embedded inferior vena cava filter removal: initial results from a prospective study.

    Science.gov (United States)

    Kuo, William T; Odegaard, Justin I; Louie, John D; Sze, Daniel Y; Unver, Kamil; Kothary, Nishita; Rosenberg, Jarrett K; Hovsepian, David M; Hwang, Gloria L; Hofmann, Lawrence V

    2011-06-01

    To evaluate the safety and effectiveness of the excimer laser sheath technique for removing embedded inferior vena cava (IVC) filters. Over 12 months, 25 consecutive patients undergoing attempted IVC filter retrieval with a laser-assisted sheath technique were prospectively enrolled into an institutional review board-approved study registry. There were 10 men and 15 women (mean age 50 years, range 20-76 years); 18 (72%) of 25 patients were referred from an outside hospital. Indications for retrieval included symptomatic filter-related acute caval thrombosis (with or without acute pulmonary embolism), chronic IVC occlusion, and bowel penetration. Retrieval was also performed to remove risks from prolonged implantation and potentially to eliminate need for lifelong anticoagulation. After failure of standard methods, controlled photothermal ablation of filter-adherent tissue with a Spectranetics laser sheath and CVX-300 laser system was performed. All patients were evaluated with cavography, and specimens were sent for histologic analysis. Laser-assisted retrieval was successful in 24 (96%) of 25 patients as follows: 11 Günther Tulip (mean 375 days, range 127-882 days), 4 Celect (mean 387 days, range 332-440 days), 2 Option (mean 215 days, range 100-330 days), 4 OPTEASE (mean 387 days, range 71-749 days; 1 failed 188 days), 2 TRAPEASE (mean 871 days, range 187-1,555 days), and 2 Greenfield (mean 12.8 years, range 7.2-18.3 years). There was one (4%) major complication (acute thrombus, treated with thrombolysis), three (12%) minor complications (small extravasation, self-limited), and one adverse event (coagulopathic retroperitoneal hemorrhage) at follow-up (mean 126 days, range 13-302 days). Photothermal ablation of filter-adherent tissue was histologically confirmed in 23 (92%) of 25 patients. The laser-assisted sheath technique appears to be a safe and effective tool for retrieving embedded IVC filters, including permanent types, with implantation ranging from

  3. Machine Learning Techniques Applied to Profile Mobile Banking Users in India

    OpenAIRE

    M. Carr; V. Ravi; G. Sridharan Reddy; D. Veranna

    2013-01-01

    This paper profiles mobile banking users using machine learning techniques, viz. Decision Tree, Logistic Regression, Multilayer Perceptron, and SVM, to test a research model with fourteen independent variables and a dependent variable (adoption). A survey was conducted and the results were analysed using these techniques. Using Decision Trees, the profile of the mobile banking adopter was identified. Comparing different machine learning techniques it was found that Decision Trees out...

  4. Applying data mining techniques to medical time series: an empirical case study in electroencephalography and stabilometry

    Directory of Open Access Journals (Sweden)

    A. Anguera

    2016-01-01

    This paper illustrates the application of different knowledge discovery techniques for the purposes of classification within the above domains. The accuracy of this application for the two classes considered in each case is 99.86% and 98.11% for epilepsy diagnosis in the electroencephalography (EEG) domain and 99.4% and 99.1% for early-age sports talent classification in the stabilometry domain. The KDD techniques achieve better results than other traditional neural network-based classification techniques.

  5. Biomechanical study of the funnel technique applied in thoracic pedicle screw replacement.

    Science.gov (United States)

    Huang, Yi-Jiang; Peng, Mao-Xiu; He, Shao-Qi; Liu, Liang-Le; Dai, Ming-Hai; Tang, Chenxuan

    2014-09-01

    Funnel technique is a method used for the insertion of screws into the thoracic pedicle. To evaluate the biomechanical characteristics of thoracic pedicle screw placement using the Funnel technique, and to provide a biomechanical basis for clinical application of this technology, 14 functional spinal units (T6 to T10) were selected from thoracic spine specimens of 14 fresh adult cadavers and randomly divided into two groups: a Funnel technique group (n = 7) and a Magerl technique group (n = 7). The displacement-stiffness and pull-out strength in all positions were tested and compared. Both fixed groups were significantly stiffer than the intact state (P < 0.05). The mean pull-out strength in the Funnel technique group (789.09 ± 27.33) was lower than that in the Magerl technique group (P < 0.05). The Funnel technique for the insertion point of posterior bone is a safe and accurate technique for pedicle screw placement. It exhibited no effects on the stiffness of the spinal column, but decreased the pull-out strength of the pedicle screw. Therefore, the funnel technique in the thoracic spine affords an alternative to standard screw placement.

  6. Element selective detection of molecular species applying chromatographic techniques and diode laser atomic absorption spectrometry.

    Science.gov (United States)

    Kunze, K; Zybin, A; Koch, J; Franzke, J; Miclea, M; Niemax, K

    2004-12-01

    Tunable diode laser atomic absorption spectroscopy (DLAAS) combined with separation techniques and atomization in plasmas and flames is presented as a powerful method for analysis of molecular species. The analytical figures of merit of the technique are demonstrated by the measurement of Cr(VI) and Mn compounds, as well as molecular species including halogen atoms, hydrogen, carbon and sulfur.

  7. Radiosurgery with flattening-filter-free techniques in the treatment of brain metastases. Plan comparison and early clinical evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Rieber, J.; Tonndorf-Martini, E.; Schramm, O.; Rhein, B.; Stefanowicz, S.; Lindel, K.; Debus, J.; Rieken, S. [University Hospital Heidelberg, Department of Radiation Oncology, Heidelberg (Germany); Heidelberg Institute of Radiation Oncology, Heidelberg (Germany); Kappes, J. [Heidelberg University, Translational Research Unit, Thoraxklinik, Heidelberg (Germany); Heidelberg University, Department of Pneumology, Thoraxklinik, Heidelberg (Germany); Member of the German Centre for Lung Research (DZL), Translational Lung Research Centre Heidelberg (TLRC-H), Heidelberg (Germany); Hoffmann, H. [Heidelberg University, Translational Research Unit, Thoraxklinik, Heidelberg (Germany); Heidelberg University, Department of Thoracic Surgery, Thoraxklinik, Heidelberg (Germany); Member of the German Centre for Lung Research (DZL), Translational Lung Research Centre Heidelberg (TLRC-H), Heidelberg (Germany)

    2016-11-15

    Radiosurgical treatment of brain metastases is well established in daily clinical routine. Utilization of flattening-filter-free beams (FFF) may allow for more rapid delivery of treatment doses and improve clinical comfort. Hence, we compared plan quality and efficiency of radiosurgery in FFF mode to FF techniques. Between November 2014 and June 2015, 21 consecutive patients with 25 brain metastases were treated with stereotactic radiosurgery (SRS) in FFF mode. Brain metastases received dose-fractionation schedules of 1 x 20 Gy or 1 x 18 Gy, delivered to the conformally enclosing 80 % isodose. Three patients with critically localized or large (>3 cm) brain metastases were treated with 6 x 5 Gy. Plan quality and efficiency were evaluated by analyzing conformity, dose gradients, dose to healthy brain tissue, treatment delivery time, and number of monitor units. FFF plans were compared to those using the FF method, and early clinical outcome and toxicity were assessed. FFF mode resulted in significant reductions in beam-on time (p < 0.001) and mean brain dose (p = 0.001) relative to FF-mode comparison plans. Furthermore, significant improvements in dose gradients and sharper dose falloffs were found for SRS in FFF mode (-1.1 %, -29.6 %; p ≤ 0.003), but conformity was slightly superior in SRS in FF mode (-1.3 %; p = 0.001). With a median follow-up time of 5.1 months, 6-month overall survival was 63.3 %. Local control was observed in 24 of 25 brain metastases (96 %). SRS in FFF mode is time efficient and provides similar plan quality with the opportunity of slightly reduced dose exposure to healthy brain tissue when compared to SRS in FF mode. Clinical outcomes appear promising and show only modest treatment-related toxicity. (orig.) [German abstract, translated:] Radiosurgical treatment (SRS) of brain metastases is widely performed in clinical routine. The additional use of flattening-filter-free irradiation techniques (FFF) can reduce the treatment time

  8. Bias aware Kalman filters

    DEFF Research Database (Denmark)

    Drecourt, J.-P.; Madsen, H.; Rosbjerg, Dan

    2006-01-01

    This paper reviews two different approaches that have been proposed to tackle the problems of model bias with the Kalman filter: the use of a colored noise model and the implementation of a separate bias filter. Both filters are implemented with and without feedback of the bias into the model state. The colored noise filter formulation is extended to correct both time correlated and uncorrelated model error components. A more stable version of the separate filter without feedback is presented. The filters are implemented in an ensemble framework using Latin hypercube sampling. The techniques are illustrated on a simple one-dimensional groundwater problem. The results show that the presented filters outperform the standard Kalman filter and that the implementations with bias feedback work in more general conditions than the implementations without feedback.
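
The augmented-state idea behind such bias-aware filters can be sketched in a few lines. This is a generic 1D illustration with invented numbers, not the paper's groundwater implementation: the model bias is appended to the state vector and fed back into the state propagation (the "with feedback" variant).

```python
import numpy as np

# Hypothetical sketch of a bias-aware Kalman filter with bias feedback:
# the unknown model bias b is appended to the state and estimated jointly.
def bias_aware_kf(observations, a=0.9, q=0.01, qb=1e-4, r=0.25):
    F = np.array([[a, 1.0], [0.0, 1.0]])   # x_{k+1} = a*x_k + b_k; b is a random walk
    H = np.array([[1.0, 0.0]])             # only x is observed
    Q = np.diag([q, qb])
    z, P = np.zeros(2), np.eye(2)
    estimates = []
    for y in observations:
        z = F @ z                          # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                # update
        K = (P @ H.T) / S
        z = z + (K * (y - H @ z)).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(z.copy())
    return np.array(estimates)

# Synthetic truth: an AR(1) state driven by a constant unmodelled bias of 0.5
rng = np.random.default_rng(0)
x_true = np.zeros(300)
for k in range(1, 300):
    x_true[k] = 0.9 * x_true[k - 1] + 0.5
obs = x_true + rng.normal(0.0, 0.5, 300)
est = bias_aware_kf(obs)   # est[-1, 1] should approach the true bias, 0.5
```

A plain Kalman filter on the biased model would leave a persistent residual; here the second state component absorbs it.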

  9. Técnicas moleculares aplicadas à microbiologia de alimentos = Molecular techniques applied to food microbiology

    Directory of Open Access Journals (Sweden)

    Eliezer Ávila Gandra

    2008-01-01

    Full Text Available Beginning in the 1980s, molecular techniques came into use as an alternative to the phenotypic methods traditionally employed in food microbiology, a substitution accelerated by the advent of the polymerase chain reaction (PCR). This article reviews the main molecular techniques used as tools in food microbiology, from those developed earliest, such as plasmid profile analysis, to contemporary techniques such as real-time PCR, discussing their characteristics, advantages and disadvantages, and evaluating their potential to overcome the limitations of traditional techniques.

  10. Photothermal techniques applied to the study of thermal properties in biodegradable films

    Science.gov (United States)

    San Martín-Martínez, E.; Aguilar-Méndez, M. A.; Cruz-Orea, A.; García-Quiroz, A.

    2008-01-01

    The objective of the present work was to determine the thermal diffusivity and effusivity of biodegradable films by using photothermal techniques. The thermal diffusivity was studied by using the open photoacoustic cell technique, while the thermal effusivity was obtained by the photopyroelectric technique in a front detection configuration. The films were elaborated from mixtures of low density polyethylene (LDPE) and corn starch. The results showed that at high moisture values, the thermal diffusivity increased as the starch concentration in the film was raised; however, the behavior differed at low moisture conditions … (low extrusion moisture conditions, 6.55%). As the moisture and starch concentration in the films were increased, the thermal effusivity diminished.

  11. Quantitative thoracic CT techniques in adults: can they be applied in the pediatric population?

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Soon Ho [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Goo, Jin Mo [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University College of Medicine, Cancer Research Institute, Jongno-gu, Seoul (Korea, Republic of); Goo, Hyun Woo [University of Ulsan College of Medicine, Department of Radiology and Research Institute of Radiology, Asan Medical Center, Seoul (Korea, Republic of)

    2013-03-15

    With the rapid evolution of the multidetector row CT technique, quantitative CT has started to be used in clinical studies for revealing a heterogeneous entity of airflow limitation in chronic obstructive pulmonary disease that is caused by a combination of lung parenchymal destruction and remodeling of the small airways in adults. There is growing evidence of a good correlation between quantitative CT findings and pathological findings, pulmonary function test results and other clinical parameters. This article provides an overview of current quantitative thoracic CT techniques used in adults, and how to translate these CT techniques to the pediatric population. (orig.)

  12. Calorimetric techniques applied to the thermodynamic study of interactions between proteins and polysaccharides

    Directory of Open Access Journals (Sweden)

    Monique Barreto Santos

    2016-08-01

    Full Text Available ABSTRACT: The interactions between biological macromolecules have been important for biotechnology, but further understanding is needed to maximize the utility of these interactions. Calorimetric techniques provide information regarding these interactions through the thermal energy that is produced or consumed. Notable techniques include differential scanning calorimetry, which generates a thermodynamic profile from temperature scanning, and isothermal titration calorimetry, which provides the thermodynamic parameters directly related to the interaction. This review describes how calorimetric techniques can be used to study interactions between proteins and polysaccharides, providing valuable insight into the thermodynamics of their interaction.
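
As a numerical aside (with illustrative values, not data from the review): isothermal titration calorimetry yields the binding constant K and enthalpy change ΔH directly, and the remaining thermodynamic parameters follow from ΔG = −RT ln K and ΔG = ΔH − TΔS.

```python
import math

# Illustrative ITC arithmetic; K and dH below are invented example values.
R = 8.314        # gas constant, J/(mol*K)
T = 298.15       # temperature, K
K = 1.0e6        # binding constant, 1/M (hypothetical)
dH = -40.0e3     # binding enthalpy, J/mol (hypothetical)

dG = -R * T * math.log(K)   # Gibbs free energy of binding, ~ -34.2 kJ/mol
dS = (dH - dG) / T          # binding entropy, J/(mol*K)
```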

  13. Filtering observations without the initial guess

    Science.gov (United States)

    Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.

    2017-12-01

    Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific usage. The standard analysis and filtering techniques based on the Bayesian principle require an "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom fully known in practice, and best-guess mean values (e.g., "climatology" or "background" data if available) accompanied by some arbitrarily set covariance values are often used in lieu. It is therefore desirable to be able to exploit efficient (time sequential) Bayesian algorithms like the Kalman filter while not being forced to provide a prior distribution (i.e., initial mean and covariance). An example of this is the estimation of the terrestrial reference frame (TRF), where the requirement for numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We will present the Information Filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (the marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for the task of filtering observational data in general cases where a prior assumption on the initial estimate is not available and/or desirable. For application to data assimilation problems, reduced-order approximations of both the information filter and square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model. Such approximation approaches are also briefed in the
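
The central point, that the information form needs no initial guess, can be illustrated with a minimal static-estimation sketch (a toy example, not the TRF implementation): the information matrix Y = P⁻¹ and information vector y = Yx simply start at zero, which encodes infinite initial uncertainty.

```python
import numpy as np

# Toy information-filter update for a static parameter; Y and y start at
# zero, i.e. no prior mean or covariance is ever specified.
def information_filter(measurements, H, R):
    n = H.shape[1]
    Y = np.zeros((n, n))            # zero information = unbounded covariance
    y = np.zeros(n)
    Rinv = np.linalg.inv(R)
    for z in measurements:
        Y += H.T @ Rinv @ H         # information accumulates additively
        y += H.T @ Rinv @ z
    return np.linalg.solve(Y, y)    # recover the state once Y is invertible

# Example: a 2-vector observed directly through unit-variance noise
rng = np.random.default_rng(1)
x_true = np.array([3.0, -1.0])
H, R = np.eye(2), np.eye(2)
zs = [x_true + rng.normal(0.0, 1.0, 2) for _ in range(500)]
x_hat = information_filter(zs, H, R)   # close to x_true, with no prior supplied
```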

  14. Applying of Reliability Techniques and Expert Systems in Management of Radioactive Accidents

    International Nuclear Information System (INIS)

    Aldaihan, S.; Alhbaib, A.; Alrushudi, S.; Karazaitri, C.

    1998-01-01

    Accidents involving radioactive exposure vary in nature and size, which makes them complex situations for radiation protection agencies, or any responsible authority, to handle. The situation becomes worse with the introduction of advanced technology of high complexity that provides the operator with huge amounts of information about the system being operated. This paper discusses the application of reliability techniques in radioactive risk management. The event tree technique from the nuclear field is described, as well as two other techniques from non-nuclear fields: Hazard and Operability studies and Quality Function Deployment. The objective is to show the importance and applicability of these techniques in radiation risk management. Finally, expert systems in the field of accident management are explored and classified according to their applications
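
The event tree technique mentioned above reduces to simple branch arithmetic; a hedged toy example (all frequencies and probabilities invented) looks like:

```python
# Toy event tree: an initiating-event frequency is split across branches by
# the conditional failure probabilities of successive safety barriers.
init_freq = 1e-2           # initiating events per year (invented)
p_fail_detection = 0.1     # probability detection fails (invented)
p_fail_containment = 0.05  # probability containment fails (invented)

outcomes = {
    "detected": init_freq * (1 - p_fail_detection),
    "undetected_but_contained": init_freq * p_fail_detection * (1 - p_fail_containment),
    "undetected_and_released": init_freq * p_fail_detection * p_fail_containment,
}
# The branch frequencies always sum back to the initiating frequency.
```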

  15. Applying lean techniques in the delivery of transportation infrastructure construction projects.

    Science.gov (United States)

    2011-07-01

    It is well documented that construction productivity has been declining since the 1960s. Additionally, studies have shown that only 40% of construction workers' time is considered to be value-added work. Interest in the use of Lean techniques ...

  16. Error analysis of the phase-shifting technique when applied to shadow moire

    International Nuclear Information System (INIS)

    Han, Changwoon; Han, Bongtae

    2006-01-01

    An exact solution for the intensity distribution of shadow moire fringes produced by a broad spectrum light is presented. A mathematical study quantifies errors in fractional fringe orders determined by the phase-shifting technique, and its validity is corroborated experimentally. The errors vary cyclically as the distance between the reference grating and the specimen increases. The amplitude of the maximum error is approximately 0.017 fringe, which defines the theoretical limit of resolution enhancement offered by the phase-shifting technique
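
For context, the phase-shifting technique whose errors are quantified above recovers fractional fringe orders from (typically) four intensity maps taken at 90° phase increments; a noise-free sketch of the standard four-step algorithm:

```python
import numpy as np

# Four-step phase-shifting: I_k = A + B*cos(phi + k*pi/2), k = 0..3.
# The phase (and hence the fractional fringe order phi / 2*pi) follows
# from an arctangent of intensity differences.
def four_step_phase(i1, i2, i3, i4):
    return np.arctan2(i4 - i2, i1 - i3)

phi_true = 1.2                      # radians, arbitrary test value
A, B = 5.0, 2.0                     # background and modulation intensity
frames = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = four_step_phase(*frames)      # recovers phi_true exactly (no noise)
fractional_order = phi / (2 * np.pi)
```

The cyclic errors described in the abstract arise when the real intensities deviate from this ideal cosine model.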

  17. Applying machine learning and image feature extraction techniques to the problem of cerebral aneurysm rupture

    Directory of Open Access Journals (Sweden)

    Steren Chabert

    2017-01-01

    Full Text Available Cerebral aneurysm is a cerebrovascular disorder characterized by a bulging in a weak area of the wall of an artery that supplies blood to the brain. It is important to understand the mechanisms leading to the formation of aneurysms, their growth and, more importantly, their rupture. The purpose of this study is to assess the impact on aneurysm rupture of combinations of different parameters, instead of focusing on only one factor at a time as is frequently done in the literature, using machine learning and feature extraction techniques. This question is relevant in the context of the complex decision physicians face when choosing which therapy to apply, as each intervention bears its own risks and requires a complex ensemble of resources (human resources, operating rooms, etc.) in hospitals that are always under very high workload. This project was conceived by our working team, composed of an interventional neuroradiologist, radiologic technologists, informatics engineers and biomedical engineers from the Valparaíso public hospital, Hospital Carlos van Buren, and from Universidad de Valparaíso – Facultad de Ingeniería and Facultad de Medicina. This team has worked together over the last few years and is now participating in the implementation of an "interdisciplinary platform for innovation in health", as part of a larger project led by Universidad de Valparaíso (PMI UVA1402). It is worth emphasizing that this project is made feasible by the existence of this network between physicians and engineers, and by the existence of data already registered in an orderly manner, structured and recorded in digital format. The present proposal arises from descriptions in the current literature that existing indicators, whether based on the morphological description of the aneurysm or on the characterization of biomechanical or other factors, have been shown not to provide sufficient information in order

  18. Water spray cooling technique applied on a photovoltaic panel: The performance response

    International Nuclear Information System (INIS)

    Nižetić, S.; Čoko, D.; Yadav, A.; Grubišić-Čabo, F.

    2016-01-01

    Highlights: • An experimental study was conducted on a monocrystalline photovoltaic panel (PV). • A water spray cooling technique was implemented to determine PV panel response. • The experimental results showed a favorable cooling effect on the panel performance. • A feasibility aspect of the water spray cooling technique was also proven. - Abstract: This paper presents an alternative cooling technique for photovoltaic (PV) panels in which a water spray is applied over the panel surfaces. The technique is alternative in the sense that both sides of the PV panel were cooled simultaneously, in order to investigate the total water spray cooling effect on PV panel performance in circumstances of peak solar irradiation levels. A specific experimental setup was elaborated in detail and the developed cooling system for the PV panel was tested in a geographical location with a typical Mediterranean climate. The experimental results show that it is possible to achieve a maximal total increase of 16.3% (effective 7.7%) in electric power output and a total increase of 14.1% (effective 5.9%) in PV panel electrical efficiency by using the proposed cooling technique in circumstances of peak solar irradiation. Furthermore, it was also possible to decrease the panel temperature from an average 54 °C (non-cooled PV panel) to 24 °C in the case of simultaneous front and backside PV panel cooling. Economic feasibility was also determined for the proposed water spray cooling technique, a further advantage of which is the self-cleaning effect on the PV panel's surface, which additionally boosts the average delivered electricity.

  19. Experimental demonstration of a DSP-based cross-channel interference cancellation technique for application in digital filter multiple access PONs.

    Science.gov (United States)

    Al-Rawachy, E; Giddings, R P; Tang, J M

    2017-02-20

    A DSP-based cross-channel interference cancellation (CCIC) technique, which is free of initial conditions, converges quickly, and is independent of signal modulation format, is experimentally demonstrated in a two-channel point-to-point digital filter multiple access (DFMA) PON system based on intensity modulation and direct detection (IMDD). The CCIC-induced transmission performance improvements under various system conditions are fully investigated for the first time. It is shown that with only one iteration the CCIC technique can achieve a reduction in individual OFDM subcarrier BERs of more than 1000 times, an increase in transmission capacity of as much as 19 times, and an increase in optical power budget of as much as 3.5 dB. The CCIC technique thus has the potential to drastically improve the transmission performance of DFMA PONs.

  20. Savitzky-Golay coupled with digital bandpass filtering as a pre-processing technique in the quantitative analysis of glucose from near infrared spectra.

    Science.gov (United States)

    Patchava, Krishna Chaitanya; Alrezj, Osamah; Benaissa, Mohammed; Behairy, Hatim

    2016-08-01

    This paper proposes a novel pre-processing method, based on combining bandpass with Savitzky-Golay filtering, to further improve the prediction performance of the linear calibration models Principal Component Regression (PCR) and Partial Least Squares Regression (PLSR) in near infrared spectroscopy. The proposed method is compared to the highly efficient RReliefF pre-processing technique for further evaluation. The developed calibration models were validated by predicting the glucose concentration from near infrared spectra of a mixture of glucose and human serum albumin in a phosphate buffer solution. The results show that the proposed technique improves the prediction performance of both the PCR and PLSR models and achieves better results than the RReliefF technique.
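
A rough, pure-NumPy sketch of this kind of two-stage pre-processing. The window length, polynomial order, and band edges are illustrative assumptions, and a simple FFT mask stands in for the paper's digital bandpass design:

```python
import numpy as np

# Savitzky-Golay smoothing via a least-squares polynomial window, followed by
# a crude FFT-mask bandpass standing in for a designed digital bandpass filter.
def savgol_coeffs(window=11, polyorder=3):
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, polyorder + 1, increasing=True)
    # Row 0 of the pseudoinverse gives the fitted value at the window centre
    return np.linalg.pinv(A)[0]

def preprocess(signal, low=0.01, high=0.2):
    c = savgol_coeffs()
    smoothed = np.convolve(signal, c[::-1], mode="same")
    spec = np.fft.rfft(smoothed)
    freqs = np.fft.rfftfreq(signal.size, d=1.0)
    spec[(freqs < low) | (freqs > high)] = 0.0     # keep only the passband
    return np.fft.irfft(spec, n=signal.size)

rng = np.random.default_rng(2)
t = np.arange(512)
clean_component = np.sin(2 * np.pi * 0.05 * t)     # in-band "analyte" signal
raw = clean_component + 0.3 * rng.normal(size=512)
filtered = preprocess(raw)                         # out-of-band noise removed
```

In practice one would use a library implementation (e.g. SciPy's `savgol_filter` and a designed IIR/FIR bandpass) rather than this hand-rolled version.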

  1. Aspect-Oriented Programming using Composition Filters

    NARCIS (Netherlands)

    Aksit, Mehmet; Tekinerdogan, B.

    1998-01-01

    Software engineers may experience problems in modeling certain aspects while applying object-oriented techniques [4, 10, 11]. Composition-Filters are capable of expressing various different kinds of aspects in a uniform manner. These aspects are, for example, inheritance and delegation [1] and
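
As a loose analogy only (plain Python, not the actual Composition-Filters notation): a filter object that intercepts incoming messages before they reach the inner object can express an aspect such as call logging uniformly, without touching the target class:

```python
# A "filter" wrapper intercepting method-call messages on their way to the
# inner object; the intercepted aspect here is logging, but delegation or
# access control fit the same shape.
class LoggingFilter:
    def __init__(self, inner):
        self.inner = inner
        self.log = []

    def __getattr__(self, name):
        target = getattr(self.inner, name)
        def wrapped(*args, **kwargs):
            self.log.append(name)          # the cross-cutting aspect
            return target(*args, **kwargs)
        return wrapped

class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, n):
        self.balance += n

acct = LoggingFilter(Account())
acct.deposit(10)   # the call passes through the filter and is logged
```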

  2. Statistical learning techniques applied to epidemiology: a simulated case-control comparison study with logistic regression

    Directory of Open Access Journals (Sweden)

    Land Walker H

    2011-01-01

    Full Text Available Abstract. Background: When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL) techniques with kernels are capable of addressing nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information embedding characteristics and separation boundaries produced by a specific SL technique with logistic regression (LR) modeling representing a parametric approach. The SL technique was comprised of a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. Results: The SL approach is capable of generating odds ratios for main effects and risk factor interactions that better capture nonlinear relationships between exposure variables and outcome in comparison with LR. Conclusions: The integration of SL methods in epidemiology may improve both the understanding and interpretation of complex exposure/disease relationships.

  3. Statistical learning techniques applied to epidemiology: a simulated case-control comparison study with logistic regression.

    Science.gov (United States)

    Heine, John J; Land, Walker H; Egan, Kathleen M

    2011-01-27

    When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL) techniques with kernels are capable of addressing nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information embedding characteristics and separation boundaries produced by a specific SL technique with logistic regression (LR) modeling representing a parametric approach. The SL technique was comprised of a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. The SL approach is capable of generating odds ratios for main effects and risk factor interactions that better capture nonlinear relationships between exposure variables and outcome in comparison with LR. The integration of SL methods in epidemiology may improve both the understanding and interpretation of complex exposure/disease relationships.
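
The epidemiologic interpretation referred to above rests on the fact that in logistic regression the exponentiated coefficient is an odds ratio. A self-contained toy illustration (simulated data and a plain gradient-ascent fit, not the study's SL method):

```python
import numpy as np

# Simulate a binary exposure with a true odds ratio of 2, fit a logistic
# model by gradient ascent, and recover the odds ratio as exp(coefficient).
rng = np.random.default_rng(3)
n = 5000
exposure = rng.binomial(1, 0.4, n).astype(float)
logit = -1.0 + np.log(2.0) * exposure        # true log-odds
p = 1.0 / (1.0 + np.exp(-logit))
outcome = rng.binomial(1, p)

X = np.column_stack([np.ones(n), exposure])  # intercept + exposure
beta = np.zeros(2)
for _ in range(2000):                        # maximize the log-likelihood
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (outcome - mu) / n

odds_ratio = np.exp(beta[1])                 # should be close to 2
```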

  4. Synchrotron and Simulations Techniques Applied to Problems in Materials Science: Catalysts and Azul Maya Pigments

    International Nuclear Information System (INIS)

    Chianelli, R.

    2005-01-01

    Development of synchrotron techniques for the determination of the structure of disordered, amorphous and surface materials has exploded over the past twenty years due to the increasing availability of high flux synchrotron radiation and the continuing development of increasingly powerful synchrotron techniques. These techniques are available to materials scientists who are not necessarily synchrotron scientists through interaction with effective user communities that exist at synchrotrons such as the Stanford Synchrotron Radiation Laboratory (SSRL). In this article we review the application of multiple synchrotron characterization techniques to two classes of materials defined as "surface compounds." One class of surface compounds comprises materials like MoS2-xCx that are widely used petroleum catalysts, used to improve the environmental properties of transportation fuels. These compounds may be viewed as "sulfide supported carbides" in their catalytically active states. The second class of "surface compounds" is the "Maya Blue" pigments, which are based on technology created by the ancient Maya. These compounds are organic/inorganic "surface complexes" consisting of the dye indigo and palygorskite, a common clay. The identification of both surface compounds relies on the application of synchrotron techniques as described in this report

  5. Synchrotron and Simulations Techniques Applied to Problems in Materials Science: Catalysts and Azul Maya Pigments

    Energy Technology Data Exchange (ETDEWEB)

    Chianelli, R.

    2005-01-12

    Development of synchrotron techniques for the determination of the structure of disordered, amorphous and surface materials has exploded over the past twenty years due to the increasing availability of high flux synchrotron radiation and the continuing development of increasingly powerful synchrotron techniques. These techniques are available to materials scientists who are not necessarily synchrotron scientists through interaction with effective user communities that exist at synchrotrons such as the Stanford Synchrotron Radiation Laboratory (SSRL). In this article we review the application of multiple synchrotron characterization techniques to two classes of materials defined as "surface compounds." One class of surface compounds comprises materials like MoS2-xCx that are widely used petroleum catalysts, used to improve the environmental properties of transportation fuels. These compounds may be viewed as "sulfide supported carbides" in their catalytically active states. The second class of "surface compounds" is the "Maya Blue" pigments, which are based on technology created by the ancient Maya. These compounds are organic/inorganic "surface complexes" consisting of the dye indigo and palygorskite, a common clay. The identification of both surface compounds relies on the application of synchrotron techniques as described in this report.

  6. Unsharp masking technique as a preprocessing filter for improvement of 3D-CT image of bony structure in the maxillofacial region

    International Nuclear Information System (INIS)

    Harada, Takuya; Nishikawa, Keiichi; Kuroyanagi, Kinya

    1998-01-01

    We evaluated the usefulness of the unsharp masking technique as a preprocessing filter to improve 3D-CT images of bony structure in the maxillofacial region. The effect of the unsharp masking technique with several combinations of mask size and weighting factor on image resolution was investigated using a spatial frequency phantom made of bone-equivalent material. The 3D-CT images were obtained with scans perpendicular to and parallel to the phantom plates. The contrast transfer function (CTF) and the full width at half maximum (FWHM) of each spatial frequency component were measured. The FWHM was expressed as a ratio against the actual thickness of phantom plate. The effect on pseudoforamina was assessed using sliced CT images obtained in clinical bony 3D-CT examinations. The effect of the unsharp masking technique on image quality was also visually evaluated using five clinical fracture cases. CTFs did not change. FWHM ratios of original 3D-CT images were smaller than 1.0, regardless of the scanning direction. Those in scans perpendicular to the phantom plates were not changed by the unsharp masking technique. Those in parallel scanning were increased by mask size and weighting factor. The area of pseudoforamina decreased with increases in mask size and weighting factor. The combination of mask size 3 x 3 pixels and weighting factor 5 was optimal. Visual evaluation indicated that preprocessing with the unsharp masking technique improved the image quality of the 3D-CT images. The unsharp masking technique is useful as a preprocessing filter to improve the 3D-CT image of bony structure in the maxillofacial region. (author)
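
A minimal pure-NumPy sketch of the unsharp masking operation evaluated above, using the reported optimal settings (3 × 3 mask, weighting factor 5); the step-edge input is illustrative, not CT data:

```python
import numpy as np

# Unsharp mask: blur with a mean filter of the given mask size, then add
# back the weighted difference between the original and the blur.
def unsharp_mask(image, mask_size=3, weight=5.0):
    pad = mask_size // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.zeros_like(image, dtype=float)
    for dy in range(mask_size):
        for dx in range(mask_size):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= mask_size ** 2
    return image + weight * (image - blurred)

# A step edge: sharpening produces over/undershoot that steepens the edge
img = np.zeros((8, 8))
img[:, 4:] = 100.0
sharp = unsharp_mask(img)
```

Larger masks and weights amplify this edge enhancement, which is consistent with the reported increase in FWHM ratios and the suppression of pseudoforamina.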

  7. Applied techniques for high bandwidth data transfers across wide area networks

    International Nuclear Information System (INIS)

    Lee, Jason; Gunter, Dan; Tierney, Brian; Allcock, Bill; Bester, Joe; Bresnahan, John; Tuecke, Steve

    2001-01-01

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. Ensuring that the data is there in time for the computation in today's Internet is a massive problem. From our work developing a scalable distributed network cache, we have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). In this paper, we discuss several hardware and software design techniques and issues, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. We also describe results from two applications using these techniques, which were obtained at the Supercomputing 2000 conference

  8. A Survey on Data Mining Techniques Applied to Electricity-Related Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Francisco Martínez-Álvarez

    2015-11-01

    Full Text Available Data mining has become an essential tool during the last decade for analyzing large sets of data. The variety of techniques it includes, and the successful results obtained in many application fields, make this family of approaches powerful and widely used. In particular, this work explores the application of these techniques to time series forecasting. Although classical statistical-based methods provide reasonably good results, the application of data mining outperforms them. Hence, this work faces two main challenges: (i) to provide a compact mathematical formulation of the most widely used techniques; (ii) to review the latest works on time series forecasting and, as a case study, those related to electricity price and demand markets.
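
    As a point of reference for the classical statistical baselines the survey compares against, a linear autoregressive forecaster fitted by least squares can be sketched in a few lines (a generic illustration, not a technique from any surveyed paper; `order` and `steps` are arbitrary choices):

```python
import numpy as np

def ar_forecast(series, order=3, steps=1):
    """Fit a linear AR(order) model with intercept by least squares
    on lagged values, then forecast `steps` points ahead by feeding
    predictions back in. A generic classical baseline sketch."""
    series = np.asarray(series, dtype=float)
    # lag matrix: each row is [y_{t-order}, ..., y_{t-1}]
    X = np.column_stack([series[i:len(series) - order + i]
                         for i in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(y))], y, rcond=None)
    history = list(series)
    preds = []
    for _ in range(steps):
        lags = history[-order:]
        preds.append(float(np.dot(coeffs[:-1], lags) + coeffs[-1]))
        history.append(preds[-1])
    return preds
```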

  9. Spherical harmonics based intrasubject 3-D kidney modeling/registration technique applied on partial information

    Science.gov (United States)

    Dillenseger, Jean-Louis; Guillaume, Hélène; Patard, Jean-Jacques

    2006-01-01

    This paper presents a 3D shape reconstruction/intra-patient rigid registration technique used to establish a Nephron-Sparing Surgery preoperative planning. The usual preoperative imaging system is the Spiral CT Urography, which provides successive 3D acquisitions of complementary information on kidney anatomy. Because the kidney is difficult to demarcate from the liver or from the spleen, only limited information on its volume or surface is available. In our paper we propose a methodology allowing a global kidney spatial representation on a spherical harmonics basis. The spherical harmonics are exploited to recover the kidney 3D shape and also to perform intra-patient 3D rigid registration. An evaluation performed on synthetic data showed that this technique presented lower performance than expected for the 3D shape recovery but exhibited registration results slightly more accurate than the ICP technique, with faster computation time. PMID:17073323
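
    The ICP baseline mentioned above repeatedly solves a rigid alignment between matched point sets; that inner step is the SVD-based Kabsch solution, sketched here assuming known correspondences (the paper's spherical-harmonics registration itself is not reproduced):

```python
import numpy as np

def rigid_register(source, target):
    """Best rigid (rotation R + translation t) alignment mapping
    `source` points onto `target` points with known correspondences,
    via the Kabsch/SVD method -- the inner step of one ICP iteration."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    H = (source - cs).T @ (target - ct)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    t = ct - R @ cs
    return R, t
```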

  10. Reformulation linearization technique based branch-and-reduce approach applied to regional water supply system planning

    Science.gov (United States)

    Lan, Fujun; Bayraksan, Güzin; Lansey, Kevin

    2016-03-01

    A regional water supply system design problem that determines pipe and pump design parameters and water flows over a multi-year planning horizon is considered. A non-convex nonlinear model is formulated and solved by a branch-and-reduce global optimization approach. The lower bounding problem is constructed via a three-pronged effort that involves transforming the space of certain decision variables, polyhedral outer approximations, and the Reformulation Linearization Technique (RLT). Range reduction techniques are employed systematically to speed up convergence. Computational results demonstrate the efficiency of the proposed algorithm; in particular, the critical role range reduction techniques could play in RLT based branch-and-bound methods. Results also indicate using reclaimed water not only saves freshwater sources but is also a cost-effective non-potable water source in arid regions. Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/0305215X.2015.1016508.

  11. Comparison between conventional biofilters and biotrickling filters applied to waste bio-drying in terms of atmospheric dispersion and air quality.

    Science.gov (United States)

    Schiavon, Marco; Ragazzi, Marco; Torretta, Vincenzo; Rada, Elena Cristina

    2016-01-01

    Biofiltration has been widely applied to remove odours and volatile organic compounds (VOCs) from industrial off-gas and mechanical-biological waste treatments. However, conventional open biofilters cannot guarantee an efficient dispersion of air pollutants emitted into the atmosphere. The aim of this paper is to compare conventional open biofilters with biotrickling filters (BTFs) in terms of VOC dispersion in the atmosphere and air quality in the vicinity of a hypothetical municipal solid waste bio-drying plant. Simulations of dispersion were carried out regarding two VOCs of interest due to their impact in terms of odours and cancer risk: dimethyl disulphide and benzene, respectively. The use of BTFs, instead of conventional biofilters, led to significant improvements in the odour impact and the cancer risk: when adopting BTFs instead of an open biofilter, the area with an odour concentration > 1 OU m⁻³ and a cancer risk > 10⁻⁶ was reduced by 91.6% and 95.2%, respectively. When replacing the biofilter with BTFs, the annual mean concentrations of odorants and benzene decreased by more than 90% in the vicinity of the plant. These improvements are achieved above all because of the higher release height of BTFs and the higher velocity of the outgoing air flow.

  12. Effect of the reinforcement bar arrangement on the efficiency of electrochemical chloride removal technique applied to reinforced concrete structures

    International Nuclear Information System (INIS)

    Garces, P.; Sanchez de Rojas, M.J.; Climent, M.A.

    2006-01-01

    This paper reports on the research done to find out the effect that different bar arrangements may have on the efficiency of the electrochemical chloride removal (ECR) technique when applied to a reinforced concrete structural member. Five different types of bar arrangements were considered, corresponding to typical structural members such as columns (with single and double bar reinforcing), slabs, beams and footings. ECR was applied in several steps. We observe that the extraction efficiency depends on the reinforcing bar arrangement. A uniform layer set-up favours chloride extraction. Electrochemical techniques were also used to estimate the reinforcing bar corrosion states, as well as measure the corrosion potential, and instant corrosion rate based on the polarization resistance technique. After ECR treatment, a reduction in the corrosion levels is observed falling short of the depassivation threshold

  13. A Dual-Line Detection Rayleigh Scattering Diagnostic Technique for the Combustion of Hydrocarbon Fuels and Filtered UV Rayleigh Scattering for Gas Velocity Measurements

    Science.gov (United States)

    Otugen, M. Volkan

    1997-01-01

    Non-intrusive techniques for the dynamic measurement of gas flow properties such as density, temperature and velocity, are needed in the research leading to the development of new generation high-speed aircraft. Accurate velocity, temperature and density data obtained in ground testing and in-flight measurements can help understand the flow physics leading to transition and turbulence in supersonic, high-altitude flight. Such non-intrusive measurement techniques can also be used to study combustion processes of hydrocarbon fuels in aircraft engines. Reliable, time and space resolved temperature measurements in various combustor configurations can lead to a better understanding of high temperature chemical reaction dynamics, thus leading to improved modeling and better prediction of such flows. In view of this, a research program was initiated at Polytechnic University's Aerodynamics Laboratory with support from NASA Lewis Research Center through grants NAG3-1301 and NAG3-1690. The overall objective of this program has been to develop laser-based, non-contact, space- and time-resolved temperature and velocity measurement techniques. In the initial phase of the program an Nd:YAG laser-based dual-line Rayleigh scattering technique was developed and tested for the accurate measurement of gas temperature in the presence of background laser glare. Effort was next directed towards the development of a filtered, spectrally-resolved Rayleigh/Mie scattering technique with the objective of developing an interferometric method for time-frozen velocity measurements in high-speed flows utilizing the UV line of an Nd:YAG laser and an appropriate molecular absorption filter. This effort included both a search for an appropriate filter material for the 266 nm laser line and the development and testing of several image processing techniques for the fast processing of Fabry-Perot images for velocity and temperature information.
Finally, work was also carried out for the development of

  14. Applied predictive analytics principles and techniques for the professional data analyst

    CERN Document Server

    Abbott, Dean

    2014-01-01

    Learn the art and science of predictive analytics - techniques that get results. Predictive analytics is what translates big data into meaningful, usable business information. Written by a leading expert in the field, this guide examines the science of the underlying algorithms as well as the principles and best practices that govern the art of predictive analytics. It clearly explains the theory behind predictive analytics, teaches the methods, principles, and techniques for conducting predictive analytics projects, and offers tips and tricks that are essential for successful predictive mode

  15. Applied techniques for high bandwidth data transfers across wide area networks

    International Nuclear Information System (INIS)

    Lee, J.; Gunter, D.; Tierney, B.; Allcock, B.; Bester, J.; Bresnahan, J.; Tuecke, S.

    2001-01-01

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. From their work developing a scalable distributed network cache, the authors have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). The authors discuss several hardware and software design techniques, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. The authors describe results from the Supercomputing 2000 conference

  16. Monitoring gypsy moth defoliation by applying change detection techniques to Landsat imagery

    Science.gov (United States)

    Williams, D. L.; Stauffer, M. L.

    1978-01-01

    The overall objective of a research effort at NASA's Goddard Space Flight Center is to develop and evaluate digital image processing techniques that will facilitate the assessment of the intensity and spatial distribution of forest insect damage in Northeastern U.S. forests using remotely sensed data from Landsats 1, 2 and C. Automated change detection techniques are presently being investigated as a method of isolating the areas of change in the forest canopy resulting from pest outbreaks. In order to follow the change detection approach, Landsat scene correction and overlay capabilities are utilized to provide multispectral/multitemporal image files of 'defoliation' and 'nondefoliation' forest stand conditions.

  17. People Recognition for Loja ECU911 applying artificial vision techniques

    Directory of Open Access Journals (Sweden)

    Diego Cale

    2016-05-01

    Full Text Available This article presents a technological proposal based on artificial vision which aims to search for people intelligently by using IP video cameras. The current manual searching process is time- and resource-demanding in contrast to an automated one, which means it could be replaced. In order to obtain optimal results, three different artificial vision techniques were analyzed (Eigenfaces, Fisherfaces, Local Binary Patterns Histograms). The selection process considered factors like lighting changes, image quality and changes in the camera's angle of focus. Besides, a literature review was conducted to evaluate several points of view regarding artificial vision techniques.
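
    Of the three techniques compared, Local Binary Patterns Histograms is the most self-contained to illustrate. A minimal radius-1, 8-neighbour LBP descriptor in plain NumPy (no uniform-pattern mapping or spatial grid, unlike full LBPH recognizers such as OpenCV's):

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour Local Binary Pattern histogram of a
    grayscale image: each interior pixel gets an 8-bit code from
    thresholding its neighbours against it; the normalised code
    histogram is the texture descriptor."""
    g = np.asarray(gray).astype(int)
    center = g[1:-1, 1:-1]
    # 8 neighbours in clockwise order, each contributing one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes += (neigh >= center).astype(int) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()
```

    Two face crops are then compared by a histogram distance (e.g. chi-square), which is why the descriptor is relatively robust to the lighting changes the article lists as a selection factor.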

  18. UP1, an example of advanced techniques applied to high level activity dismantling

    International Nuclear Information System (INIS)

    Michel-Noel, M.; Calixte, O.; Blanchard, S.; Bani, J.; Girones, P.; Moitrier, C.; Terry, G.; Bourdy, R.

    2014-01-01

    The UP1 plant on the CEA Marcoule site was dedicated to the processing of spent fuels from the G1, G2 and G3 plutonium-producing reactors. This plant represents 20,000 m² of workshops housing about 1000 hot cells. In 1998, a huge program for the dismantling and cleaning-up of the UP1 plant was launched. CEA has developed new techniques to face the complexity of the dismantling operations. These techniques include immersive virtual reality, laser cutting, a specific manipulator arm called MAESTRO and remote handling. (A.C.)

  19. A Technical Review of Electrochemical Techniques Applied to Microbiologically Influenced Corrosion

    Science.gov (United States)

    1991-01-01

    in the literature for the study of MIC phenomena. Videla [65] has used this technique in a study of the action of Cladosporium resinae growth on the ... ROSALES, Corrosion 44, 638 (1988). 65. H. A. VIDELA, The action of Cladosporium resinae growth on the electrochemical behavior of aluminum. Proc. Int. Conf.

  20. Reduced order modelling techniques for mesh movement strategies as applied to fluid structure interactions

    CSIR Research Space (South Africa)

    Bogaers, Alfred EJ

    2010-01-01

    Full Text Available In this paper, we implement the method of Proper Orthogonal Decomposition (POD) to generate a reduced order model (ROM) of an optimization based mesh movement technique. In the study it is shown that POD can be used effectively to generate a ROM...
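
    In its simplest form, the POD/ROM machinery named above reduces to a truncated SVD of a snapshot matrix. A generic sketch (not the authors' mesh-movement formulation; the `energy` threshold is an illustrative choice):

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Proper Orthogonal Decomposition of a snapshot matrix
    (columns = system states) via SVD; keeps the leading modes
    that capture the given fraction of the total energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r], s[:r]

def rom_reconstruct(basis, state):
    """Reduced-order approximation: project a state onto the POD
    basis and expand it back."""
    return basis @ (basis.T @ state)
```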

  1. MSC/NASTRAN ''expert'' techniques developed and applied to the TFTR poloidal field coils

    International Nuclear Information System (INIS)

    O'Toole, J.A.

    1986-01-01

    The TFTR poloidal field (PF) coils are being analyzed by PPPL and Grumman using MSC/NASTRAN as part of an overall effort to establish the absolute limiting conditions of operation for TFTR. Each of the PF coils will be analyzed in depth, using a detailed set of finite element models. Several of the models developed are quite large because each copper turn, as well as its surrounding insulation, was modeled using solid elements. Several of the finite element models proved large enough to tax the capabilities of the National Magnetic Fusion Energy Computer Center (NMFECC), specifically its disk storage space. To allow the use of substructuring techniques, with their associated data bases, for the larger models, it became necessary to employ certain infrequently used MSC/NASTRAN ''expert'' techniques. The techniques developed used multiple data bases and data base sets to divide each problem into a series of computer runs. For each run, only the data required was kept on active disk space, the remainder being placed in inactive ''FILEM'' storage, thus minimizing the active disk space required at any time and permitting problem solution using the NMFECC. A representative problem using the TFTR OH-1 coil global model provides an example of the techniques developed. The special considerations necessary to obtain proper results are discussed

  2. Nuclear and conventional techniques applied to the analysis of Purhepecha metals of the Pareyon collection

    International Nuclear Information System (INIS)

    Mendez, U.; Tenorio C, D.; Ruvalcaba, J.L.; Lopez, J.A.

    2005-01-01

    The main objective of this investigation was to determine the composition and microstructure of 13 metallic artifacts by means of the nuclear techniques PIXE and RBS, together with conventional ones. The artifacts were made from copper and gold and belonged to the offering of a Tarascan personage located in the 'Matamoros' porch in Uruapan, Michoacan, Mexico. (Author)

  3. Urban field guide: applying social forestry observation techniques to the east coast megalopolis

    Science.gov (United States)

    E. Svendsen; V. Marshall; M.F. Ufer

    2006-01-01

    A changing economy and different lifestyles have altered the meaning of the forest in the northeastern United States, prompting scientists to reconsider the spatial form, stewardship and function of the urban forest. The authors describe how social observation techniques and the employment of a novel, locally based, participatory hand-held monitoring system could aid...

  4. Practising What We Teach: Vocational Teachers Learn to Research through Applying Action Learning Techniques

    Science.gov (United States)

    Lasky, Barbara; Tempone, Irene

    2004-01-01

    Action learning techniques are well suited to the teaching of organisation behaviour students because of their flexibility, inclusiveness, openness, and respect for individuals. They are no less useful as a tool for change for vocational teachers, learning, of necessity, to become researchers. Whereas traditional universities have always had a…

  5. Do trained practice nurses apply motivational interviewing techniques in primary care consultations?

    NARCIS (Netherlands)

    Noordman, J.; Lee, I. van der; Nielen, M.; Vlek, H.; Weijden, T. van der; Dulmen, S. van

    2012-01-01

    Background: Reducing the prevalence of unhealthy lifestyle behaviour could positively influence health. Motivational interviewing (MI) is used to promote change in unhealthy lifestyle behaviour as part of primary or secondary prevention. Whether MI is actually applied as taught is unknown. Practice

  6. Do trained practice nurses apply motivational interviewing techniques in primary care consultations?

    NARCIS (Netherlands)

    Noordman, J.; van Lee, I.; Nielen, M.; Vlek, H.; van Weijden, T.; Dulmen, A.M. van

    2012-01-01

    BACKGROUND: Reducing the prevalence of unhealthy lifestyle behaviour could positively influence health. Motivational interviewing (MI) is used to promote change in unhealthy lifestyle behaviour as part of primary or secondary prevention. Whether MI is actually applied as taught is unknown. Practice

  7. Comparison of applied dose and image quality in staging CT of neuroendocrine tumor patients using standard filtered back projection and adaptive statistical iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Böning, G., E-mail: georg.boening@charite.de [Department of Radiology, Charité, Humboldt-University Medical School, Charitéplatz 1, 10117 Berlin (Germany); Schäfer, M.; Grupp, U. [Department of Radiology, Charité, Humboldt-University Medical School, Charitéplatz 1, 10117 Berlin (Germany); Kaul, D. [Department of Radiation Oncology, Charité, Humboldt-University Medical School, Charitéplatz 1, 10117 Berlin (Germany); Kahn, J. [Department of Radiology, Charité, Humboldt-University Medical School, Charitéplatz 1, 10117 Berlin (Germany); Pavel, M. [Department of Gastroenterology, Charité, Humboldt-University Medical School, Charitéplatz 1, 10117 Berlin (Germany); Maurer, M.; Denecke, T.; Hamm, B.; Streitparth, F. [Department of Radiology, Charité, Humboldt-University Medical School, Charitéplatz 1, 10117 Berlin (Germany)

    2015-08-15

    Highlights: • Iterative reconstruction (IR) in staging CT provides equal objective image quality compared to filtered back projection (FBP). • IR delivers excellent subjective quality and reduces effective dose compared to FBP. • In patients with neuroendocrine tumor (NET) or many other hypervascular abdominal tumors IR can be used without sacrificing diagnostic confidence. - Abstract: Objective: To investigate whether dose reduction via adaptive statistical iterative reconstruction (ASIR) affects image quality and diagnostic accuracy in neuroendocrine tumor (NET) staging. Methods: A total of 28 NET patients were enrolled in the study. Inclusion criteria were histologically proven NET and visible tumor in abdominal computed tomography (CT). In an intraindividual study design, the patients underwent a baseline CT (filtered back projection, FBP) and follow-up CT (ASIR 40%) using matched scan parameters. Image quality was assessed subjectively using a 5-grade scoring system and objectively by determining signal-to-noise ratio (SNR) and contrast-to-noise ratios (CNRs). The applied volume computed tomography dose index (CTDI{sub vol}) of each scan was taken from the dose report. Results: ASIR 40% significantly reduced CTDI{sub vol} (10.17 ± 3.06 mGy [FBP], 6.34 ± 2.25 mGy [ASIR]) by 37.6% (p < 0.001) and significantly increased CNRs compared to FBP (complete tumor-to-liver, 2.76 ± 1.87 [FBP], 3.2 ± 2.32 [ASIR], p < 0.05; complete tumor-to-muscle, 2.74 ± 2.67 [FBP], 4.31 ± 4.61 [ASIR], p < 0.05). Subjective scoring revealed no significant changes for diagnostic confidence (5.0 ± 0 [FBP], 5.0 ± 0 [ASIR]), visibility of suspicious lesions (4.8 ± 0.5 [FBP], 4.8 ± 0.5 [ASIR]) and artifacts (5.0 ± 0 [FBP], 5.0 ± 0 [ASIR]). ASIR 40% significantly decreased scores for noise (4.3 ± 0.6 [FBP], 4.0 ± 0.8 [ASIR], p < 0.05), contrast (4.4 ± 0.6 [FBP], 4.1 ± 0.8 [ASIR], p < 0.001) and visibility of small structures (4.5 ± 0.7 [FBP], 4.3 ± 0.8 [ASIR], p < 0
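
    The objective metric above, CNR, is conventionally the difference of mean ROI attenuations divided by image noise (the SD of a homogeneous region). A sketch under that common definition; the study's exact ROI placement and noise measurement may differ:

```python
import numpy as np

def contrast_to_noise(lesion_roi, reference_roi, noise_roi):
    """Contrast-to-noise ratio for CT ROIs: |mean difference| of the
    lesion and reference regions, divided by the standard deviation
    measured in a homogeneous (noise) region."""
    noise = np.std(noise_roi)
    return abs(np.mean(lesion_roi) - np.mean(reference_roi)) / noise
```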

  8. Cancellation of neutral current harmonics by using a four-branch star hybrid filter

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Rodriguez, Pedro; Candela, I.

    2008-01-01

    This paper presents a new technique for filtering current harmonics in three-phase four-wire networks based on the usage of a four-branch star (FBS) filter topology. Based on single-phase inductors and capacitors, the specific layout of the FBS filter topology allows achieving a power filter with two independent and simultaneous resonance frequencies, i.e., one for positive-/negative-sequence and another one for zero-sequence components. The FBS filter topology can work either as a passive filter, when only passive components are employed, or as a hybrid filter, when its performance is improved by integrating a power converter into its structure. This paper analyzes the FBS topology and presents fundamental concepts regarding the control of a generic FBS hybrid power filter. A neutral current hybrid power filter and var compensator is presented as an illustrative example applying the FBS topology.
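
    Each passive branch of a filter such as the FBS topology is tuned through the series LC resonance f = 1/(2π√(LC)). A small sketch of that relation (the component values below are illustrative, not taken from the paper):

```python
import math

def lc_resonance_hz(L_henry, C_farad):
    """Series LC resonance frequency: f = 1 / (2*pi*sqrt(L*C)).
    Used to tune a passive filter branch to a target harmonic."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

# e.g. to trap the 5th harmonic of a 50 Hz grid (250 Hz) with a
# 10 mH branch inductor, solve for C = 1 / ((2*pi*250)**2 * L)
C_branch = 1.0 / ((2.0 * math.pi * 250.0) ** 2 * 10e-3)
```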

  9. A novel cooperative localization algorithm using enhanced particle filter technique in maritime search and rescue wireless sensor network.

    Science.gov (United States)

    Wu, Huafeng; Mei, Xiaojun; Chen, Xinqiang; Li, Junjun; Wang, Jun; Mohapatra, Prasant

    2017-09-29

    Maritime search and rescue (MSR) plays a significant role in Safety of Life at Sea (SOLAS). However, when wireless sensor network (WSN) technology is used in MSR, measurement information becomes inaccurate due to the wave shadow effect. In this paper, we develop a Novel Cooperative Localization Algorithm (NCLA) for MSR by using an enhanced particle filter method to reduce the measurement errors in the observation model caused by the wave shadow effect. First, we account for the mobility of nodes at sea by developing a motion model, the Lagrangian model. Furthermore, we introduce both a state model and an observation model to constitute the system model for the particle filter (PF). To address the impact of the wave shadow effect on the observation model, we derive an optimal parameter via Kullback-Leibler divergence (KLD) to mitigate the error. After the optimal parameter is acquired, an improved likelihood function is presented. Finally, the estimated position is acquired. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
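
    The PF machinery that NCLA builds on can be sketched generically as a bootstrap particle filter: propagate particles through a motion model, weight them by the observation likelihood, and resample. The 1-D random-walk model, noise levels and multinomial resampling below are illustrative stand-ins; the paper's Lagrangian motion model and KLD-derived correction are not reproduced:

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=1000,
                              process_std=1.0, obs_std=2.0, seed=0):
    """Minimal bootstrap PF for a 1-D random-walk state observed in
    Gaussian noise. Returns the weighted-mean state estimate at each
    observation step."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 5.0, n_particles)   # diffuse prior
    estimates = []
    for z in observations:
        # predict: push each particle through the motion model
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # update: weight by the Gaussian observation likelihood
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # resample (multinomial) to fight weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)
```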

  10. Sensing interrogation technique for fiber-optic interferometer type of sensors based on a single-passband RF filter.

    Science.gov (United States)

    Chen, Hao; Zhang, Shiwei; Fu, Hongyan; Zhou, Bin; Chen, Nan

    2016-02-08

    In this paper, a sensing interrogation system for fiber-optic interferometer type of sensors by using a single-passband radio-frequency (RF) filter has been proposed and experimentally demonstrated. The fiber-optic interferometer based sensors can give continuous optical sampling, and along with dispersive medium a single-passband RF frequency response can be achieved. The sensing parameter variation on the fiber-optic interferometer type of sensors will affect their free spectrum range, and thus the peak frequency of the RF filter. By tracking the central frequency of the passband the sensing parameter can be demodulated. As a demonstration, in our experiment a fiber Mach-Zehnder interferometer (FMZI) based temperature sensor has been interrogated. By tracking the peak frequency of the passband the temperature variation can be monitored. In our experiment, sensing responsivities of 10.5 MHz/°C, 20.0 MHz/°C and 41.2 MHz/°C have been achieved when the length of the sensing fiber is 1 m, 2 m and 4 m, respectively.

  11. Parameters and definitions in applied technique quality test for nuclear magnetic resonance imaging system (NMRI)

    International Nuclear Information System (INIS)

    Lin Zhikai; Zhao Lancai

    1999-08-01

    During the past two decades, medical diagnostic imaging techniques such as CT, MRI, PET and DSA have achieved dramatic development. The most striking examples are the application of X-ray computerized tomography (CT) and magnetic resonance imaging in the field of medical diagnosis. Looking forward to the development of diagnostic imaging in the 21st century, it can be predicted that magnetic resonance imaging (MRI) will have ever more widespread applications and play an increasingly important role in clinical diagnosis. The authors also present the measuring methods for some parameters. The parameters described can be used for reference by clinical diagnosticians, MRI operators and medical physicists who engage in image quality assurance (QA) and quality control (QC) when performing MRI acceptance tests and routine tests.

  12. Magnetic resonance techniques applied to the diagnosis and treatment of Parkinson’s disease

    Directory of Open Access Journals (Sweden)

    Benito eDe Celis Alonso

    2015-07-01

    Full Text Available Parkinson’s disease affects at least 10 million people worldwide. It is a neurodegenerative disease which is currently diagnosed by neurological examination. No neuroimaging investigation or blood biomarker is available to aid diagnosis and prognosis. Most effort toward diagnosis using magnetic resonance has been focused on the use of structural/anatomical neuroimaging and diffusion tensor imaging. However, deep brain stimulation, a current strategy for treating Parkinson’s disease, is guided by magnetic resonance imaging. For clinical prognosis, diagnosis and follow-up investigations, blood oxygen level–dependent magnetic resonance imaging, diffusion tensor imaging, spectroscopy and transcranial magnetic stimulation have been used. These techniques represent the state of the art in the last five years. Here, we focus on magnetic resonance techniques for the diagnosis and treatment of Parkinson’s disease.

  13. The photoluminescence technique applied to the investigation of structural imperfections in quantum wells of semiconducting material

    Directory of Open Access Journals (Sweden)

    Eliermes Arraes Meneses

    2005-02-01

    Full Text Available Photoluminescence is one of the most used spectroscopy techniques for the study of the optical properties of semiconducting materials and heterostructures. In this work the potentiality of this technique is explored through the investigation and characterization of structural imperfections originated from fluctuations in the chemical composition of ternary and quaternary alloys, from interface roughnesses, and from unintentional compounds formed by the chemical elements intermixing at the interfaces. Samples of GaAs/AlGaAs, GaAsSb/GaAs, GaAsSbN/GaAs and GaAs/GaInP quantum well structures are analyzed to verify the influence of the structural imperfections on the PL spectra

  14. Artificial intelligence techniques applied to hourly global irradiance estimation from satellite-derived cloud index

    Energy Technology Data Exchange (ETDEWEB)

    Zarzalejo, L.F.; Ramirez, L.; Polo, J. [DER-CIEMAT, Madrid (Spain). Renewable Energy Dept.

    2005-07-01

    Artificial intelligence techniques, such as fuzzy logic and neural networks, have been used for estimating hourly global radiation from satellite images. The models have been fitted to measured global irradiance data from 15 Spanish terrestrial stations. Both satellite imaging data and terrestrial information from the years 1994, 1995 and 1996 were used. The results of these artificial intelligence models were compared to a multivariate regression based upon the Heliosat I model. Generally better behaviour was observed for the artificial intelligence models. (author)

  15. 3D-Laser-Scanning Technique Applied to Bulk Density Measurements of Apollo Lunar Samples

    Science.gov (United States)

    Macke, R. J.; Kent, J. J.; Kiefer, W. S.; Britt, D. T.

    2015-01-01

    In order to better interpret gravimetric data from orbiters such as GRAIL and LRO to understand the subsurface composition and structure of the lunar crust, it is important to have a reliable database of the density and porosity of lunar materials. To this end, we have been surveying these physical properties in both lunar meteorites and Apollo lunar samples. To measure porosity, both grain density and bulk density are required. For bulk density, our group has historically utilized sub-mm bead immersion techniques extensively, though several factors have made this technique problematic for our work with Apollo samples. Samples allocated for measurement are often smaller than optimal for the technique, leading to large error bars. Also, for some samples we were required to use pure alumina beads instead of our usual glass beads. The alumina beads were subject to undesirable static effects, producing unreliable results. Other investigators have tested the use of 3D laser scanners on meteorites for measuring bulk volumes. Early work, though promising, was plagued with difficulties including poor response on dark or reflective surfaces, difficulty reproducing sharp edges, and long processing times for producing shape models. Due to progress in technology, however, laser scanners have improved considerably in recent years. We tested this technique on 27 lunar samples in the Apollo collection using a scanner at NASA Johnson Space Center. We found it to be reliable and more precise than beads, with the added benefit that it involves no direct contact with the sample, enabling the study of particularly friable samples for which bead immersion is not possible.
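
    The porosity the survey targets follows directly from the two measured densities: φ = 1 − ρ_bulk/ρ_grain. A trivial sketch with illustrative (not Apollo) values:

```python
def porosity(bulk_density, grain_density):
    """Porosity from bulk and grain density (same units):
    phi = 1 - rho_bulk / rho_grain."""
    return 1.0 - bulk_density / grain_density

# e.g. a rock with bulk density 2.9 g/cm^3 and grain density
# 3.2 g/cm^3 has porosity of about 9.4% (illustrative values)
phi = porosity(2.9, 3.2)
```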

  16. Artificial intelligence techniques applied to hourly global irradiance estimation from satellite-derived cloud index

    International Nuclear Information System (INIS)

    Zarzalejo, Luis F.; Ramirez, Lourdes; Polo, Jesus

    2005-01-01

    Artificial intelligence techniques, such as fuzzy logic and neural networks, have been used for estimating hourly global radiation from satellite images. The models have been fitted to measured global irradiance data from 15 Spanish terrestrial stations. Both satellite imaging data and terrestrial information from the years 1994, 1995 and 1996 were used. The results of these artificial intelligence models were compared to a multivariate regression based upon the Heliosat I model. Generally better behaviour was observed for the artificial intelligence models

  17. The Ecological Profiles Technique applied to data from Lichtenburg, South Africa

    Directory of Open Access Journals (Sweden)

    J. W. Morris

    1974-12-01

    Full Text Available The method of ecological profiles and information shared between species and ecological variables, developed in France, is described for the first time in English. Preliminary results, using the technique on Bankenveld quadrat data from Lichtenburg, Western Transvaal, are given. It is concluded that the method has great potential value for the understanding of the autecology of South African species provided that the sampling method is appropriate.

  18. Improving throughput and user experience for information intensive websites by applying HTTP compression technique.

    Science.gov (United States)

    Malla, Ratnakar

    2008-11-06

    HTTP compression is a technique specified as part of the W3C HTTP 1.0 standard. It allows HTTP servers to take advantage of the GZIP compression technology that is built into the latest browsers. A brief survey of medical informatics websites shows that compression is not enabled. With compression enabled, downloaded file sizes are reduced by more than 50%, and typical transaction time is reduced from 20 to 8 minutes, thus providing a better user experience.
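The size reduction the abstract reports can be reproduced with the GZIP algorithm from the Python standard library; the payload below is an invented, repetitive stand-in for an information-intensive page, not data from the study (enabling compression server-side is a configuration matter, e.g. a compression module in the HTTP server).

```python
import gzip

# Invented, repetitive HTML-like payload standing in for a data-heavy page.
page = b"<tr><td>record</td><td>value</td></tr>\n" * 2000
compressed = gzip.compress(page)

saving = 1 - len(compressed) / len(page)
print(f"original {len(page)} B, gzipped {len(compressed)} B ({saving:.0%} smaller)")
```

On repetitive markup like this, the saving is far above the 50% quoted above; real pages compress less but typically still beyond half.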

  19. Acoustic Emission and Echo Signal Compensation Techniques Applied to an Ultrasonic Logging-While-Drilling Caliper.

    Science.gov (United States)

    Yao, Yongchao; Ju, Xiaodong; Lu, Junqiang; Men, Baiyong

    2017-06-10

    A logging-while-drilling (LWD) caliper is a tool used for the real-time measurement of a borehole diameter in oil drilling engineering. This study introduces the mechanical structure and working principle of a new LWD caliper based on ultrasonic distance measurement (UDM). The detection range is a major performance index of a UDM system. This index is determined by the blind zone length and remote reflecting interface detection capability of the system. To reduce the blind zone length and detect near the reflecting interface, a full-bridge acoustic emission technique based on a bootstrap gate driver (BGD) and metal-oxide-semiconductor field-effect transistors (MOSFET) is designed by analyzing the working principle and impedance characteristics of a given piezoelectric transducer. To detect the remote reflecting interface and reduce the dynamic range of the received echo signals, the relationships between the echo amplitude and propagation distance of ultrasonic waves are determined. A signal compensation technique based on time-varying amplification theory, which can automatically change the gain according to the echo arrival time, is designed. Lastly, the aforementioned techniques and corresponding circuits are experimentally verified. Results show that the blind zone length in the UDM system of the LWD caliper is significantly reduced and the capability to detect the remote reflecting interface is considerably improved.
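The relationship between echo amplitude and propagation distance, and the time-varying gain that compensates for it, can be sketched with a toy model; the spreading-plus-attenuation form and all parameter values below are illustrative assumptions, not the authors' calibration.

```python
import math

C_MUD = 1500.0   # assumed sound speed in the drilling fluid, m/s
ALPHA = 2.0      # assumed attenuation coefficient, Np/m

def echo_amplitude(r, a0=1.0):
    """Toy echo model: spherical spreading (1/r) plus exponential attenuation."""
    return a0 * math.exp(-ALPHA * r) / r

def tvg_gain(t):
    """Time-varying gain that inverts the model; t is the two-way travel time (s)."""
    r = C_MUD * t / 2.0          # one-way distance to the reflecting interface
    return r * math.exp(ALPHA * r)

# Raw echoes from 0.05 m and 0.30 m differ strongly in level,
# but are equalized by applying the gain at their respective arrival times.
for r in (0.05, 0.30):
    t = 2.0 * r / C_MUD
    print(r, echo_amplitude(r) * tvg_gain(t))
```

Applying the gain as a function of arrival time collapses the dynamic range of the received echoes, which is the purpose the abstract describes.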

  20. An efficient permeability scaling-up technique applied to the discretized flow equations

    Energy Technology Data Exchange (ETDEWEB)

    Urgelli, D.; Ding, Yu [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

    Grid-block permeability scaling-up for numerical reservoir simulations has been discussed for a long time in the literature. It is now recognized that a full permeability tensor is needed to get an accurate reservoir description at large scale. However, two major difficulties are encountered: (1) grid-block permeability cannot be properly defined because it depends on boundary conditions; (2) discretization of flow equations with a full permeability tensor is not straightforward and little work has been done on this subject. In this paper, we propose a new method, which allows us to get around both difficulties. As the two major problems are closely related, a global approach will preserve the accuracy. So, in the proposed method, the permeability up-scaling technique is integrated in the discretized numerical scheme for flow simulation. The permeability is scaled-up via the transmissibility term, in accordance with the fluid flow calculation in the numerical scheme. A finite-volume scheme is particularly studied, and the transmissibility scaling-up technique for this scheme is presented. Some numerical examples are tested for flow simulation. This new method is compared with some published numerical schemes for full permeability tensor discretization where the full permeability tensor is scaled-up through various techniques. Comparing the results with fine grid simulations shows that the new method is more accurate and more efficient.
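A minimal illustration of scaling permeability through the transmissibility term is the classical two-point, half-block harmonic combination used in finite-volume schemes; the sketch below is a simplified 1-D scalar version under assumed unit geometry, not the full-tensor method of the paper.

```python
def transmissibility(k1, k2, dx1, dx2, area):
    """Two-point flux transmissibility between neighbouring cells:
    harmonic combination of the two half-cell contributions."""
    t1 = 2.0 * area * k1 / dx1   # half-transmissibility of cell 1
    t2 = 2.0 * area * k2 / dx2   # half-transmissibility of cell 2
    return t1 * t2 / (t1 + t2)

# Series flow through contrasting cells is dominated by the low-permeability one.
print(transmissibility(k1=100.0, k2=1.0, dx1=1.0, dx2=1.0, area=1.0))
```

The harmonic form is why a single high-permeability block cannot compensate for a low-permeability neighbour, a behaviour any upscaling of the flow equations must preserve.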

  1. Metal oxide collectors for storing matter technique applied in secondary ion mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Miśnik, Maciej [Institute of Tele and Radio Technology, ul. Ratuszowa 11, 03-450 Warszawa (Poland); Gdańsk University of Technology (Poland); Konarski, Piotr [Institute of Tele and Radio Technology, ul. Ratuszowa 11, 03-450 Warszawa (Poland); Zawada, Aleksander [Institute of Tele and Radio Technology, ul. Ratuszowa 11, 03-450 Warszawa (Poland); Military University of Technology, Warszawa (Poland)

    2016-03-15

    We present results of the use of metal and metal oxide substrates that serve as collectors in ‘storing matter’, a quantitative technique of secondary ion mass spectrometry (SIMS). This technique allows the two base processes of secondary ion formation in SIMS to be separated: the process of ion sputtering is separated from the process of ionisation. The technique allows sputtering of the analysed sample and storing of the sputtered material, with sub-monolayer coverage, onto a collector surface. Such deposits can then be analysed by SIMS, and as a result the so-called ‘matrix effects’ are significantly reduced. We perform deposition of the sputtered material onto Ti and Cu substrates and also onto metal oxide substrates such as molybdenum, titanium, tin and indium oxides. The process of sputtering is carried out within the same vacuum chamber where the SIMS analysis of the collected material is performed. For sputtering and SIMS analysis of the deposited material we use a 5 keV Ar{sup +} beam of 500 nA. The presented results are obtained with the use of stationary collectors. Here we present a case study of chromium. The obtained results show that the molybdenum and titanium oxide substrates used as collectors increase the useful yield by two orders of magnitude with respect to pure elemental collectors such as Cu and Ti. Here we define useful yield as the ratio of the number of secondary ions detected during SIMS analysis to the number of atoms sputtered during the deposition process.

  2. Applying the sterile insect technique to the control of insect pests

    International Nuclear Information System (INIS)

    LaChance, L.E.; Klassen, W.

    1991-01-01

    The sterile insect technique involves the mass-rearing of insects, which are sterilized by gamma rays from a 60Co source before being released in a controlled fashion into nature. Matings between the released sterile insects and native insects produce no progeny, and so if enough of these matings occur the pest population can be controlled or even eradicated. A modification of the technique, especially suitable for the suppression of moths and butterflies, is called the F1, or inherited sterility, method. In this, lower radiation doses are used such that the released males are only partially sterile (30-60%) and the females are fully sterile. When released males mate with native females some progeny are produced, but they are completely sterile. Thus, full expression of the sterility is delayed by one generation. This article describes the use of the sterile insect technique in controlling the screwworm fly, the tsetse fly, the medfly, the pink bollworm and the melon fly, and of the F1 sterility method in the eradication of local gypsy moth infestations. 18 refs, 5 figs, 1 tab

  3. Applying Data-mining techniques to study drought periods in Spain

    Science.gov (United States)

    Belda, F.; Penades, M. C.

    2010-09-01

    Data-mining is a technique that can be used to interact with large databases and to help discover relations between parameters by extracting information from massive and multiple data archives. Drought affects many economic and social sectors, from agriculture to transportation, including urban water deficits and the development of modern industries. Given these problems and the geographical and temporal distribution of drought, it is difficult to find a single definition of drought. Improving the understanding of climatic indices is necessary to reduce the impacts of drought and to facilitate quick decisions regarding this problem. The main objective is to analyze drought periods from 1950 to 2009 in Spain. We use several kinds of information, with different formats, sources and transmission modes. We use satellite-based vegetation indices and dryness indices for several temporal periods. We use daily and monthly precipitation and temperature data and soil moisture data from a numerical weather model. We mainly calculate the Standardized Precipitation Index (SPI), which has been widely used in the literature. We use OLAP-mining techniques for the discovery of association rules between remote-sensing data, numerical weather model output and climatic indices. Time-series data-mining techniques organize data as a sequence of events, with each event having a time of recurrence, to cluster the data into groups of records with similar characteristics. A prior climatological classification is necessary if we want to study drought periods over all of Spain.
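A simplified SPI computation can be sketched as follows, assuming SciPy is available: fit a gamma distribution to the accumulated precipitation series and map the fitted CDF through the inverse standard normal. Operational SPI also treats zero-precipitation cases separately, which this sketch omits; the data below are synthetic, not the Spanish station records.

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Simplified Standardized Precipitation Index: fit a gamma distribution
    to the series and transform each value to a standard-normal quantile.
    (Operational SPI also handles zero-precipitation months separately.)"""
    precip = np.asarray(precip, dtype=float)
    shape, loc, scale = stats.gamma.fit(precip, floc=0)   # fix location at zero
    cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)

rng = np.random.default_rng(0)
monthly = rng.gamma(shape=2.0, scale=30.0, size=360)  # synthetic monthly totals, mm
index = spi(monthly)
print(index.min(), index.max())  # negative values indicate drier-than-typical months
```

Drought periods are then flagged wherever the index stays below a chosen threshold (commonly -1) for consecutive months.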

  4. Applying the Wizard-of-Oz Technique to Multimodal Human-Robot Dialogue

    OpenAIRE

    Marge, Matthew; Bonial, Claire; Byrne, Brendan; Cassidy, Taylor; Evans, A. William; Hill, Susan G.; Voss, Clare

    2017-01-01

    Our overall program objective is to provide more natural ways for soldiers to interact and communicate with robots, much like how soldiers communicate with other soldiers today. We describe how the Wizard-of-Oz (WOz) method can be applied to multimodal human-robot dialogue in a collaborative exploration task. While the WOz method can help design robot behaviors, traditional approaches place the burden of decisions on a single wizard. In this work, we consider two wizards to stand in for robot...

  5. Assessment of ground-based monitoring techniques applied to landslide investigations

    Science.gov (United States)

    Uhlemann, S.; Smith, A.; Chambers, J.; Dixon, N.; Dijkstra, T.; Haslam, E.; Meldrum, P.; Merritt, A.; Gunn, D.; Mackay, J.

    2016-01-01

    A landslide complex in the Whitby Mudstone Formation at Hollin Hill, North Yorkshire, UK is periodically re-activated in response to rainfall-induced pore-water pressure fluctuations. This paper compares long-term measurements (i.e., 2009-2014) obtained from a combination of monitoring techniques that have been employed together for the first time on an active landslide. The results highlight the relative performance of the different techniques, and can provide guidance for researchers and practitioners for selecting and installing appropriate monitoring techniques to assess unstable slopes. Particular attention is given to the spatial and temporal resolutions offered by the different approaches that include: Real Time Kinematic-GPS (RTK-GPS) monitoring of a ground surface marker array, conventional inclinometers, Shape Acceleration Arrays (SAA), tilt meters, active waveguides with Acoustic Emission (AE) monitoring, and piezometers. High spatial resolution information has allowed locating areas of stability and instability across a large slope. This has enabled identification of areas where further monitoring efforts should be focused. High temporal resolution information allowed the capture of 'S'-shaped slope displacement-time behaviour (i.e. phases of slope acceleration, deceleration and stability) in response to elevations in pore-water pressures. This study shows that a well-balanced suite of monitoring techniques that provides high temporal and spatial resolutions on both measurement and slope scale is necessary to fully understand failure and movement mechanisms of slopes. In the case of the Hollin Hill landslide it enabled detailed interpretation of the geomorphological processes governing landslide activity. It highlights the benefit of regularly surveying a network of GPS markers to determine areas for installation of movement monitoring techniques that offer higher resolution both temporally and spatially. The small sensitivity of tilt meter measurements

  6. Balanced microwave filters

    CERN Document Server

    Hong, Jiasheng; Medina, Francisco; Martín, Ferran

    2018-01-01

    This book presents and discusses strategies for the design and implementation of common-mode suppressed balanced microwave filters, including, narrowband, wideband, and ultra-wideband filters This book examines differential-mode, or balanced, microwave filters by discussing several implementations of practical realizations of these passive components. Topics covered include selective mode suppression, designs based on distributed and semi-lumped approaches, multilayer technologies, defect ground structures, coupled resonators, metamaterials, interference techniques, and substrate integrated waveguides, among others. Divided into five parts, Balanced Microwave Filters begins with an introduction that presents the fundamentals of balanced lines, circuits, and networks. Part 2 covers balanced transmission lines with common-mode noise suppression, including several types of common-mode filters and the application of such filters to enhance common-mode suppression in balanced bandpass filters. Next, Part 3 exa...

  7. APPLICATION OF RANKING BASED ATTRIBUTE SELECTION FILTERS TO PERFORM AUTOMATED EVALUATION OF DESCRIPTIVE ANSWERS THROUGH SEQUENTIAL MINIMAL OPTIMIZATION MODELS

    Directory of Open Access Journals (Sweden)

    C. Sunil Kumar

    2014-10-01

    Full Text Available In this paper, we study the performance of various models for automated evaluation of descriptive answers by using rank-based feature selection filters for dimensionality reduction. We quantitatively analyze the best feature selection technique from amongst five rank-based feature selection techniques, namely the Chi squared filter, Information gain filter, Gain ratio filter, Relief filter and Symmetrical uncertainty filter. We use Sequential Minimal Optimization with a polynomial kernel to build models, and we evaluate the models across various parameters such as Accuracy, Time to build models, Kappa, Mean Absolute Error and Root Mean Squared Error. Except for the Relief filter, the accuracies obtained with all filter-applied models are at least 4% better than the accuracies obtained with models with no filters applied. The accuracies recorded are the same across the Chi squared filter, Information gain filter, Gain ratio filter and Symmetrical uncertainty filter. Therefore accuracy alone is not the determinant in selecting the best filter. The time taken to build models, Kappa, Mean Absolute Error and Root Mean Squared Error played a major role in determining the effectiveness of the filters. The overall rank aggregation metric of the Symmetrical uncertainty filter is 45, which is better by 1 rank than the rank aggregation metric of the Information gain filter, the nearest contender to the Symmetrical uncertainty filter. The Symmetrical uncertainty rank aggregation metric is better by 3, 6 and 112 ranks, respectively, when compared to the rank aggregation metrics of the Chi squared filter, Gain ratio filter and Relief filter. Through these quantitative measurements, we conclude that Symmetrical uncertainty attribute evaluation is the overall best performing rank-based feature selection algorithm applicable for auto evaluation of descriptive answers.
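Of the five filters named above, the chi-squared filter is the easiest to sketch from first principles; the minimal NumPy version below scores non-negative features against class labels via the usual observed-vs-expected contingency statistic (the toy data are invented, and this is not the authors' WEKA setup).

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-squared filter statistic for non-negative features vs. class labels:
    higher score = stronger feature/class association."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    classes = np.unique(y)
    observed = np.array([X[y == c].sum(axis=0) for c in classes])
    class_prob = np.array([(y == c).mean() for c in classes])
    expected = np.outer(class_prob, X.sum(axis=0))
    return ((observed - expected) ** 2 / expected).sum(axis=0)

# Feature 0 tracks the label perfectly, feature 1 is pure noise.
X = np.array([[1, 0], [1, 1], [0, 0], [0, 1], [1, 0], [0, 1]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0])
scores = chi2_scores(X, y)
ranking = np.argsort(scores)[::-1]   # best feature first
print(scores, ranking)
```

The filter then keeps the top-k features of `ranking` before training the classifier, which is the dimensionality-reduction step the abstract evaluates.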

  8. Analyzing Convergence in e-Learning Resource Filtering Based on ACO Techniques: A Case Study with Telecommunication Engineering Students

    Science.gov (United States)

    Munoz-Organero, Mario; Ramirez, Gustavo A.; Merino, Pedro Munoz; Kloos, Carlos Delgado

    2010-01-01

    The use of swarm intelligence techniques in e-learning scenarios provides a way to combine simple interactions of individual students to solve a more complex problem. After getting some data from the interactions of the first students with a central system, the use of these techniques converges to a solution that the rest of the students can…

  9. The differential dieaway technique applied to the measurement of the fissile content of drums of cement encapsulated waste

    International Nuclear Information System (INIS)

    Swinhoe, M.T.

    1986-01-01

    This report describes calculations of the differential dieaway technique as applied to cement encapsulated waste. The main differences from previous applications of the technique are that only one detector position is used (diametrically opposite the neutron source) and the chamber walls are made of concrete. The results show that by rotating the drum the response to fissile material across the central plane of the drum can be made relatively uniform. The absolute size of the response is about 0.4 counts per minute per gram fissile for a neutron source of 10^8 neutrons per second. Problems of neutron and gamma background and water content are considered. (author)

  10. Applied mathematics

    International Nuclear Information System (INIS)

    Nedelec, J.C.

    1988-01-01

    The 1988 progress report of the Applied Mathematics center (Polytechnic School, France) is presented. The research fields of the Center are scientific computation, probabilities and statistics, and video image synthesis. The research topics developed are: the analysis of numerical methods, the mathematical analysis of the fundamental models of physics and mechanics, the numerical solution of complex models related to industrial problems, stochastic calculus and Brownian motion, stochastic partial differential equations, the identification of adaptive filtering parameters, discrete element systems, statistics, stochastic control, and image synthesis techniques for education and research programs. The published papers, the congress communications and the theses are listed [fr

  11. Enhanced nonlinear iterative techniques applied to a non-equilibrium plasma flow

    Energy Technology Data Exchange (ETDEWEB)

    Knoll, D.A.; McHugh, P.R. [Idaho National Engineering Lab., Idaho Falls, ID (United States)

    1996-12-31

    We study the application of enhanced nonlinear iterative methods to the steady-state solution of a system of two-dimensional convection-diffusion-reaction partial differential equations that describe the partially-ionized plasma flow in the boundary layer of a tokamak fusion reactor. This system of equations is characterized by multiple time and spatial scales, and contains highly anisotropic transport coefficients due to a strong imposed magnetic field. We use Newton's method to linearize the nonlinear system of equations resulting from an implicit, finite volume discretization of the governing partial differential equations, on a staggered Cartesian mesh. The resulting linear systems are neither symmetric nor positive definite, and are poorly conditioned. Preconditioned Krylov iterative techniques are employed to solve these linear systems. We investigate both a modified and a matrix-free Newton-Krylov implementation, with the goal of reducing CPU cost associated with the numerical formation of the Jacobian. A combination of a damped iteration, one-way multigrid and a pseudo-transient continuation technique are used to enhance global nonlinear convergence and CPU efficiency. GMRES is employed as the Krylov method with Incomplete Lower-Upper (ILU) factorization preconditioning. The goal is to construct a combination of nonlinear and linear iterative techniques for this complex physical problem that optimizes trade-offs between robustness, CPU time, memory requirements, and code complexity. It is shown that a one-way multigrid implementation provides significant CPU savings for fine grid calculations. Performance comparisons of the modified Newton-Krylov and matrix-free Newton-Krylov algorithms will be presented.

  12. Sensitivity analysis techniques applied to a system of hyperbolic conservation laws

    International Nuclear Information System (INIS)

    Weirs, V. Gregory; Kamm, James R.; Swiler, Laura P.; Tarantola, Stefano; Ratto, Marco; Adams, Brian M.; Rider, William J.; Eldred, Michael S.

    2012-01-01

    Sensitivity analysis comprises techniques to quantify the effects of the input variables on a set of outputs. In particular, sensitivity indices can be used to infer which input parameters most significantly affect the results of a computational model. With continually increasing computing power, sensitivity analysis has become an important technique by which to understand the behavior of large-scale computer simulations. Many sensitivity analysis methods rely on sampling from distributions of the inputs. Such sampling-based methods can be computationally expensive, requiring many evaluations of the simulation; in this case, the Sobol' method provides an easy and accurate way to compute variance-based measures, provided a sufficient number of model evaluations are available. As an alternative, meta-modeling approaches have been devised to approximate the response surface and estimate various measures of sensitivity. In this work, we consider a variety of sensitivity analysis methods, including different sampling strategies, different meta-models, and different ways of evaluating variance-based sensitivity indices. The problem we consider is the 1-D Riemann problem. By a careful choice of inputs, discontinuous solutions are obtained, leading to discontinuous response surfaces; such surfaces can be particularly problematic for meta-modeling approaches. The goal of this study is to compare the estimated sensitivity indices with exact values and to evaluate the convergence of these estimates with increasing sample sizes and under an increasing number of meta-model evaluations. - Highlights: ► Sensitivity analysis techniques for a model shock physics problem are compared. ► The model problem and the sensitivity analysis problem have exact solutions. ► Subtle details of the method for computing sensitivity indices can affect the results.
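First-order variance-based indices of the kind discussed above can be estimated with the Sobol'/Saltelli pick-and-freeze scheme; the sketch below applies it to an invented additive test model with known indices (S1 = 0.8, S2 = 0.2), assuming independent uniform inputs, rather than to the Riemann problem of the study.

```python
import numpy as np

def sobol_first_order(f, d, n, rng):
    """First-order Sobol' indices by the Saltelli pick-and-freeze estimator,
    assuming independent U(0,1) inputs."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]           # resample only the i-th input
        S[i] = np.mean(fB * (f(AB) - fA)) / var
    return S

# Additive test model: Var(Y) = (2^2 + 1^2)/12, so S1 = 4/5 and S2 = 1/5 exactly.
f = lambda x: 2.0 * x[:, 0] + 1.0 * x[:, 1]
S = sobol_first_order(f, d=2, n=200_000, rng=np.random.default_rng(1))
print(S)  # close to [0.8, 0.2]
```

Because this test model has exact indices, the Monte Carlo estimate can be checked directly, mirroring the study's comparison of estimated indices against exact values.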

  13. Applied Protein and Molecular Techniques for Characterization of B Cell Neoplasms in Horses

    Science.gov (United States)

    Badial, Peres R.; Tallmadge, Rebecca L.; Miller, Steven; Stokol, Tracy; Richards, Kristy; Borges, Alexandre S.

    2015-01-01

    Mature B cell neoplasms cover a spectrum of diseases involving lymphoid tissues (lymphoma) or blood (leukemia), with an overlap between these two presentations. Previous studies describing equine lymphoid neoplasias have not included analyses of clonality using molecular techniques. The objective of this study was to use molecular techniques to advance the classification of B cell lymphoproliferative diseases in five adult equine patients with a rare condition of monoclonal gammopathy, B cell leukemia, and concurrent lymphadenopathy (lymphoma/leukemia). The B cell neoplasms were phenotypically characterized by gene and cell surface molecule expression, secreted immunoglobulin (Ig) isotype concentrations, Ig heavy-chain variable (IGHV) region domain sequencing, and spectratyping. All five patients had hyperglobulinemia due to IgG1 or IgG4/7 monoclonal gammopathy. Peripheral blood leukocyte immunophenotyping revealed high proportions of IgG1- or IgG4/7-positive cells and relative T cell lymphopenia. Most leukemic cells lacked the surface B cell markers CD19 and CD21. IGHG1 or IGHG4/7 gene expression was consistent with surface protein expression, and secreted isotype and Ig spectratyping revealed one dominant monoclonal peak. The mRNA expression of the B cell-associated developmental genes EBF1, PAX5, and CD19 was high compared to that of the plasma cell-associated marker CD38. Sequence analysis of the IGHV domain of leukemic cells revealed mutated Igs. In conclusion, the protein and molecular techniques used in this study identified neoplastic cells compatible with a developmental transition between B cell and plasma cell stages, and they can be used for the classification of equine B cell lymphoproliferative disease. PMID:26311245

  14. X-ray Computed Microtomography technique applied for cementitious materials: A review.

    Science.gov (United States)

    da Silva, Ítalo Batista

    2018-04-01

    The main objective of this article is to present a bibliographical review of the use of the X-ray microtomography method in 3D image processing of the microstructure of cementitious materials, analyzing the pore microstructure and connectivity network and enabling a relationship to be built between permeability and porosity. The use of this technique enables the understanding of the physical, chemical and mechanical properties of cementitious materials, with good published results; the quality and quantity of accessible information are significant and may contribute to the study and development of cementitious materials. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. A review of post-modern management techniques as currently applied to Turkish forestry.

    Science.gov (United States)

    Dölarslan, Emre Sahin

    2009-01-01

    This paper reviews the effects of six post-modern management concepts as applied to Turkish forestry. Up to now, Turkish forestry has been constrained, both in terms of its operations and internal organization, by a highly bureaucratic system. The application of new thinking in forestry management, however, has recently resulted in new organizational and production concepts that promise to address problems specific to this Turkish industry and bring about positive changes. This paper will elucidate these specific issues and demonstrate how post-modern management thinking is influencing the administration and operational capacity of Turkish forestry within its current structure.

  16. Applying stakeholder Delphi techniques for planning sustainable use of aquatic resources

    DEFF Research Database (Denmark)

    Lund, Søren; Banta, Gary Thomas; Bunting, Stuart W

    2015-01-01

    The HighARCS (Highland Aquatic Resources Conservation and Sustainable Development) project was a participatory research effort to map and better understand the patterns of resource use and livelihoods of communities who utilize highland aquatic resources in five sites across China, India...... and Vietnam. The purpose of this paper is to give an account of how the stakeholder Delphi method was adapted and applied to support the participatory integrated action planning for sustainable use of aquatic resources facilitated within the HighARCS project. An account of the steps taken and results recorded...

  17. Control System Design of Shunt Active Power Filter Based on Active Disturbance Rejection and Repetitive Control Techniques

    Directory of Open Access Journals (Sweden)

    Le Ge

    2014-01-01

    Full Text Available Relying on joint active disturbance rejection control (ADRC) and repetitive control (RC), this paper proposes a compound control law for the active power filter (APF) current control system. According to the theory of ADRC, the uncertainties in the model and from the outside circumstances are considered as an unknown disturbance to the system. The extended state observer can estimate the unknown disturbance. Next, RC is introduced into the current loop to improve the steady-state characteristics. The ADRC is used to obtain a good dynamic performance, and RC is used to obtain a good static performance. Good simulation results are obtained through choosing and changing the parameters, and the feasibility, adaptability, and robustness of the control are verified by these results.
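The extended state observer at the heart of ADRC can be sketched for a first-order plant: the observer estimates both the state and a lumped "total disturbance", which the control then cancels. The plant, disturbance, and bandwidth tuning below are illustrative assumptions, not the APF model of the paper.

```python
import math

# First-order plant x' = f(t) + b0*u, where f lumps model error and disturbance.
dt, b0 = 1e-4, 1.0
wo = 500.0                         # observer bandwidth (rad/s): a tuning assumption
beta1, beta2 = 2.0 * wo, wo ** 2   # standard bandwidth parameterization

x, z1, z2, u = 0.0, 0.0, 0.0, 0.0
for k in range(20000):
    t = k * dt
    f = 2.0 + math.sin(5.0 * t)        # the "unknown" disturbance to be estimated
    x += dt * (f + b0 * u)             # true plant, integrated by forward Euler
    e = x - z1                         # output estimation error
    z1 += dt * (z2 + b0 * u + beta1 * e)   # estimate of the state x
    z2 += dt * (beta2 * e)                 # extended state: disturbance estimate
    u = -z2 / b0                       # cancel the estimated disturbance

print(abs(z2 - f))   # the observer tracks the unknown disturbance closely
```

In the full ADRC scheme a feedback term on the state estimate is added on top of this cancellation, and in the paper an RC term then removes the residual periodic error.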

  18. Microbiological evaluation of sludge during an improvement process applying the washing technique (selective pressure)

    International Nuclear Information System (INIS)

    Molina P, Francisco; Gonzalez, Maria Elena; Gonzalez, Luz Catalina

    2001-01-01

    In this investigation, the microbial consortia were evaluated by using characterization by trophic groups and by groups related by their sensitivity to oxygen, as well as the specific methanogenic activity (SMA) of a sludge acclimated from an aerobic sludge coming from a residual water treatment plant. Later, the improvement-by-washing technique was applied to this sludge, yielding inoculum for the start-up of an anaerobic reactor of the UASB type (treatment reactor). At the same time, a control reactor was operated, inoculated with the acclimated sludge. Both reactors were operated for 120 days, using brown sugar as substrate; the experimental phase covered the first 70 days of operation, with the sludge characterized at the end of this period. The SMA was analysed using acetic and formic acids as substrates. The results showed activities between 0.45 and 1.39 g DQO-CH4/SSV-d for both substrates. At the end of the experimental phase of the UASB reactor, the sulphate-reducing bacteria from acetate and lactate were observed as the predominant group, followed by the methanogenic hydrogenophilic bacteria. It is important to notice that, with the application of the sludge washing technique, all the trophic groups increased, with the exception of the lactate fermentative bacteria

  19. Gamma-radiography techniques applied to quality control of welds in water pipe lines

    International Nuclear Information System (INIS)

    Sanchez, W.; Oki, H.

    1974-01-01

    Non-destructive testing of welds may be done by the gamma-radiography technique, in order to detect the presence or absence of discontinuities and defects in the bulk of the deposited metal and near the base metal. Gamma-radiography allows the documentation of the test with a complete inspection record, which is not common in other non-destructive testing methods. In the quality control of longitudinal or transversal welds in water pipe lines, two exposure techniques are used: double-wall and panoramic exposure. Three different water pipe line systems were analysed for weld defects, giving a total of 16,000 gamma-radiographies. The tests were made according to the criteria established by the ASME standard. The principal metallic discontinuities found in the welds were: porosity (32%), lack of penetration (29%), lack of fusion (20%), and slag inclusion (19%). The percentage of gamma-radiographies showing welds without defects was 39% (6168 gamma-radiographies). On the other hand, 53% (8502 gamma-radiographies) showed the presence of acceptable discontinuities and 8% (1330 gamma-radiographies) were rejected according to the ASME standards [pt

  20. Fragrance composition of Dendrophylax lindenii (Orchidaceae) using a novel technique applied in situ

    Directory of Open Access Journals (Sweden)

    James J. Sadler

    2012-02-01

    Full Text Available The ghost orchid, Dendrophylax lindenii (Lindley) Bentham ex Rolfe (Orchidaceae), is one of North America’s rarest and best-known orchids. Native to Cuba and SW Florida, where it frequents shaded swamps as an epiphyte, the species has experienced steady decline. Little information exists on D. lindenii’s biology in situ, raising conservation concerns. During the summer of 2009 at an undisclosed population in Collier County, FL, a substantial number (ca. 13) of plants initiated anthesis, offering a unique opportunity to study this species in situ. We report a new technique aimed at capturing the floral headspace of D. lindenii in situ, and identified volatile compounds using gas chromatography mass spectrometry (GC/MS). All components of the floral scent were identified as terpenoids with the exception of methyl salicylate. The most abundant compound was the sesquiterpene (E,E)-α-farnesene (71%), followed by (E)-β-ocimene (9%) and methyl salicylate (8%). Other compounds were: linalool (5%), sabinene (4%), (E)-α-bergamotene (2%), α-pinene (1%), and 3-carene (1%). Interestingly, (E,E)-α-farnesene has previously been associated with pestiferous insects (e.g., Hemiptera). The other compounds are common floral scent constituents in other angiosperms, suggesting that our in situ technique was effective. Volatile capture was, therefore, possible without imposing physical harm (e.g., inflorescence detachment) to this rare orchid.

  1. Applying machine learning techniques to the identification of late-onset hypogonadism in elderly men.

    Science.gov (United States)

    Lu, Ti; Hu, Ya-Han; Tsai, Chih-Fong; Liu, Shih-Ping; Chen, Pei-Ling

    2016-01-01

    In the diagnosis of late-onset hypogonadism (LOH), the Androgen Deficiency in the Aging Male (ADAM) questionnaire or Aging Males' Symptoms (AMS) scale can be used to assess related symptoms. Subsequently, blood tests are used to measure serum testosterone levels. However, results obtained using ADAM and AMS have revealed no significant correlations between ADAM and AMS scores and LOH, and the rate of misclassification is high. Recently, many studies have reported significant associations between clinical conditions such as the metabolic syndrome, obesity, lower urinary tract symptoms, and LOH. In this study, we sampled 772 clinical cases of men who completed both a health checkup and two questionnaires (ADAM and AMS). The data were obtained from the largest medical center in Taiwan. Two well-known classification techniques, the decision tree (DT) and logistic regression, were used to construct LOH prediction models on the basis of the aforementioned features. The results indicate that although the sensitivity of ADAM is the highest (0.878), it has the lowest specificity (0.099), which implies that ADAM overestimates LOH occurrence. In addition, DT combined with the AdaBoost technique (AdaBoost DT) has the second highest sensitivity (0.861) and specificity (0.842), resulting in the best accuracy (0.851) among all classifiers. AdaBoost DT can provide robust predictions that will aid clinical decisions and can help medical staff in accurately assessing the possibilities of LOH occurrence.
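
    The boosting idea behind AdaBoost DT can be sketched with a minimal decision-stump ensemble. The toy 1-D data, threshold search, and round count below are purely illustrative, not the clinical model from the study:

```python
import math

def train_stump(xs, ys, w):
    """Pick the threshold/polarity of a 1-D decision stump minimizing weighted error."""
    best = (float("inf"), 0.0, 1)
    for thr in sorted(set(xs)):
        for pol in (1, -1):
            err = sum(wi for wi, x, y in zip(w, xs, ys)
                      if (pol if x >= thr else -pol) != y)
            if err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(xs, ys, rounds=5):
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, thr, pol = train_stump(xs, ys, w)
        err = min(max(err, 1e-12), 1 - 1e-12)       # clamp to avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)      # stump weight
        ensemble.append((alpha, thr, pol))
        # up-weight misclassified samples, down-weight correct ones
        w = [wi * math.exp(-alpha * y * (pol if x >= thr else -pol))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (pol if x >= thr else -pol) for a, thr, pol in ensemble)
    return 1 if score >= 0 else -1
```

    In practice one would use a library implementation (e.g. scikit-learn's AdaBoostClassifier over decision trees) and report sensitivity/specificity from a held-out confusion matrix, as the authors do.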

  2. New tools, technology and techniques applied in geological sciences: current situation and future perspectives

    International Nuclear Information System (INIS)

    Ulloa, Andres

    2014-01-01

    Technological tools and the work methodologies most used in the area of geological sciences are reviewed and described. Various electronic devices, such as laptops, palmtops or PDAs (personal digital assistants), tablets and smartphones, have made it possible to take geological field data and store them efficiently. Tablets and smartphones are convenient for the collection of scientific data because of the diversity of sensors they carry, their portability and autonomy, and the possibility of installing specific applications. High-precision GPS, in conjunction with LIDAR and sonar technology, has become more accessible and is used for geological research, generating high-resolution three-dimensional models that complement geological studies. Remote sensing techniques such as high-penetration radar are used to model ice thickness and topography in Antarctica. Modern three-dimensional scanning and printing techniques are used in geological science research and teaching. Currently, advances in computer technology allow three-dimensional models to be handled efficiently on personal computers and with different display options. Some of the new areas of geology that have emerged recently are mentioned, to give a broad panorama of the directions geological research may take in the coming years [es

  3. The modification of generalized uncertainty principle applied in the detection technique of femtosecond laser

    Science.gov (United States)

    Li, Ziyi

    2017-12-01

    Generalized uncertainty principle (GUP), also known as the generalized uncertainty relationship, is the modified form of the classical Heisenberg’s Uncertainty Principle in special cases. When we apply quantum gravity theories such as string theory, the theoretical results suggest that there should be a “minimum length of observation”, which is about the size of the Planck scale (10^-35 m). Taking this basic scale of existence into account, we need to fix a new common form of Heisenberg’s uncertainty principle in the thermodynamic system and make effective corrections to statistical physics questions concerning the quantum density of states. Especially at high temperatures and high energy levels, generalized uncertainty calculations have a disruptive impact on classical statistical physics theories, but the present theory of the femtosecond laser is still established on the classical Heisenberg’s Uncertainty Principle. In order to improve the detection accuracy and temporal resolution of the femtosecond laser, we applied the modified form of the generalized uncertainty principle to the wavelength, energy and pulse time of the femtosecond laser in our work. We designed three typical systems, from micro to macro size, to estimate the feasibility of our theoretical model and method, respectively in a chemical solution, a crystal lattice and a nuclear fission reactor.
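
    For reference, a commonly used quadratic form of the GUP (one of several variants in the literature; the abstract does not state which form the paper adopts, and β is a model-dependent deformation parameter) modifies the Heisenberg relation as

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\left[\,1 + \beta\,(\Delta p)^2\,\right],
\qquad
\Delta x_{\min} \;=\; \hbar\sqrt{\beta}\;\sim\;\ell_{\mathrm{Planck}}
```

    Minimizing the right-hand side over Δp yields the minimum observable length quoted above.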

  4. Low-dimensional and Data Fusion Techniques Applied to a Rectangular Supersonic Multi-stream Jet

    Science.gov (United States)

    Berry, Matthew; Stack, Cory; Magstadt, Andrew; Ali, Mohd; Gaitonde, Datta; Glauser, Mark

    2017-11-01

    Low-dimensional models of experimental and simulation data for a complex supersonic jet were fused to reconstruct time-dependent proper orthogonal decomposition (POD) coefficients. The jet consists of a multi-stream rectangular single expansion ramp nozzle, containing a core stream operating at M_j,1 = 1.6 and a bypass stream at M_j,3 = 1.0 with an underlying deck. POD was applied to schlieren and PIV data to acquire the spatial basis functions. These eigenfunctions were projected onto their corresponding time-dependent large eddy simulation (LES) fields to reconstruct the temporal POD coefficients. This reconstruction was able to resolve spectral peaks that were previously aliased due to the slower sampling rates of the experiments. Additionally, dynamic mode decomposition (DMD) was applied to the experimental and LES datasets, and the spatio-temporal characteristics were compared to POD. The authors would like to acknowledge AFOSR, program manager Dr. Doug Smith, for funding this research, Grant No. FA9550-15-1-0435.
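
    The projection step described above — experimentally derived POD modes projected onto simulation snapshots to recover time coefficients — reduces to inner products once the spatial modes are orthonormal. A minimal sketch with made-up toy fields, not the jet data:

```python
def pod_coefficients(snapshots, modes):
    """a_k(t) = <u(t), phi_k>: project each snapshot onto each orthonormal spatial mode."""
    return [[sum(u * phi for u, phi in zip(snap, mode)) for mode in modes]
            for snap in snapshots]
```

    For example, the snapshot [3, 4] projected onto the orthonormal modes [1, 0] and [0, 1] gives the coefficients [3, 4]; with LES fields sampled faster than the experiment, the resulting coefficient time series carries the higher temporal bandwidth.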

  5. A Systematic Approach to Applying Lean Techniques to Optimize an Office Process at the Y-12 National Security Complex

    Energy Technology Data Exchange (ETDEWEB)

    Credille, Jennifer [Y-12 National Security Complex, Oak Ridge, TN (United States); Univ. of Tennessee, Knoxville, TN (United States); Owens, Elizabeth [Y-12 National Security Complex, Oak Ridge, TN (United States); Univ. of Tennessee, Knoxville, TN (United States)

    2017-10-11

    This capstone offers an introduction of Lean concepts to an office activity to demonstrate the versatility of Lean. Traditionally, Lean has been associated with process improvements applied in an industrial atmosphere. However, this paper demonstrates that implementing Lean concepts within an office activity can result in significant process improvements. Lean first emerged with the conception of the Toyota Production System. This innovative concept was designed to improve productivity in the automotive industry by eliminating waste and variation. Lean has also been applied to office environments; however, the limited literature reveals that most Lean applications within an office are restricted to one or two techniques. Our capstone confronts these restrictions by introducing a systematic approach that utilizes multiple Lean concepts. The approach incorporates system analysis, system reliability, system requirements, and system feasibility. The methodical Lean outline provides tools for a successful outcome, ensuring the process is thoroughly dissected, and can be applied to any process in any work environment.

  6. Mass Movement Hazards in the Mediterranean; A review on applied techniques and methodologies

    Science.gov (United States)

    Ziade, R.; Abdallah, C.; Baghdadi, N.

    2012-04-01

    Emergent populations and the expansion of settlements and life-lines over hazardous areas in the Mediterranean region have largely increased the impact of Mass Movements (MM) both in industrialized and developing countries. This trend is expected to continue in the next decades due to increased urbanization and development, continued deforestation and increased regional precipitation in MM-prone areas due to changing climatic patterns. Consequently, over the past few years, monitoring of MM has acquired great importance for the scientific community as well as the civilian one. This article begins with a discussion of MM classification and the different topographic, geologic, hydrologic and environmental impacting factors. The intrinsic (preconditioning) variables determine the susceptibility to MM, while extrinsic (triggering) factors can induce the probability of MM occurrence. The evolution of slope instability studies is charted from geodetic or observational techniques, through geotechnical field-based origins, to recent higher levels of data acquisition through Remote Sensing (RS) and Geographic Information System (GIS) techniques. Since MM detection and zoning is difficult in remote areas, RS and GIS have enabled regional studies to predominate over site-based ones, as they provide multi-temporal images and hence greatly facilitate MM monitoring. The unusual extent of the spectrum of MM makes it difficult to define a single methodology to establish MM hazard. Since the probability of occurrence of MM is one of the key components in making rational decisions for management of MM risk, scientists and engineers have developed physical parameters, equations and environmental process models that can be used as assessment tools for management, education, planning and legislative purposes. Assessment of MM is attained through various modeling approaches mainly divided into three main sections: quantitative/Heuristic (1:2.000-1:10.000), semi-quantitative/Statistical (1

  7. A comparison of new, old and future densiometric techniques as applied to volcanologic study.

    Science.gov (United States)

    Pankhurst, Matthew; Moreland, William; Dobson, Kate; Þórðarson, Þorvaldur; Fitton, Godfrey; Lee, Peter

    2015-04-01

    The density of any material imposes a primary control upon its potential or actual physical behaviour in relation to its surrounds. It follows that a thorough understanding of the physical behaviour of dynamic, multi-component systems, such as active volcanoes, requires knowledge of the density of each component. If we are to accurately predict the physical behaviour of synthesized or natural volcanic systems, quantitative densiometric measurements are vital. The theoretical density of melt, crystal and bubble phases may be calculated using composition, structure, temperature and pressure inputs. However, measuring the density of natural, non-ideal, poly-phase materials remains problematic, especially if phase-specific measurement is important. Here we compare three methods: Archimedes' principle, He-displacement pycnometry and X-ray micro computed tomography (XMT), and discuss the utility and drawbacks of each in the context of modern volcanologic study. We have measured tephra, ash and lava from the 934 AD Eldgjá eruption (Iceland) and the 2010 AD Eyjafjallajökull eruption (Iceland) using each technique. These samples exhibit a range of particle sizes, phases and textures. We find that while the Archimedes method remains a useful, low-cost technique to generate whole-rock density data, relative precision is problematic at small particle sizes. Pycnometry offers a more precise whole-rock density value, at a comparable cost per sample. However, this technique is based upon the assumption that pore spaces within the sample are equally available for gas exchange, which may or may not be the case. XMT produces 3D images, at resolutions from nm to tens of µm per voxel, where X-ray attenuation is a qualitative measure of relative electron density, expressed as greyscale number/brightness (usually 16-bit). Phases and individual particles can be digitally segmented according to their greyscale and other characteristics. This represents a distinct advantage over both
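
    The Archimedes method above amounts to comparing a sample's weight in air with its apparent weight when submerged; the buoyant mass deficit gives the displaced volume. A sketch (the fluid density default and the example masses are illustrative):

```python
def archimedes_density(mass_air_g, mass_submerged_g, rho_fluid=1.000):
    """Bulk density via Archimedes' principle: buoyancy equals the displaced fluid's weight."""
    volume_cm3 = (mass_air_g - mass_submerged_g) / rho_fluid  # displaced volume
    return mass_air_g / volume_cm3                            # g/cm^3
```

    A sample weighing 2.0 g in air and 1.0 g in water displaces 1.0 cm^3, giving 2.0 g/cm^3; for small particles the mass difference shrinks toward the balance's precision, which is the limitation the authors note.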

  8. An Encoding Technique for Multiobjective Evolutionary Algorithms Applied to Power Distribution System Reconfiguration

    Directory of Open Access Journals (Sweden)

    J. L. Guardado

    2014-01-01

    Full Text Available Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.

  9. An encoding technique for multiobjective evolutionary algorithms applied to power distribution system reconfiguration.

    Science.gov (United States)

    Guardado, J L; Rivas-Davalos, F; Torres, J; Maximov, S; Melgoza, E

    2014-01-01

    Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.
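
    Whatever encoding is used, a multiobjective EA such as SPEA2 or NSGA-II ultimately ranks candidate reconfigurations by Pareto dominance. A minimal sketch of that core test; the objective tuples are invented for illustration (e.g. power losses and switching operations, both minimized):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated objective vectors."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]
```

    The EA's encoding scheme matters because it determines which candidate networks (radial, connected topologies) the dominance comparison is ever applied to.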

  10. Complementary analysis techniques applied on optimizing suspensions of yttria stabilized zirconia

    DEFF Research Database (Denmark)

    Della Negra, Michela; Foghmoes, Søren Preben Vagn; Klemensø, Trine

    2016-01-01

    Three different polymers with different functional groups and similar molecular weight were tested as dispersing agents for suspensions of yttria stabilized zirconia in ethanol: polyvinyl pyrrolidone, polyethylene imine, polyvinyl butyral/acetal. The stability of the system was assessed consideri...... excellent performance of polyvinyl pyrrolidone and polyethylene imine as dispersing agents. The stability and dispersing power were finally utilized for preparing concentrated suspensions for tape casting and subsequently to sinter the tapes into dense ceramic pieces......., in details, all the processing steps, including suspension de-agglomeration, slurry manipulation, quality of sintered tapes microstructure, and final layer leak tightness. Different analytical techniques were used to monitor ceramic de-agglomeration and stability as a function of time, for different types...

  11. Automatic diameter control system applied to the laser heated pedestal growth technique

    Directory of Open Access Journals (Sweden)

    Andreeta M.R.B.

    2003-01-01

    Full Text Available We described an automatic diameter control system (ADC) for the laser heated pedestal growth technique that reduces the diameter fluctuations in oxide fibers grown from unreacted and non-sinterized pedestals to less than 2% of the average fiber diameter, and diminishes the average diameter fluctuation, over the entire length of the fiber, to less than 1%. The ADC apparatus is based on an artificial vision system that controls the pulling speed and the height of the molten zone within a precision of 30 μm. We also show that this system can be used for periodic in situ axial doping of the fiber. Pure and Cr3+ doped LaAlO3 and pure LiNbO3 were used as model materials.
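
    In pedestal growth, mass conservation couples the fiber diameter to the ratio of feed and pull speeds, so a diameter-control loop of this kind typically nudges the pulling speed against the measured diameter error. A proportional-control sketch; the gain, setpoint and update form are assumptions for illustration, not the authors' controller:

```python
def update_pull_speed(v_pull, d_measured, d_target, gain=0.5):
    """If the fiber grows too thick, pull faster; if too thin, pull slower."""
    error = (d_measured - d_target) / d_target   # relative diameter error
    return v_pull * (1.0 + gain * error)
```

    With a 10% oversized diameter reading and gain 0.5, a 10 mm/h pull speed is raised to 10.5 mm/h; the vision system closes this loop at each measurement.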

  12. Synchrotron radiation X-ray powder diffraction techniques applied in hydrogen storage materials - A review

    Directory of Open Access Journals (Sweden)

    Honghui Cheng

    2017-02-01

    Full Text Available Synchrotron radiation is an advanced collimated light source with high intensity. It has particular advantages in the structural characterization of materials on the atomic or molecular scale. Synchrotron radiation X-ray powder diffraction (SR-XRPD) has been successfully applied in various areas of hydrogen storage materials. In this paper, we give a brief introduction to hydrogen storage materials, X-ray powder diffraction (XRPD), and the synchrotron radiation light source. The applications of ex situ and in situ time-resolved SR-XRPD to hydrogen storage materials are reviewed in detail. Future trends and proposals for the application of advanced XRPD techniques to hydrogen storage materials are also discussed.
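
    At the heart of any XRPD measurement, synchrotron or lab-based, is Bragg's law relating peak position to lattice spacing. A sketch converting a diffraction peak position to a d-spacing (the numeric example is illustrative):

```python
import math

def d_spacing(two_theta_deg, wavelength_angstrom):
    """Bragg's law: lambda = 2 d sin(theta), where theta is half the scattering angle."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_angstrom / (2.0 * math.sin(theta))
```

    In time-resolved in situ work, tracking how such d-spacings shift and how peaks appear or vanish is what reveals hydride phase formation and decomposition.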

  13. Applying machine learning techniques for forecasting flexibility of virtual power plants

    DEFF Research Database (Denmark)

    MacDougall, Pamela; Kosek, Anna Magdalena; Bindner, Henrik W.

    2016-01-01

    Previous and existing evaluations of available flexibility using small device demand response have typically been done with detailed information of end-user systems. With these large numbers, having lower level information has both privacy and computational limitations. We propose a black box...... hidden layer artificial neural network (ANN). Both techniques are used to model a relationship between the aggregator portfolio state and requested ramp power to the longevity of the delivered flexibility. Using validated individual household models, a smart controlled aggregated virtual power plant...... is simulated. A hierarchical market-based supply-demand matching control mechanism is used to steer the heating devices in the virtual power plant. For both the training and validation set of clusters, a random number of households, between 200 and 2000, is generated with day ahead profile scaled accordingly...

  14. Models of signal validation using artificial intelligence techniques applied to a nuclear reactor

    International Nuclear Information System (INIS)

    Oliveira, Mauro V.; Schirru, Roberto

    2000-01-01

    This work presents two models of signal validation in which the analytical redundancy of the monitored signals from a nuclear plant is provided by neural networks. In one model the analytical redundancy is provided by a single neural network, while in the other it is provided by several neural networks, each one working in a specific part of the entire operating region of the plant. Four clustering techniques were tested to separate the entire operating region into several specific regions. Additional information on the signals' reliability is supplied by a fuzzy inference system. The models were implemented in the C language and tested with signals acquired from the Angra I nuclear power plant, from start-up to 100% of power. (author)
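
    The core of analytical redundancy is comparing each measured signal with its network-predicted estimate and judging the residual. A simplified residual check with a crude fuzzy-style confidence degree; the tolerance handling below is an assumption for illustration, not the paper's fuzzy inference system:

```python
def validate_signal(measured, predicted, tolerance):
    """Flag a signal as faulty when the analytical-redundancy residual exceeds tolerance."""
    residual = abs(measured - predicted)
    status = "valid" if residual <= tolerance else "faulty"
    # crude fuzzy-style confidence: 1 at zero residual, falling to 0 at twice the tolerance
    confidence = max(0.0, 1.0 - residual / (2.0 * tolerance))
    return status, confidence
```

    Splitting the operating region among several networks, as in the second model, simply means the predicted value comes from whichever network covers the plant's current power level.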

  15. Polymer Aging Techniques Applied to Degradation of a Polyurethane Propellant Binder

    Energy Technology Data Exchange (ETDEWEB)

    Assink, R.A.; Celina, M.; Graham, A.C.; Minier, L.M.

    1999-07-27

    The oxidative thermal aging of a crosslinked hydroxy-terminated polybutadiene (HTPB)/isophorone diisocyanate (IPDI) polyurethane rubber, commonly used as the polymeric binder matrix in solid rocket propellants, was studied at temperatures from RT to 125 C. We investigated changes in tensile elongation, mechanical hardening, polymer network properties, density, O{sub 2} permeation and molecular chain dynamics using a range of techniques including solvent swelling, detailed modulus profiling and NMR relaxation measurements. Using extensive data superposition and highly sensitive oxygen consumption measurements, we critically evaluate the Arrhenius methodology, which normally assumes a linear extrapolation of high temperature aging data. Significant curvature in the Arrhenius diagram of these oxidation rates was observed, similar to previous results found for other rubber materials. Preliminary gel/network properties suggest that crosslinking is the dominant process at higher temperatures. We also assess the importance of other constituents such as ammonium perchlorate or aluminum powder in the propellant formulation.
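
    The Arrhenius methodology the authors critically evaluate amounts to fitting ln k against 1/T and extrapolating the straight line to service temperature. A least-squares sketch; the synthetic activation energy and rates below are invented for illustration, and the paper's point is precisely that real oxidation data can curve away from this line at low temperature:

```python
import math

R_GAS = 8.314  # J/(mol K)

def fit_arrhenius(temps_c, rates):
    """Least-squares fit of ln k = ln A - Ea/(R T); returns (Ea in J/mol, ln A)."""
    xs = [1.0 / (t + 273.15) for t in temps_c]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
             / sum((x - xm) ** 2 for x in xs))
    return -slope * R_GAS, ym - slope * xm

def rate_at(ea, ln_a, temp_c):
    """Extrapolate the fitted Arrhenius line to another temperature."""
    return math.exp(ln_a - ea / (R_GAS * (temp_c + 273.15)))
```

    Oxygen-consumption measurements sensitive enough to reach near-ambient temperatures let one test whether the extrapolated `rate_at` prediction actually holds, which is how the curvature was exposed.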

  16. Hysteresis compensation technique applied to polymer optical fiber curvature sensor for lower limb exoskeletons

    Science.gov (United States)

    Gomes Leal-Junior, Arnaldo; Frizera-Neto, Anselmo; José Pontes, Maria; Rodrigues Botelho, Thomaz

    2017-12-01

    Polymer optical fiber (POF) curvature sensors present some advantages over conventional techniques for angle measurement, such as their light weight, compactness and immunity to electromagnetic fields. However, high hysteresis can occur in POF curvature sensors due to the polymer's viscoelastic response. In order to overcome this limitation, this paper shows how the sensor hysteresis can be compensated by a calibration equation relating the measured output signal to the sensor's angular velocity. The proposed method is validated using an exoskeleton with an active joint on the knee for flexion and extension rehabilitation exercises. The results show a decrease in sensor hysteresis and a reduction by more than a factor of two in the error between the POF sensor and the potentiometer employed for the angle measurement of the exoskeleton knee joint.
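
    The compensation idea — correcting each measured angle with a term driven by the angular velocity — can be sketched as below. The linear form and coefficient are assumptions for illustration; the paper fits its own calibration equation:

```python
def compensate_hysteresis(angles_deg, dt_s, k):
    """Subtract a velocity-proportional correction from each angle sample."""
    corrected = [angles_deg[0]]
    for prev, cur in zip(angles_deg, angles_deg[1:]):
        omega = (cur - prev) / dt_s          # deg/s, backward finite difference
        corrected.append(cur - k * omega)
    return corrected
```

    Because viscoelastic hysteresis is rate-dependent, the correction vanishes when the joint is still and grows with flexion/extension speed, which is why a velocity term can collapse the loading and unloading branches toward each other.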

  17. Emerging and Innovative Techniques for Arsenic Removal Applied to a Small Water Supply System

    Directory of Open Access Journals (Sweden)

    António J. Alçada

    2009-12-01

    Full Text Available The impact of arsenic on human health has led its drinking water MCL to be drastically reduced from 50 to 10 ppb. Consequently, arsenic levels in many water supply sources have become critical. This has resulted in technical and operational impacts on many drinking water treatment plants, which have required onerous upgrading to meet the new standard. This becomes a very sensitive issue in the context of water scarcity and climate change, given the expected increasing demand on groundwater sources. This work presents a case study that describes the development of low-cost techniques for efficient arsenic control in drinking water. The results obtained at the Manteigas WTP (Portugal) demonstrate the successful implementation of an effective and flexible process of reactive filtration using iron oxide. At real scale, very high removal efficiencies of over 95% were obtained.

  18. Multiple criteria decision making techniques applied to electricity distribution system planning

    Energy Technology Data Exchange (ETDEWEB)

    Espie, P.; Ault, G.W.; Burt, G.M.; McDonald, J.R. [University of Strathclyde (United Kingdom). Inst. for Energy and the Environment

    2003-09-01

    An approach is described for electricity distribution system planning that allows issues such as load growth, distributed generation, asset management, quality of supply and environmental issues to be considered. In contrast to traditional optimisation approaches, which typically assess alternative planning solutions by finding the solution with the minimum total cost, the proposed methodology utilises a number of discrete evaluation criteria within a multiple criteria decision making (MCDM) environment to examine and assess the trade-offs between alternative solutions. To demonstrate the proposed methodology, a worked example is performed on a test distribution network that forms part of an existing distribution network in one UK distribution company area. The results confirm the suitability of MCDM techniques for the distribution planning problem and highlight how evaluating all planning problems simultaneously can provide substantial benefits to a distribution company. (Author)
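
    One minimal MCDM flavor of the idea — min-max normalize each criterion, then rank alternatives by weighted score — can be sketched as follows. The criteria values and weights are invented for illustration, and real MCDM methods (outranking, AHP, etc.) are considerably more involved than this weighted sum:

```python
def rank_alternatives(alternatives, weights):
    """Rank planning alternatives by weighted, min-max normalized score (lower = better)."""
    names = list(alternatives)
    columns = list(zip(*alternatives.values()))
    lows = [min(c) for c in columns]
    highs = [max(c) for c in columns]

    def score(values):
        normed = [(v - lo) / (hi - lo) if hi > lo else 0.0
                  for v, lo, hi in zip(values, lows, highs)]
        return sum(w * n for w, n in zip(weights, normed))

    return sorted(names, key=lambda name: score(alternatives[name]))
```

    Normalization matters because criteria such as cost, losses and environmental impact live on incomparable scales; the weights then make the planner's trade-off preferences explicit rather than buried in a single cost figure.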

  19. Applying inversion techniques to derive source currents and geoelectric fields for geomagnetically induced current calculations

    Directory of Open Access Journals (Sweden)

    J. S. de Villiers

    2014-10-01

    Full Text Available This research focuses on the inversion of geomagnetic variation field measurements to obtain source currents in the ionosphere. During a geomagnetic disturbance, the ionospheric currents create magnetic field variations that induce geoelectric fields, which drive geomagnetically induced currents (GIC) in power systems. These GIC may disturb the operation of power systems and cause damage to grounded power transformers. The geoelectric fields at any location of interest can be determined from the source currents in the ionosphere through a solution of the forward problem. Line currents running east–west along a given surface position are postulated to exist at a certain height above the Earth's surface. This physical arrangement results in the fields on the ground having magnetic north and down components, and an electric east component. Ionospheric currents are modelled by inverting Fourier integrals (over the wavenumber) of elementary geomagnetic fields using the Levenberg–Marquardt technique. The output parameters of the inversion model are the current strength, height and surface position of the ionospheric current system. A ground conductivity structure with five layers from Quebec, Canada, based on the Layered-Earth model, is used to obtain the complex skin depth at a given angular frequency. This paper presents preliminary inversion results based on these structures and simulated geomagnetic fields. The results show some interesting features in the frequency domain. Model parameters obtained through inversion are within 2% of the simulated values. This technique has applications for modelling the currents of electrojets at the equator and auroral regions, as well as currents in the magnetosphere.
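
    The Levenberg–Marquardt step — damped normal equations built from a Jacobian of the forward model — can be sketched on a toy version of this inversion: recovering the strength and height of a single infinite line current from ground-level field magnitudes. The forward model, geometry and starting guess below are invented for illustration; the paper's elementary-field formulation over a layered Earth is more involved:

```python
import math

MU0_OVER_2PI = 2e-7  # T*m/A

def forward(params, offsets_m):
    """|B| on the ground from an infinite line current at height h: mu0 I / (2 pi r)."""
    current, height = params
    return [MU0_OVER_2PI * current / math.hypot(x, height) for x in offsets_m]

def lm_fit(offsets_m, observed, p0, iters=60, lam=1e-3):
    """Minimal Levenberg-Marquardt for the 2-parameter line-current model."""
    p = list(p0)

    def residuals(q):
        return [f - o for f, o in zip(forward(q, offsets_m), observed)]

    def cost(q):
        return sum(r * r for r in residuals(q))

    for _ in range(iters):
        r = residuals(p)
        f0 = forward(p, offsets_m)
        jac = []                                   # jac[j][k] = d f_k / d p_j
        for j in range(2):
            q = list(p)
            step = 1e-6 * max(abs(p[j]), 1.0)
            q[j] += step
            jac.append([(a - b) / step for a, b in zip(forward(q, offsets_m), f0)])
        # damped normal equations: (J^T J + lam * diag(J^T J)) delta = -J^T r
        a00 = sum(v * v for v in jac[0]) * (1.0 + lam)
        a11 = sum(v * v for v in jac[1]) * (1.0 + lam)
        a01 = sum(u * v for u, v in zip(jac[0], jac[1]))
        g0 = sum(u * v for u, v in zip(jac[0], r))
        g1 = sum(u * v for u, v in zip(jac[1], r))
        det = a00 * a11 - a01 * a01
        if det == 0.0:
            break
        d0 = (-g0 * a11 + g1 * a01) / det          # Cramer's rule, 2x2 system
        d1 = (-a00 * g1 + a01 * g0) / det
        trial = [p[0] + d0, p[1] + d1]
        if cost(trial) < cost(p):
            p, lam = trial, lam * 0.5              # accept step: relax damping
        else:
            lam *= 2.0                             # reject step: damp harder
    return p
```

    With noiseless synthetic data the fit recovers the true current and height; in the paper the same damped-update machinery is applied to elementary-field integrals, with the real measurements providing the residuals.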

  20. Applied anatomy of a new approach of endoscopic technique in thyroid gland surgery.

    Science.gov (United States)

    Liu, Hong; Xie, Yong-jun; Xu, Yi-quan; Li, Chao; Liu, Xing-guo

    2012-10-01

    To explore the feasibility and safety of the transtracheal assisted sublingual approach to totally endoscopic thyroidectomy by studying the anatomical approach and adjacent structures. A total of 5 embalmed adult cadavers from Chengdu Medical College were dissected layer by layer in the cervical region, pharyngeal region, and mandible region, according to the transtracheal assisted sublingual approach that was verified from the anatomical approach and planes. A total of 15 embalmed adult cadavers were dissected by arterial vascular casting technique, imaging scanning technique, and thin layer cryotomy. Then the vessels and anatomical structures of the thyroid surgical region were analyzed qualitatively and quantitatively. Three-dimensional visualization of the laryngeal arteries was reconstructed with Autodesk 3ds Max 2010(32). The transtracheal assisted sublingual approach to totally endoscopic thyroidectomy was simulated on 5 embalmed adult cadavers. The sublingual observation access was located in the middle of the sublingual region. The geniohyoid muscle, mylohyoid seam, and submental triangle were divided in turn in the middle to reach the plane under the platysma muscles. The superficial cervical fascia, anterior body of the hyoid bone, and infrahyoid muscles were passed in sequence to reach the thyroid gland surgical region. The transtracheal operational access was placed from the cavitas oris propria, isthmus faucium, subepiglottic region, laryngeal pharynx, and intermediate laryngeal cavity, and then passed from the top down in order to reach the pars cervicalis tracheae, where a sagittal incision was made in the anterior wall of the cartilagines tracheales to reach the ascertained surgical region. The transtracheal assisted sublingual approach to totally endoscopic thyroidectomy is anatomically feasible and safe and can be useful in thyroid gland surgery.

  1. Applying the sterile insect technique to the control of insect pests

    International Nuclear Information System (INIS)

    LaChance, L.E.; Klassen, W.

    1991-01-01

    The sterile insect technique (SIT) is basically a novel twentieth century approach to insect birth control. It is species specific and exploits the mate seeking behaviour of the insect. The basic principle is simple. Insects are mass reared in 'factories' and sexually sterilized by gamma rays from a 60 Co source. The sterile insects are then released in a controlled fashion into nature. Matings between the released sterile insects and native insects produce no progeny. If enough of these matings take place, reproduction of the pest population decreases. With continued release, the pest population can be controlled and in some cases eradicated. In the light of the many important applications of the SIT worldwide and the great potential that SIT concepts hold for insect and pest control in developing countries, two special benefits should be stressed. Of greatest significance is the fact that the SIT permits suppression and eradication of insect pests in an environmentally harmless manner. It combines nuclear techniques with genetic approaches and, in effect, replaces intensive use of chemicals in pest control. Although chemicals are used sparingly at the outset in some SIT programmes to reduce the size of the pest population before releases of sterilized insects are started, the total amount of chemicals used in an SIT programme is a mere fraction of what would be used without the SIT. It is also of great importance that the SIT is not designed strictly for the eradication of pest species but can readily be used in the suppression of insect populations. In fact, the SIT is ideally suited for use in conjunction with other agricultural pest control practices such as the use of parasites and predators, attractants and cultural controls (e.g. ploughing under or destruction of crop residues) in integrated pest management programmes to achieve control at the lowest possible price and with a minimum of chemical contamination of the environment

  2. Study on structural design technique of silicon carbide applied for thermochemical hydrogen production IS process

    International Nuclear Information System (INIS)

    Takegami, Hiroaki; Terada, Atsuhiko; Inagaki, Yoshiyuki; Ishikura, Syuichi

    2011-03-01

    The IS process is a hydrogen production method that uses a thermochemical reaction cycle of sulfuric acid and iodine. The equipment must therefore be designed to endure a high-temperature and highly corrosive environment. Specifically, the sulfuric acid decomposer, one of the main components of the IS process, is the vessel in which 90 wt% sulfuric acid is heated with hot helium and evaporated. It is also the important piece of equipment that feeds the SO 3 decomposer, the following process step, in which part of the sulfuric acid vapor is decomposed into SO 3 . The heat exchanger in which the sulfuric acid evaporates must be of pressure-resistant construction, because it contains high-pressure helium at 4 MPa, and it must be made of a material that can withstand the high-temperature (700 degC or more) corrosive environment. On the basis of corrosion experiments and related tests, SiC (silicon carbide ceramics) was selected as the most suitable material. Because the ceramic block that forms the heat exchanger is stored in a pressure-resistant metallic container, fluids such as sulfuric acid cannot leak outside even if the block is damaged. However, no unified structural design technique for using ceramics as structural components is available as a standard. This report studies a structural design technique that takes the material strength characteristics of ceramics into consideration, with reference to existing structural design standards. (author)

  3. Discrimination and classification techniques applied on Mallotus and Phyllanthus high performance liquid chromatography fingerprints.

    Science.gov (United States)

    Viaene, J; Goodarzi, M; Dejaegher, B; Tistaert, C; Hoang Le Tuan, A; Nguyen Hoai, N; Chau Van, M; Quetin-Leclercq, J; Vander Heyden, Y

    2015-06-02

    Mallotus and Phyllanthus genera, both containing several species commonly used as traditional medicines around the world, are the subjects of this discrimination and classification study. The objective of this study was to compare different discrimination and classification techniques to distinguish the two genera (Mallotus and Phyllanthus) on the one hand, and the six species (Mallotus apelta, Mallotus paniculatus, Phyllanthus emblica, Phyllanthus reticulatus, Phyllanthus urinaria L. and Phyllanthus amarus), on the other. Fingerprints of 36 samples from the 6 species were developed using reversed-phase high-performance liquid chromatography with ultraviolet detection (RP-HPLC-UV). After fingerprint data pretreatment, first an exploratory data analysis was performed using Principal Component Analysis (PCA), revealing two outlying samples, which were excluded from the calibration set used to develop the discrimination and classification models. Models were built by means of Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Classification and Regression Trees (CART) and Soft Independent Modeling of Class Analogy (SIMCA). Application of the models on the total data set (outliers included) confirmed a possible labeling issue for the outliers. LDA, QDA and CART, independently of the pretreatment, or SIMCA after "normalization and column centering (N_CC)" or after "Standard Normal Variate transformation and column centering (SNV_CC)" were found best to discriminate the two genera, while LDA after column centering (CC), N_CC or SNV_CC; QDA after SNV_CC; and SIMCA after N_CC or after SNV_CC best distinguished between the 6 species. As classification technique, SIMCA after N_CC or after SNV_CC results in the best overall sensitivity and specificity. Copyright © 2015 Elsevier B.V. All rights reserved.
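
The exploratory PCA step described above can be sketched in a few lines of NumPy. The fingerprint matrix below is synthetic and purely illustrative (36 samples by 100 retention-time points, with a crude two-group offset); it is not the authors' HPLC data, and the column-centering pretreatment is the only one shown.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project rows of X onto the top principal components.

    X: (samples x variables) matrix, e.g. HPLC fingerprints.
    Returns (scores, explained_variance_ratio).
    """
    Xc = X - X.mean(axis=0)                # column centering
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T      # sample coordinates in PC space
    var_ratio = (s**2 / np.sum(s**2))[:n_components]
    return scores, var_ratio

# Synthetic "fingerprints": 36 samples x 100 time points (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(36, 100))
X[:18] += 2.0                              # crude two-genus separation
scores, ratio = pca_scores(X)
```

A score plot of the first two columns of `scores` is the usual way outlying samples, such as the two excluded from the calibration set, would be spotted.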

  4. Flux-corrected transport techniques applied to the radiation transport equation discretized with continuous finite elements

    Science.gov (United States)

    Hansel, Joshua E.; Ragusa, Jean C.

    2018-02-01

    The Flux-Corrected Transport (FCT) algorithm is applied to the unsteady and steady-state particle transport equation. The proposed FCT method employs the following: (1) a low-order, positivity-preserving scheme, based on the application of M-matrix properties, (2) a high-order scheme based on the entropy viscosity method introduced by Guermond [1], and (3) local, discrete solution bounds derived from the integral transport equation. The resulting scheme is second-order accurate in space, enforces an entropy inequality, mitigates the formation of spurious oscillations, and guarantees the absence of negativities. Space discretization is achieved using continuous finite elements. Time discretizations for unsteady problems include theta schemes such as explicit and implicit Euler, and strong-stability preserving Runge-Kutta (SSPRK) methods. The developed FCT scheme is shown to be robust with explicit time discretizations but may require damping in the nonlinear iterations for steady-state and implicit time discretizations.

  5. Possibility of choosing development investment programs of a production company by applying discounted investment appraisal technique

    Directory of Open Access Journals (Sweden)

    Vesić-Vasović Jasmina

    2014-01-01

    The selection of development investment programs is one of the most important decisions in industrial production. The paper sets out the possibilities of applying dynamic criteria to investment decision making. It presents a practical numerical example of the calculation of the Net Present Value and Internal Rate of Return investment criteria for the reviewed investment project solutions. In this manner it is possible to build an ordered set of alternatives with a clear preference for the most suitable alternative in comparison with the others. Such a rating of project solutions enables the decision maker to argue the advantages more convincingly and to select the most suitable project solution in accordance with the established criteria, conditions and limitations.
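
The two dynamic criteria named above, Net Present Value and Internal Rate of Return, can be computed as follows. The cash flows are a hypothetical project, not the paper's data; IRR is found here by bisection, which assumes a single sign change in the cash-flow series.

```python
def npv(rate, cash_flows):
    """Net Present Value of cash flows, first element at t=0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-12):
    """Internal Rate of Return via bisection (assumes one sign change)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid          # root lies in [lo, mid]
        else:
            lo = mid          # root lies in [mid, hi]
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical project: invest 1000, then receive 400 for three years
flows = [-1000.0, 400.0, 400.0, 400.0]
project_npv = npv(0.05, flows)   # NPV at a 5% discount rate
project_irr = irr(flows)         # rate at which NPV is zero
```

A project is preferred when its NPV is positive at the chosen discount rate, i.e. when the IRR exceeds that rate.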

  6. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    Science.gov (United States)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  7. Linear and Non-Linear Control Techniques Applied to Actively Lubricated Journal Bearings

    DEFF Research Database (Denmark)

    Nicoletti, Rodrigo; Santos, Ilmar

    2003-01-01

    The main objectives of actively lubricated bearings are the simultaneous reduction of wear and vibration between rotating and stationary machinery parts. For reducing wear and dissipating vibration energy until certain limits, one can count with the conventional hydrodynamic lubrication....... For further reduction of shaft vibrations one can count with the active lubrication action, which is based on injecting pressurised oil into the bearing gap through orifices machined in the bearing sliding surface. The design and efficiency of some linear (PD, PI and PID) and non-linear controllers, applied...... to a tilting-pad journal bearing, are analysed and discussed. Important conclusions about the application of integral controllers, responsible for changing the rotor-bearing equilibrium position and consequently the "passive" oil film damping coefficients, are achieved. Numerical results show an effective...
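
The linear controllers mentioned (PD, PI, PID) share the standard discrete PID form. Below is a minimal sketch applied to a toy first-order plant rather than the rotor-bearing model of the paper; the gains and plant are illustrative assumptions only.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy first-order plant x' = -x + u, driven to a setpoint of 1.0
dt, x, setpoint = 0.01, 0.0, 1.0
controller = PID(kp=4.0, ki=2.0, kd=0.05, dt=dt)
for _ in range(2000):
    u = controller.update(setpoint - x)
    x += dt * (-x + u)   # forward-Euler integration of the plant
```

The integral term is what shifts the closed-loop equilibrium, which parallels the abstract's remark that integral action changes the rotor-bearing equilibrium position.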

  8. Discrete classification technique applied to TV advertisements liking recognition system based on low-cost EEG headsets.

    Science.gov (United States)

    Soria Morillo, Luis M; Alvarez-Garcia, Juan A; Gonzalez-Abril, Luis; Ortega Ramírez, Juan A

    2016-07-15

    In this paper a new approach is applied to the area of marketing research. The aim is to recognize how brain activity responds during the visualization of short video advertisements using discrete classification techniques. By means of low-cost electroencephalography (EEG) devices, the activation level of some brain regions has been studied while the ads are shown to users. One may ask how useful neuroscience knowledge is in marketing, what neuroscience could provide to the marketing sector, and why this approach can improve accuracy and final user acceptance compared to other works. Using discrete techniques over the EEG frequency bands of a generated dataset, C4.5, ANN and a new recognition system based on Ameva, a discretization algorithm, are applied to obtain the score given by subjects to each TV ad. The proposed technique reaches more than 75% accuracy, which is an excellent result considering the typology of EEG sensors used in this work. Furthermore, the time consumption of the proposed algorithm is reduced by up to 30% compared to the other techniques presented in this paper. This brings about a battery lifetime improvement on the devices where the algorithm runs, extending the experience in the ubiquitous context where the new approach has been tested.
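
The Ameva algorithm itself is not reproduced here; as an illustration of the general idea of turning continuous EEG band-power features into a small number of discrete states, the sketch below uses a simple equal-frequency discretizer (an assumption for illustration only; Ameva selects cut points from class-attribute contingency information rather than sample counts).

```python
def equal_frequency_bins(values, n_bins):
    """Cut points that split the sorted values into n_bins equal-count intervals."""
    s = sorted(values)
    return [s[len(s) * k // n_bins] for k in range(1, n_bins)]

def discretize(value, cuts):
    """Map a continuous value to a discrete interval index (0 .. n_bins-1)."""
    return sum(value >= c for c in cuts)

# Hypothetical band-power feature (e.g. an alpha-band amplitude) split into 3 states
feature = [0.1, 0.4, 0.2, 0.9, 0.5, 0.7, 0.3, 0.8, 0.6]
cuts = equal_frequency_bins(feature, 3)
states = [discretize(v, cuts) for v in feature]
```

The discrete states, one per frequency band, would then feed a classifier such as C4.5 in place of the raw continuous features.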

  9. Laser granulometry: A comparative study of the sieving and elutriation techniques applied to pozzolanic materials

    Directory of Open Access Journals (Sweden)

    Frías, M.

    1990-03-01

    Laser granulometry is a rapid method for determining particle size distributions in both dry and wet phases. In the present paper, the laser-beam diffraction technique is applied to the granulometric study of pozzolanic materials in suspension. These granulometric analyses are compared with those obtained with the Alpine pneumatic siever and the Bahco elutriator-centrifuge.

  10. Experimental Studies of Active and Passive Flow Control Techniques Applied in a Twin Air-Intake

    Directory of Open Access Journals (Sweden)

    Akshoy Ranjan Paul

    2013-01-01

    Flow control in twin air-intakes is necessary to improve the performance characteristics, since the flow traveling through curved and diffused paths becomes complex, especially after merging. The paper presents a comparison between two well-known techniques of flow control: active and passive. It presents an effective design of a vortex generator jet (VGJ) and a vane-type passive vortex generator (VG) and uses them in a twin air-intake duct in different combinations to establish their effectiveness in improving the performance characteristics. The VGJ is designed to inject flow from the side wall at pitch angles of 90 degrees and 45 degrees. Corotating (parallel) and counterrotating (V-shape) are the configurations of the vane-type VG. It is observed that the VGJ has the potential to change the flow pattern drastically as compared to the vane-type VG. When the VGJ is directed perpendicular to the side walls of the air-intake at a pitch angle of 90 degrees, static pressure recovery is increased by 7.8% and total pressure loss is reduced by 40.7%, which is the best among all cases tested for the VGJ. For the bigger-sized VG attached to the side walls of the air-intake, static pressure recovery is increased by 5.3%, but total pressure loss is reduced by only 4.5% as compared to all other cases of VG.
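
The performance metrics quoted above are conventionally expressed as coefficients normalized by the inlet dynamic pressure. A minimal sketch of those standard duct-performance definitions follows; the pressures below are illustrative numbers, not measurements from the paper.

```python
def static_pressure_recovery(ps_in, ps_out, q_in):
    """Cpr = (ps_out - ps_in) / q_in, with q_in the inlet dynamic pressure."""
    return (ps_out - ps_in) / q_in

def total_pressure_loss(p0_in, p0_out, q_in):
    """Loss coefficient = (p0_in - p0_out) / q_in."""
    return (p0_in - p0_out) / q_in

# Illustrative values in Pa (not data from the study)
q_in = 500.0
cpr = static_pressure_recovery(ps_in=101000.0, ps_out=101200.0, q_in=q_in)
loss = total_pressure_loss(p0_in=101600.0, p0_out=101550.0, q_in=q_in)
```

A flow-control device improves the duct when `cpr` rises and `loss` falls relative to the baseline configuration.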

  11. Image restoration techniques as applied to Landsat MSS and TM data

    Science.gov (United States)

    Meyer, David

    1987-01-01

    Two factors are primarily responsible for the loss of image sharpness in processing digital Landsat images. The first factor is inherent in the data because the sensor's optics and electronics, along with other sensor elements, blur and smear the data. Digital image restoration can be used to reduce this degradation. The second factor, which further degrades the image by blurring or aliasing, is the resampling performed during geometric correction. An image restoration procedure, when used in place of typical resampling techniques, reduces sensor degradation without introducing the artifacts associated with resampling. The EROS Data Center (EDC) has implemented the restoration procedure for Landsat multispectral scanner (MSS) and thematic mapper (TM) data. This capability, developed at the University of Arizona by Dr. Robert Schowengerdt and Lynette Wood, combines restoration and resampling in a single step to produce geometrically corrected MSS and TM imagery. As with resampling, restoration demands that a tradeoff be made between aliasing, which occurs when attempting to extract maximum sharpness from an image, and blurring, which reduces the aliasing problem but sacrifices image sharpness. The restoration procedure used at EDC minimizes these artifacts by being adaptive, tailoring the tradeoff to be optimal for individual images.

  12. New technology of dry benefication of fly ash from coal power plants using applied mineralogy techniques

    Directory of Open Access Journals (Sweden)

    В. А. Арсентьев

    2016-08-01

    The environmental and strategic need to process the dumps and slagheaps of coal mining enterprises in Russia and other countries motivates a review of the potential of fly ash as a technogenic mineral resource. Comprehensive studies of the substance composition of fly ash from coal power plants make it possible to define rational ways of utilizing this mineral resource and to substantiate a scheme for its secondary processing. In view of the numerous environmental problems stemming from wet beneficiation and processing of this resource, a technology of dry cleaning of fly ash from thermal coal power plants is suggested. Studies were carried out on a number of fly ash samples from various power plants. The suggested criteria are used to discriminate the compounds of fly ash, and the quantitative and qualitative composition of particulate matter is assessed. Studies of the substance composition of the fly ash samples demonstrated that the concentration of non-combusted carbon varies from 5 to 20%. The principal cleaning procedure in these studies was a combination of magnetic and electric separation of ash in a state of vibrational pseudo-liquefaction, which significantly increases the throughput capacity and selectivity of the cleaning process. The result is a stable mineral fraction containing 0.5-2.5% carbon, which can be used as a construction binding agent.

  13. Applying Lean Techniques to Reduce Intravenous Waste Through Premixed Solutions and Increasing Production Frequency.

    Science.gov (United States)

    Lin, Alex C; Penm, Jonathan; Ivey, Marianne F; Deng, Yihong; Commins, Monica

    This study aims to use lean techniques to evaluate the impact of increasing the use of premixed IV solutions and increasing IV production frequency on IV waste. The study was conducted at a tertiary hospital pharmacy department in three phases. Phase I evaluated IV waste when IV production occurred three times a day and eight premixed IV products were used. Phase II increased the number of premixed IV products to 16. Phase III then increased IV production to five times a day. During Phase I, an estimated 2,673 IV doses were wasted monthly, accounting for 6.14% of overall IV doses. This corresponded to 688 L at a cost of $60,135. During Phase II, the average monthly IV wastage was reduced significantly to 1,069 doses (2.84%), accounting for 447 L and $34,003. During Phase III, the average monthly IV wastage was further decreased to 675 doses (1.69%), accounting for 78 L and $3,431. Hence, a potential annual saving of $449,208 could result from these changes. IV waste was reduced through the increased use of premixed solutions and increased IV production frequency.

  14. A Morphing Technique Applied to Lung Motions in Radiotherapy: Preliminary Results

    Directory of Open Access Journals (Sweden)

    R. Laurent

    2010-01-01

    Organ motion leads to dosimetric uncertainties during a patient's treatment. Much work has been done to quantify the dosimetric effects of lung movement during radiation treatment, and there is a particular need for a good description and prediction of organ motion. To describe lung motion more precisely, we have examined the possibility of using a computer technique: a morphing algorithm. Morphing is an iterative method which consists of blending one image into another image. To evaluate the use of morphing, a Four-Dimensional Computed Tomography (4DCT) acquisition of a patient was performed. The lungs were automatically segmented for the different phases, and morphing was performed using the end-inspiration and end-expiration phase scans only. Intermediate morphing files were compared with the intermediate 4DCT images. The results showed good agreement between morphing images and 4DCT images: fewer than 2% of the 512 by 256 voxels were wrongly classified as belonging/not belonging to a lung section. This paper presents preliminary results, and our morphing algorithm needs improvement. We can infer that morphing offers considerable advantages in terms of radiation protection of the patient during the diagnosis phase, handling of artifacts, definition of organ contours and description of organ motion.
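
Morphing blends one segmented phase image into another. Its simplest ingredient is a linear cross-dissolve between two images; a full morphing algorithm also interpolates a geometric warp between corresponding points, which is omitted in this sketch. The arrays below are toy lung masks, not patient data.

```python
import numpy as np

def cross_dissolve(img_a, img_b, t):
    """Blend image A into image B at fraction t in [0, 1].

    A plain intensity blend; a full morphing algorithm would also
    interpolate a geometric warp between corresponding points.
    """
    return (1.0 - t) * img_a + t * img_b

# Toy end-inspiration / end-expiration "lung masks"
insp = np.zeros((4, 4)); insp[1:3, 1:3] = 1.0
expi = np.zeros((4, 4)); expi[0:2, 0:2] = 1.0
mid = cross_dissolve(insp, expi, 0.5)   # a synthetic intermediate phase
```

Intermediate frames generated this way play the role of the "intermediate morphing files" that the study compares against the measured intermediate 4DCT phases.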

  15. Dosimetric properties of bio minerals applied to high-dose dosimetry using the TSEE technique

    International Nuclear Information System (INIS)

    Vila, G. B.; Caldas, L. V. E.

    2014-08-01

    The study of dosimetric properties such as reproducibility, residual signal, lower detection dose, dose-response curve and fading of the thermally stimulated exo-electron emission (TSEE) signal of Brazilian bio minerals has shown that these materials have potential for use as radiation dosimeters. The reproducibility within ±10% for oyster shell, mother-of-pearl and coral reef samples showed that the signal dispersion is small compared with the mean value of the measurements. The study showed that the residual signal can be eliminated with a thermal treatment at 300 deg C for 1 h. The lower detection dose of 9.8 Gy determined for the oyster shell samples when exposed to beta radiation, and of 1.6 Gy for oyster shell and mother-of-pearl samples when exposed to gamma radiation, can be considered good, taking into account the high doses of this study. The materials presented linearity in the dose-response curves over some ranges; the lack of linearity in other cases presents no problem, since a good mathematical description is possible. The fading study showed that the loss of TSEE signal can be minimized if the samples are protected from interferences such as light, heat and humidity. Taking into account the useful linearity range as the main dosimetric characteristic, the tiger shell and oyster shell samples are the most suitable for high-dose dosimetry using the TSEE technique. (Author)

  16. Dosimetric properties of bio minerals applied to high-dose dosimetry using the TSEE technique

    Energy Technology Data Exchange (ETDEWEB)

    Vila, G. B.; Caldas, L. V. E., E-mail: gbvila@ipen.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

    The study of dosimetric properties such as reproducibility, residual signal, lower detection dose, dose-response curve and fading of the thermally stimulated exo-electron emission (TSEE) signal of Brazilian bio minerals has shown that these materials have potential for use as radiation dosimeters. The reproducibility within ±10% for oyster shell, mother-of-pearl and coral reef samples showed that the signal dispersion is small compared with the mean value of the measurements. The study showed that the residual signal can be eliminated with a thermal treatment at 300 deg C for 1 h. The lower detection dose of 9.8 Gy determined for the oyster shell samples when exposed to beta radiation, and of 1.6 Gy for oyster shell and mother-of-pearl samples when exposed to gamma radiation, can be considered good, taking into account the high doses of this study. The materials presented linearity in the dose-response curves over some ranges; the lack of linearity in other cases presents no problem, since a good mathematical description is possible. The fading study showed that the loss of TSEE signal can be minimized if the samples are protected from interferences such as light, heat and humidity. Taking into account the useful linearity range as the main dosimetric characteristic, the tiger shell and oyster shell samples are the most suitable for high-dose dosimetry using the TSEE technique. (Author)

  17. Applied Focused Ion Beam Techniques for Sample Preparation of Astromaterials for Integrated Nano-Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Graham, G A; Teslich, N E; Kearsley, A T; Stadermann, F J; Stroud, R M; Dai, Z R; Ishii, H A; Hutcheon, I D; Bajt, S; Snead, C J; Weber, P K; Bradley, J P

    2007-02-20

    Sample preparation is always a critical step in the laboratory study of micrometer-sized astromaterials, whether their subsequent analysis is by electron microscopy or secondary ion mass spectrometry. A focused beam of gallium ions has been used to prepare electron-transparent sections from an interplanetary dust particle, as part of an integrated analysis protocol to maximize the mineralogical, elemental, isotopic and spectroscopic information extracted from one individual particle. In addition, focused ion beam techniques have been employed to extract cometary residue preserved on the rims and walls of micro-craters in 1100 series aluminum foils that were wrapped around the sample tray assembly on the Stardust cometary sample collector. Non-ideal surface geometries and inconveniently located regions of interest required creative solutions, including support pillar construction and relocation of a significant portion of a sample to access a region of interest. Serial sectioning, in a manner similar to ultramicrotomy, is a significant development and further demonstrates the unique capabilities of focused ion beam microscopy for sample preparation of astromaterials.

  18. From birds to bees: applying video observation techniques to invertebrate pollinators

    Directory of Open Access Journals (Sweden)

    C J Lortie

    2012-01-01

    Observation is a critical element of behavioural ecology and ethology. Here, we propose a similar set of techniques to enhance the study of the diversity patterns of invertebrate pollinators and associated plant species. In a body of avian research, cameras are set up on nests in blinds to examine chick and parent interactions. This avoids observer bias, minimizes interference, and provides numerous other benefits including timestamps, the capacity to record the frequency and duration of activities, and a permanent archive of activity for later analyses. Hence, we propose that small video cameras in blinds can also be used to continuously monitor pollinator activity on plants, thereby capitalizing on those same benefits. This method was tested in 2010 in the alpine in BC, Canada, on target focal plant species and on open mixed assemblages of plant species. Apple iPod nanos successfully recorded activity for an entire day at a time, totalling 450 hours, and provided sufficient resolution and field of view both to identify pollinators to recognizable taxonomic units and to monitor movement and visitation rates at a scale of view of approximately 50 cm2. This method is not a replacement for pan traps or sweep nets but an opportunity to enhance these datasets with more detailed, finer-resolution data. Importantly, the test of this specific method also indicates that far more hours of observation - using any method - are likely required than in most currently published ecological studies to accurately estimate pollinator diversity.

  19. Techniques for applying subatmospheric pressure dressing to wounds in difficult regions of anatomy.

    Science.gov (United States)

    Greer, S E; Duthie, E; Cartolano, B; Koehler, K M; Maydick-Youngberg, D; Longaker, M T

    1999-09-01

    Subatmospheric pressure dressing (SPD) has been commercially available in the United States since 1995 as the vacuum-assisted closure (VAC) device. SPD increases local blood flow, decreases edema and bacterial count, and promotes the formation of granulation tissue. Despite recent clinical successes with the use of SPD in a variety of wound types, problems may occur with application of the VAC system in certain areas of the body. The main limitation occurs when attempting to maintain an airtight seal over irregular surfaces surrounding a wound. For example, application of the adhesive drape and creation of a seal are particularly difficult at the hip and perineum. In addition, wounds of the lower extremity can occur in multiple sites, posing the problem of providing a vacuum dressing to more than one wound from a single suction pump. To address these challenging clinical wounds, we have developed techniques to allow the successful application of SPD to sacral pressure ulcers near the anus, and to multiple large lower extremity ulcers.

  20. Digital Image Correlation Techniques Applied to Large Scale Rocket Engine Testing

    Science.gov (United States)

    Gradl, Paul R.

    2016-01-01

    Rocket engine hot-fire ground testing is necessary to understand component performance, reliability and engine system interactions during development. The J-2X upper stage engine completed a series of developmental hot-fire tests that derived performance of the engine and components, validated analytical models and provided the necessary data to identify where design changes, process improvements and technology development were needed. The J-2X development engines were heavily instrumented to provide the data necessary to support these activities, which enabled the team to investigate any anomalies experienced during the test program. This paper describes the development of an optical digital image correlation technique to augment the data provided by traditional strain gauges, which are prone to debonding at elevated temperatures and limited to localized measurements. The feasibility of this optical measurement system was demonstrated during full-scale hot-fire testing of J-2X, during which a digital image correlation system, incorporating a pair of high-speed cameras to measure three-dimensional, real-time displacements and strains, was installed and operated under the extreme environments present on the test stand. The camera and facility setup, pre-test calibrations, data collection, hot-fire test data collection, and post-test analysis and results are presented in this paper.

  1. Applying spectral data analysis techniques to aquifer monitoring data in Belvoir Ranch, Wyoming

    Science.gov (United States)

    Gao, F.; He, S.; Zhang, Y.

    2017-12-01

    This study uses spectral data analysis techniques to estimate hydraulic parameters from water-level fluctuations caused by tidal and barometric effects. All water-level data used in this study were collected in Belvoir Ranch, Wyoming. The tide effect is observed not only in coastal areas but also in inland confined aquifers: the changing positions of the sun and moon deform not only the ocean but also the solid earth. The tide effect acts as an oscillatory pumping or injection sequence on the aquifer and can be observed with sufficiently dense water-level monitoring. The Belvoir Ranch data are collected once per hour, which is dense enough to capture the tide effect. First, the de-trended data are transformed from the time domain to the frequency domain with the Fourier transform. The storage coefficient is then estimated using the Bredehoeft-Jacob model. Next, the gain function, which expresses the amplification and attenuation of the output signal, is analyzed to derive the barometric efficiency, and the effective porosity is found from the storage coefficient and barometric efficiency with Jacob's model. Finally, aquifer transmissivity and hydraulic conductivity are estimated using Paul Hsieh's method. The estimated hydraulic parameters are compared with those from traditional pumping-test estimation. This study shows that hydraulic parameters can be estimated by analyzing water-level data in the frequency domain alone. The approach is low-cost and environmentally friendly, and should therefore be considered for future hydraulic parameter estimation.
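
The first step described, transforming de-trended water-level data to the frequency domain to isolate the tidal signal, can be sketched with NumPy. The hourly series below is synthetic, with a semidiurnal component (about 2 cycles/day) injected; it is not the Belvoir Ranch record.

```python
import numpy as np

def dominant_frequency(levels, dt_hours):
    """Return the dominant frequency (cycles/day) of a de-trended series."""
    x = np.asarray(levels) - np.mean(levels)            # remove the mean (crude de-trend)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt_hours / 24.0)  # cycles per day
    return freqs[np.argmax(spec[1:]) + 1]               # skip the zero-frequency bin

# Synthetic hourly record: semidiurnal tide (~2 cycles/day) plus noise
rng = np.random.default_rng(1)
t = np.arange(0, 30 * 24)                               # 30 days of hourly samples
levels = 0.05 * np.sin(2 * np.pi * 2.0 * t / 24.0) + 0.01 * rng.normal(size=t.size)
f_peak = dominant_frequency(levels, dt_hours=1.0)
```

The amplitude and phase of the peak at the known tidal frequency are the quantities that models such as Bredehoeft-Jacob then relate to the storage coefficient.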

  2. Interdigital filter design

    CSIR Research Space (South Africa)

    Du Plessis, WP

    2009-10-01

    Full Text Available A new synthesis procedure for interdigital filters with shorted-pin feeds is developed by relating the coupling factors and external Qs to the physical structure of the filter. This new procedure is easily understood and applied, extremely flexible...

  3. Linear and Non-Linear Control Techniques Applied to Actively Lubricated Journal Bearings

    DEFF Research Database (Denmark)

    Nicoletti, Rodrigo; Santos, Ilmar

    2003-01-01

    The main objectives of actively lubricated bearings are the simultaneous reduction of wear and vibration between rotating and stationary machinery parts. For reducing wear and dissipating vibration energy until certain limits, one can count with the conventional hydrodynamic lubrication..... For further reduction of shaft vibrations one can count with the active lubrication action, which is based on injecting pressurised oil into the bearing gap through orifices machined in the bearing sliding surface. The design and efficiency of some linear (PD, PI and PID) and non-linear controllers, applied... vibration reduction of unbalance response of a rigid rotor, where the PD and the non-linear P controllers show better performance for the frequency range of study (0 to 80 Hz). The feasibility of eliminating rotor-bearing instabilities (phenomena of whirl) by using active lubrication is also investigated...

  4. Detecting Proxima b’s Atmosphere with JWST Targeting CO{sub 2} at 15 μ m Using a High-pass Spectral Filtering Technique

    Energy Technology Data Exchange (ETDEWEB)

    Snellen, I. A. G.; Van Dishoeck, E. F.; Brandl, B. R.; Van Eylen, V. [Leiden Observatory, Leiden University, Postbus 9513, 2300 RA Leiden (Netherlands); Désert, J.-M.; Waters, L. B. F. M.; Dominik, C.; Birkby, J. L. [Anton Pannekoek Institute for Astronomy, University of Amsterdam, P.O. Box 94249, 1090 GE Amsterdam (Netherlands); Robinson, T. [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Meadows, V. [Astronomy Department, University of Washington (United States); Henning, T.; Bouwman, J. [Max-Planck-Institute for Astronomy, Koenigstuhl 17, D-69117 Heidelberg (Germany); Lahuis, F.; Min, M. [SRON Netherlands Institute for Space Research, Sorbonnelaan 2, 3584 CA Utrecht (Netherlands); Lovis, C. [Observatoire de Genève, Université de Genève, 51 chemin des Maillettes, 1290 Versoix (Switzerland); Sing, D. [School of Physics, University of Exeter, Exeter (United Kingdom); Anglada-Escudé, G. [School of Physics and Astronomy, Queen Mary University of London, 327 Mile End Road, London E1 4NS (United Kingdom); Brogi, M., E-mail: snellen@strw.leidenuniv.nl [Center for Astrophysics and Space Astronomy, University of Colorado at Boulder, Boulder, CO 80309 (United States)

    2017-08-01

    Exoplanet Proxima b will be an important laboratory for the search for extraterrestrial life for the decades ahead. Here, we discuss the prospects of detecting carbon dioxide at 15 μ m using a spectral filtering technique with the Medium Resolution Spectrograph (MRS) mode of the Mid-Infrared Instrument (MIRI) on the James Webb Space Telescope ( JWST ). At superior conjunction, the planet is expected to show a contrast of up to 100 ppm with respect to the star. At a spectral resolving power of R  = 1790–2640, about 100 spectral CO{sub 2} features are visible within the 13.2–15.8 μ m (3B) band, which can be combined to boost the planet atmospheric signal by a factor of 3–4, depending on the atmospheric temperature structure and CO{sub 2} abundance. If atmospheric conditions are favorable (assuming an Earth-like atmosphere), with this new application to the cross-correlation technique, carbon dioxide can be detected within a few days of JWST observations. However, this can only be achieved if both the instrumental spectral response and the stellar spectrum can be determined to a relative precision of ≤1 × 10{sup −4} between adjacent spectral channels. Absolute flux calibration is not required, and the method is insensitive to the strong broadband variability of the host star. Precise calibration of the spectral features of the host star may only be attainable by obtaining deep observations of the system during inferior conjunction that serve as a reference. The high-pass filter spectroscopic technique with the MIRI MRS can be tested on warm Jupiters, Neptunes, and super-Earths with significantly higher planet/star contrast ratios than the Proxima system.
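
The two key steps described, high-pass filtering each spectrum to remove the broadband continuum and then cross-correlating with a template so the roughly 100 weak CO2 features co-add, can be illustrated on synthetic data. The line positions, continuum shape, feature depths and noise level below are invented for the sketch and are not the paper's simulated MIRI data.

```python
import numpy as np

def high_pass(spectrum, window=51):
    """Subtract a moving-average continuum, keeping only narrow spectral features."""
    kernel = np.ones(window) / window
    continuum = np.convolve(spectrum, kernel, mode="same")
    return spectrum - continuum

# Synthetic spectrum: smooth continuum + ~100 weak absorption lines (the template)
rng = np.random.default_rng(2)
n = 2000
template = np.zeros(n)
template[rng.choice(n - 100, size=100, replace=False) + 50] = -1.0   # line mask
continuum = 1.0 + 0.3 * np.sin(np.linspace(0, 4 * np.pi, n))         # broadband variation
observed = continuum + 0.002 * template + 0.0005 * rng.normal(size=n)

filtered = high_pass(observed)
ccf = np.correlate(filtered, template, mode="same")   # co-adds all weak lines
peak_shift = np.argmax(ccf) - n // 2                  # lag of the correlation peak
```

Each line is far below the per-channel noise here, yet the cross-correlation peak stands out because the ~100 features add coherently at the correct lag, which is the boost factor the abstract refers to.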

  5. Detecting Proxima b’s Atmosphere with JWST Targeting CO2 at 15 μm Using a High-pass Spectral Filtering Technique

    Science.gov (United States)

    Snellen, I. A. G.; Désert, J.-M.; Waters, L. B. F. M.; Robinson, T.; Meadows, V.; van Dishoeck, E. F.; Brandl, B. R.; Henning, T.; Bouwman, J.; Lahuis, F.; Min, M.; Lovis, C.; Dominik, C.; Van Eylen, V.; Sing, D.; Anglada-Escudé, G.; Birkby, J. L.; Brogi, M.

    2017-08-01

    Exoplanet Proxima b will be an important laboratory for the search for extraterrestrial life for the decades ahead. Here, we discuss the prospects of detecting carbon dioxide at 15 μm using a spectral filtering technique with the Medium Resolution Spectrograph (MRS) mode of the Mid-Infrared Instrument (MIRI) on the James Webb Space Telescope (JWST). At superior conjunction, the planet is expected to show a contrast of up to 100 ppm with respect to the star. At a spectral resolving power of R = 1790-2640, about 100 spectral CO2 features are visible within the 13.2-15.8 μm (3B) band, which can be combined to boost the planet atmospheric signal by a factor of 3-4, depending on the atmospheric temperature structure and CO2 abundance. If atmospheric conditions are favorable (assuming an Earth-like atmosphere), with this new application to the cross-correlation technique, carbon dioxide can be detected within a few days of JWST observations. However, this can only be achieved if both the instrumental spectral response and the stellar spectrum can be determined to a relative precision of ≤1 × 10-4 between adjacent spectral channels. Absolute flux calibration is not required, and the method is insensitive to the strong broadband variability of the host star. Precise calibration of the spectral features of the host star may only be attainable by obtaining deep observations of the system during inferior conjunction that serve as a reference. The high-pass filter spectroscopic technique with the MIRI MRS can be tested on warm Jupiters, Neptunes, and super-Earths with significantly higher planet/star contrast ratios than the Proxima system.
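
    The line-combining idea can be sketched numerically: high-pass filter the spectrum to remove the broadband stellar and instrumental shape, then cross-correlate with a CO2 line template so the ~100 weak features co-add. The snippet below is an illustrative sketch only; the running-mean high-pass filter and all names are our simplifications, not the paper's pipeline.

```python
import numpy as np

def highpass(spectrum, width=25):
    """Remove the broadband (stellar/instrumental) continuum by
    subtracting a running mean, a simple stand-in for the paper's
    high-pass spectral filtering step."""
    kernel = np.ones(width) / width
    continuum = np.convolve(spectrum, kernel, mode="same")
    return spectrum - continuum

def ccf_peak(spectrum, template):
    """Cross-correlate the filtered spectrum with a line template;
    co-adding many weak features this way boosts the planet signal."""
    s = highpass(spectrum)
    t = highpass(template)
    return float(np.dot(s, t) / np.sqrt(np.dot(t, t)))
```

    With ~100 equal-strength lines, the co-added signal grows roughly as the square root of the number of features, which is why individually undetectable lines become measurable in aggregate.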

  6. A 4D Filtering and Calibration Technique for Small-Scale Point Cloud Change Detection with a Terrestrial Laser Scanner

    Directory of Open Access Journals (Sweden)

    Ryan A. Kromer

    2015-10-01

    Full Text Available This study presents a point cloud de-noising and calibration approach that takes advantage of point redundancy in both space and time (4D). The purpose is to detect displacements using terrestrial laser scanner data at the sub-mm scale or smaller, similar to radar systems, for the study of very small natural changes, e.g., pre-failure deformation in rock slopes, small-scale failures or talus flux. The algorithm calculates distances using a multi-scale normal distance approach and uses a set of calibration point clouds to remove systematic errors. The median is used to filter distance values over a neighbourhood in space and time to reduce random errors. The space and time neighbourhoods must be tuned to the signal being studied, in order to avoid smoothing in either the spatial or the temporal domain. This is demonstrated by applying the algorithm to synthetic and experimental case examples. Optimum combinations of space and time neighbours in practical applications can improve the level of detection for change by one to two orders of magnitude, which will greatly improve our ability to detect small changes in many disciplines, such as rock slope pre-failure deformation, deformation in civil infrastructure and small-scale geomorphological change.
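
    The core space-time median step can be sketched in a few lines. The snippet assumes the per-epoch change distances have already been gridded into maps of shape (T, H, W); function and parameter names are illustrative, not the authors' implementation, which additionally uses multi-scale normal distances and calibration clouds.

```python
import numpy as np

def median_filter_4d(dist_stack, spatial_radius=1, temporal_radius=1):
    """Median-filter a time series of gridded change-detection distances.

    dist_stack: array of shape (T, H, W), one distance map per epoch.
    Each output value is the median over a space-time neighbourhood,
    which suppresses zero-mean random ranging noise while preserving
    slow deformation signals.
    """
    T, H, W = dist_stack.shape
    out = np.empty_like(dist_stack, dtype=float)
    for t in range(T):
        t0, t1 = max(0, t - temporal_radius), min(T, t + temporal_radius + 1)
        for i in range(H):
            i0, i1 = max(0, i - spatial_radius), min(H, i + spatial_radius + 1)
            for j in range(W):
                j0, j1 = max(0, j - spatial_radius), min(W, j + spatial_radius + 1)
                out[t, i, j] = np.median(dist_stack[t0:t1, i0:i1, j0:j1])
    return out
```

    Enlarging the radii trades temporal or spatial resolution for a lower noise floor, which is exactly the tuning trade-off the abstract describes.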

  7. Anticorrosive coating of SixOyCz on metallic substrates applied with the plasma CVD technique

    International Nuclear Information System (INIS)

    Perillo, P; Lasorsa, C; Versaci, R

    2006-01-01

    This work deals with the production of anticorrosive coatings of SixOyCz on metallic substrates by PECVD (Plasma Enhanced Chemical Vapor Deposition) in a two-layer coating, with a gaseous mixture using methyltrimethoxysilane (Z6070) with the contribution of O2 and methane as reactive gases. The process involves two steps, the first with the substrate thermalized to 500 °C and the second with the substrate at room temperature. In the first step the process is carried out with the mixture of O2 and Z6070; in the second step methane is added to the mixture of the plasma-forming gases. The coatings were deposited on AISI 410 stainless steel, AISI M2 steel, titanium and AA6061 aluminum substrates. This work presents the preliminary results of the electrochemical evaluation and the mechanical properties of the coating. Fourier transform infrared spectroscopy (FTIR), X-ray photoelectron spectroscopy (XPS/ESCA) and scanning electron microscopy were used for this study. Electrochemical techniques were used to study the corrosion response of the coatings. Potentiodynamic polarization curves were obtained in a solution of 5% H2SO4 and in 0.1 M NaCl. The tests were undertaken at room temperature. This process is presented as an alternative to the conventional immersion processes of the sol-gel method, which produces the polymerization of the reagent as a result of the effect of oxygen from the environment, whereas the plasma process produces very different chemical reactions in the center of the plasma itself, with correspondingly different coatings. (CW)

  8. Applying industrial process improvement techniques to increase efficiency in a surgical practice.

    Science.gov (United States)

    Reznick, David; Niazov, Lora; Holizna, Eric; Siperstein, Allan

    2014-10-01

    The goal of this study was to examine how industrial process improvement techniques could help streamline the preoperative workup. Lean process improvement was used to streamline patient workup at an endocrine surgery service at a tertiary medical center utilizing multidisciplinary collaboration. The program consisted of several major changes in how patients are processed in the department. The goal was to shorten the wait time between initial call and consult visit and between consult and surgery. We enrolled 1,438 patients in the program. The wait time from the initial call until consult was reduced from 18.3 ± 0.7 to 15.4 ± 0.9 days. Wait time from consult until operation was reduced from 39.9 ± 1.5 to 33.9 ± 1.3 days for the overall practice and to 15.0 ± 4.8 days for low-risk patients. Patient cancellations were reduced from 27.9 ± 2.4% to 17.3 ± 2.5%. Overall patient flow increased from 30.9 ± 5.1 to 52.4 ± 5.8 consults per month (all P values statistically significant). With process improvement methodology, surgery patients can benefit from an improved, streamlined process with significant reduction in wait time from call to initial consult and initial consult to surgery, with reduced cancellations. This generalized process has resulted in increased practice throughput and efficiency and is applicable to any surgery practice. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Applying Geospatial Techniques to Investigate Boundary Layer Land-Atmosphere Interactions Involved in Tornadogensis

    Science.gov (United States)

    Weigel, A. M.; Griffin, R.; Knupp, K. R.; Molthan, A.; Coleman, T.

    2017-12-01

    Northern Alabama is among the most tornado-prone regions in the United States. This region has a higher degree of spatial variability in both terrain and land cover than the more frequently studied North American Great Plains region due to its proximity to the southern Appalachian Mountains and Cumberland Plateau. More research is needed to understand North Alabama's high tornado frequency and how land surface heterogeneity influences tornadogenesis in the boundary layer. Several modeling and simulation studies stretching back to the 1970s have found that variations in the land surface induce tornadic-like flow near the surface, illustrating a need for further investigation. This presentation introduces research investigating the hypothesis that horizontal gradients in land surface roughness, normal to the direction of flow in the boundary layer, induce vertically oriented vorticity at the surface that can potentially aid in tornadogenesis. A novel approach was implemented to test this hypothesis using a GIS-based quadrant pattern analysis method. This method was developed to quantify spatial relationships and patterns between horizontal variations in land surface roughness and locations of tornadogenesis. Land surface roughness was modeled using the Noah land surface model parameterization scheme, which was applied to MODIS 500 m and Landsat 30 m data in order to compare the relationship between tornadogenesis locations and roughness gradients at different spatial scales. This analysis found a statistical relationship between areas of higher roughness located normal to flow surrounding tornadogenesis locations that supports the tested hypothesis. In this presentation, the innovative use of satellite remote sensing data and GIS technologies to address interactions between the land and atmosphere will be highlighted.

  10. Applying a novel combination of techniques to develop a predictive model for diabetes complications.

    Science.gov (United States)

    Sangi, Mohsen; Win, Khin Than; Shirvani, Farid; Namazi-Rad, Mohammad-Reza; Shukla, Nagesh

    2015-01-01

    Among the many related issues of diabetes management, its complications constitute the main part of the heavy burden of this disease. The aim of this paper is to develop a risk advisor model to predict the chances of diabetes complications according to the changes in risk factors. As the starting point, an inclusive list of (k) diabetes complications and (n) their correlated predisposing factors are derived from the existing endocrinology text books. A type of data meta-analysis has been done to extract and combine the numeric value of the relationships between these two. The whole n (risk factors) - k (complications) model was broken down into k different (n-1) relationships and these (n-1) dependencies were broken into n (1-1) models. Applying regression analysis (seven patterns) and artificial neural networks (ANN), we created models to show the (1-1) correspondence between factors and complications. Then all 1-1 models related to an individual complication were integrated using the naïve Bayes theorem. Finally, a Bayesian belief network was developed to show the influence of all risk factors and complications on each other. We assessed the predictive power of the 1-1 models by R2, F-ratio and adjusted R2 equations; sensitivity, specificity and positive predictive value were calculated to evaluate the final model using real patient data. The results suggest that the best fitted regression models outperform the predictive ability of an ANN model, as well as six other regression patterns for all 1-1 models.
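
    The integration step described, fusing the per-factor 1-1 model outputs with the naïve Bayes theorem, can be sketched as a generic odds-form combination under the independence assumption. Function and variable names are ours, not the paper's.

```python
def combine_naive_bayes(prior, posteriors):
    """Fuse per-factor posteriors P(C | f_i) into one joint posterior
    P(C | f_1, ..., f_n) under the naive Bayes independence assumption.

    prior      : baseline probability P(C) of the complication.
    posteriors : outputs of the individual 1-1 models, one per risk factor.
    """
    num = prior          # proportional to P(C) * prod_i P(f_i | C)
    den = 1.0 - prior    # proportional to P(not C) * prod_i P(f_i | not C)
    for p in posteriors:
        num *= p / prior
        den *= (1.0 - p) / (1.0 - prior)
    return num / (num + den)
```

    A posterior equal to the prior contributes no evidence, while posteriors above the prior push the combined risk up multiplicatively, which is what lets many weak factor-level models be merged into one risk estimate.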

  11. Applying a novel combination of techniques to develop a predictive model for diabetes complications.

    Directory of Open Access Journals (Sweden)

    Mohsen Sangi

    Full Text Available Among the many related issues of diabetes management, its complications constitute the main part of the heavy burden of this disease. The aim of this paper is to develop a risk advisor model to predict the chances of diabetes complications according to the changes in risk factors. As the starting point, an inclusive list of (k) diabetes complications and (n) their correlated predisposing factors are derived from the existing endocrinology text books. A type of data meta-analysis has been done to extract and combine the numeric value of the relationships between these two. The whole n (risk factors) - k (complications) model was broken down into k different (n-1) relationships and these (n-1) dependencies were broken into n (1-1) models. Applying regression analysis (seven patterns) and artificial neural networks (ANN), we created models to show the (1-1) correspondence between factors and complications. Then all 1-1 models related to an individual complication were integrated using the naïve Bayes theorem. Finally, a Bayesian belief network was developed to show the influence of all risk factors and complications on each other. We assessed the predictive power of the 1-1 models by R2, F-ratio and adjusted R2 equations; sensitivity, specificity and positive predictive value were calculated to evaluate the final model using real patient data. The results suggest that the best fitted regression models outperform the predictive ability of an ANN model, as well as six other regression patterns for all 1-1 models.

  12. Filter arrays

    Energy Technology Data Exchange (ETDEWEB)

    Page, Ralph H.; Doty, Patrick F.

    2017-08-01

    The various technologies presented herein relate to a tiled filter array that can be used in connection with performance of spatial sampling of optical signals. The filter array comprises filter tiles, wherein a first plurality of filter tiles are formed from a first material, the first material being configured such that only photons having wavelengths in a first wavelength band pass therethrough. A second plurality of filter tiles is formed from a second material, the second material being configured such that only photons having wavelengths in a second wavelength band pass therethrough. The first plurality of filter tiles and the second plurality of filter tiles can be interspersed to form the filter array comprising an alternating arrangement of first filter tiles and second filter tiles.
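
    A minimal sketch of the interspersed arrangement described, with two pluralities of tiles alternating in a checkerboard. The band labels and the specific checkerboard layout are illustrative assumptions; the disclosure covers more general tilings.

```python
import numpy as np

def build_filter_array(rows, cols, band_a=1, band_b=2):
    """Intersperse two pluralities of filter tiles: tiles passing
    wavelength band A (label band_a) alternate with tiles passing
    band B (label band_b) across the array."""
    tiles = np.empty((rows, cols), dtype=int)
    for i in range(rows):
        for j in range(cols):
            tiles[i, j] = band_a if (i + j) % 2 == 0 else band_b
    return tiles
```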

  13. APPLIED PHYTO-REMEDIATION TECHNIQUES USING HALOPHYTES FOR OIL AND BRINE SPILL SCARS

    Energy Technology Data Exchange (ETDEWEB)

    M.L. Korphage; Bruce G. Langhus; Scott Campbell

    2003-03-01

    Produced salt water from historical oil and gas production was often managed with inadequate care and unfortunate consequences. In Kansas, the production practices in the 1930s and 1940s--before statewide anti-pollution laws--were such that fluids were often produced to surface impoundments where the oil would segregate from the salt water. The oil was pumped off the pits and the salt water was able to infiltrate into the subsurface soil zones and underlying bedrock. Over the years, oil producing practices were changed so that segregation of fluids was accomplished in steel tanks and salt water was isolated from the natural environment. But before that could happen, significant areas of the state were scarred by salt water. These areas are now in need of economical remediation. Remediation of salt scarred land can be facilitated with soil amendments, land management, and selection of appropriate salt tolerant plants. Current research on the salt scars around the old Leon Waterflood, in Butler County, Kansas shows the relative efficiency of remediation options. Based upon these research findings, it is possible to recommend cost efficient remediation techniques for slight, medium, and heavy salt water damaged soil. Slight salt damage includes soils with Electrical Conductivity (EC) values of 4.0 mS/cm or less. Operators can treat these soils with sufficient amounts of gypsum, install irrigation systems, and till the soil. Appropriate plants can be introduced via transplants or seeded. Medium salt damage includes soils with EC values between 4.0 and 16 mS/cm. Operators will add amendments of gypsum, till the soil, and arrange for irrigation. Some particularly salt tolerant plants can be added but most planting ought to be reserved until the second season of remediation. Severe salt damage includes soil with EC values in excess of 16 mS/cm. Operators will add at least part of the gypsum required, till the soil, and arrange for irrigation. The following
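
    The EC-based damage tiers can be condensed into a small decision function. This is a paraphrase of the abstract's recommendations (thresholds in mS/cm as given there), not the project's actual protocol, and the names are our own.

```python
def classify_salt_damage(ec_ms_per_cm):
    """Map soil electrical conductivity (EC, mS/cm) to the damage class
    and treatment tier described in the study."""
    if ec_ms_per_cm <= 4.0:
        return ("slight",
                "gypsum amendment, irrigation, tillage; plant immediately")
    elif ec_ms_per_cm <= 16.0:
        return ("medium",
                "gypsum, tillage, irrigation; defer most planting to season two")
    else:
        return ("severe",
                "partial gypsum, tillage, irrigation; staged remediation")
```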

  14. Improving building energy modelling by applying advanced 3D surveying techniques on agri-food facilities

    Directory of Open Access Journals (Sweden)

    Francesco Barreca

    2017-09-01

    Thanks to advanced surveying techniques, such as a terrestrial laser scanner and an infrared camera, it is possible to create a three-dimensional parametric model, while, thanks to the heat flow meter measurement method, it is also possible to obtain a thermophysical model. This model allows assessing the energy performance of agri-food buildings in order to improve the indoor microclimate control and the conditions of food processing and conservation.

  15. Lipase immobilized by different techniques on various support materials applied in oil hydrolysis

    Directory of Open Access Journals (Sweden)

    VILMA MINOVSKA

    2005-04-01

    Full Text Available Batch hydrolysis of olive oil was performed by Candida rugosa lipase immobilized on Amberlite IRC-50 and Al2O3. These two supports were selected out of 16 carriers: inorganic materials (sand, silica gel, infusorial earth, Al2O3), inorganic salts (CaCO3, CaSO4), ion-exchange resins (Amberlite IRC-50 and IR-4B, Dowex 2X8), a natural resin (colophony), a natural biopolymer (sodium alginate), synthetic polymers (polypropylene, polyethylene) and zeolites. Lipase immobilization was carried out by simple adsorption, adsorption followed by cross-linking, adsorption on ion-exchange resins, combined adsorption and precipitation, pure precipitation and gel entrapment. The suitability of the supports and techniques for the immobilization of lipase was evaluated by estimating the enzyme activity, protein loading, immobilization efficiency and reusability of the immobilizates. Most of the immobilizates exhibited either a low enzyme activity or difficulties during the hydrolytic reaction. Only those prepared by ionic adsorption on Amberlite IRC-50 and by combined adsorption and precipitation on Al2O3 showed better activity, 2000 and 430 U/g support, respectively, and demonstrated satisfactory behavior when used repeatedly. The hydrolysis was studied as a function of several parameters: surfactant concentration, enzyme concentration, pH and temperature. The immobilized preparation with Amberlite IRC-50 was stable and active over the whole range of pH (4 to 9) and temperature (20 to 50 °C), demonstrating a 99% degree of hydrolysis. In repeated usage, it was stable and active, having a half-life of 16 batches, which corresponds to an operation time of 384 h. Its storage stability was remarkable too, since after 9 months it had lost only 25% of the initial activity. The immobilizate with Al2O3 was less stable and less active. At optimal environmental conditions, the degree of hydrolysis did not exceed 79%. In repeated usage, after the fourth batch, the degree of

  16. Nuclear analytical techniques applied to the research on biokinetics of incorporated radionuclides for internal dosimetry

    International Nuclear Information System (INIS)

    Cantone, M.C.

    2005-01-01

    Full text: The presentation intends to discuss the contribution that techniques of analysis, based on activation analysis or mass spectrometry, can give to a very selected item of protection against ionizing radiation: the biokinetics of relevant elements. The assessment of radiation dose to body tissues, following intakes of radionuclides in occupational or accidental exposures, and environmental exposure in case of dispersion of radiocontaminants of potential concern, is essential to evaluate and manage the related radiological risk, including the decisions and actions to be undertaken. Internal dose is not directly measurable, and the International Commission on Radiological Protection (ICRP) has developed models which describe the behavior of the substances in the human body following their entry by inhalation or ingestion. Generally, all the available sources of information contribute to the modeling process, including studies on animals, use of chemical analogues and, obviously, direct information on humans, which is definitely the preferred source on which a biokinetic model can be based. Biokinetic data on humans are available for most of the biologically essential elements (Fe, Zn, Cu, Se), and for some elements the metabolic behavior is well known due to their use in clinical applications (I, Sr, Tc); moreover, research is in progress for non-essential alpha emitters. However, for a number of elements, including elements with radionuclides of radiological significance in case of environmental contamination (Ru, Zr, Ce, Te and Mo), human data are poor or missing and biokinetic parameters are essentially extrapolated from data on animals. The use of stable isotopes is a publicly well acceptable and ethically justifiable method, compared to the use of radioisotopes, when volunteer subjects are considered in the investigations.
The design of the investigation is based on the double tracer approach: one isotope is given orally and a second

  17. Desensitized Optimal Filtering and Sensor Fusion Tool Kit, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Research on desensitized optimal filtering techniques and a navigation and sensor fusion tool kit using advanced filtering techniques is proposed. Research focuses...

  18. Fielding the magnetically applied pressure-shear technique on the Z accelerator (completion report for MRT 4519).

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, C. Scott; Haill, Thomas A.; Dalton, Devon Gardner; Rovang, Dean Curtis; Lamppa, Derek C.

    2013-09-01

    The recently developed Magnetically Applied Pressure-Shear (MAPS) experimental technique to measure material shear strength at high pressures on magneto-hydrodynamic (MHD) drive pulsed power platforms was fielded on August 16, 2013 on shot Z2544 utilizing hardware set A0283A. Several technical and engineering challenges were overcome in the process leading to the attempt to measure the dynamic strength of NNSA Ta at 50 GPa. The MAPS technique relies on the ability to apply an external magnetic field properly aligned and time correlated with the MHD pulse. The load design had to be modified to accommodate the external field coils and additional support was required to manage stresses from the pulsed magnets. Further, this represents the first time transverse velocity interferometry has been applied to diagnose a shot at Z. All subsystems performed well with only minor issues related to the new feed design which can be easily addressed by modifying the current pulse shape. Despite the success of each new component, the experiment failed to measure strength in the samples due to spallation failure, most likely in the diamond anvils. To address this issue, hydrocode simulations are being used to evaluate a modified design using LiF windows to minimize tension in the diamond and prevent spall. Another option to eliminate the diamond material from the experiment is also being investigated.

  19. Structural, nanomechanical and variable range hopping conduction behavior of nanocrystalline carbon thin films deposited by the ambient environment assisted filtered cathodic jet carbon arc technique

    Energy Technology Data Exchange (ETDEWEB)

    Panwar, O.S., E-mail: ospanwar@mail.nplindia.ernet.in [Polymorphic Carbon Thin Films Group, Physics of Energy Harvesting Division, CSIR-National Physical Laboratory, Dr. K. S. Krishnan Road, New Delhi - 110 012 (India); Rawal, Ishpal; Tripathi, R.K. [Polymorphic Carbon Thin Films Group, Physics of Energy Harvesting Division, CSIR-National Physical Laboratory, Dr. K. S. Krishnan Road, New Delhi - 110 012 (India); Srivastava, A.K. [Electron and Ion Microscopy, Sophisticated and Analytical Instruments, CSIR-National Physical Laboratory, Dr. K. S. Krishnan Road, New Delhi - 110 012 (India); Kumar, Mahesh [Ultrafast Opto-Electronics and Tetrahertz Photonics Group, CSIR-National Physical Laboratory, Dr. K. S. Krishnan Road, New Delhi - 110 012 (India)

    2015-04-15

    Highlights: • Nanocrystalline carbon thin films are grown by the filtered cathodic jet carbon arc process. • The effect of gaseous environment on the properties of the carbon films has been studied. • The structural and nanomechanical properties of the carbon thin films have been studied. • The VRH conduction behavior in nanocrystalline carbon thin films has been studied. - Abstract: This paper reports the deposition and characterization of nanocrystalline carbon thin films by the filtered cathodic jet carbon arc technique assisted with three different gaseous environments: helium, nitrogen and hydrogen. All the films are nanocrystalline in nature, as observed from high resolution transmission electron microscopic (HRTEM) measurements, which suggest that nanocrystallites of size ∼10–50 nm are embedded throughout the amorphous matrix. X-ray photoelectron spectroscopic studies suggest that the film deposited under the nitrogen environment has the highest sp3/sp2 ratio, accompanied by the highest hardness of ∼18.34 GPa as observed by the nanoindentation technique. The film deposited under the helium environment has the highest ratio of the area under the Raman D peak to the G peak (AD/AG) and the highest room-temperature conductivity (∼2.23 S/cm), whereas the film deposited under the hydrogen environment has the lowest conductivity (2.27 × 10-7 S/cm). The temperature-dependent dc conduction behavior of all the nanocrystalline carbon thin films has been analyzed in the light of Mott's variable range hopping (VRH) conduction mechanism, and it is observed that all the films obey the three-dimensional VRH conduction mechanism for charge transport.
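
    The 3D Mott VRH analysis mentioned, with conductivity following sigma(T) = sigma0 * exp(-(T0/T)^(1/4)), amounts to a linear fit of ln(sigma) against T^(-1/4). A sketch of that fit (our own, not the authors' code):

```python
import numpy as np

def fit_mott_vrh_3d(T, sigma):
    """Fit conductivity data to the 3D Mott VRH law
    sigma(T) = sigma0 * exp(-(T0/T)**0.25) by linear regression of
    ln(sigma) against T**(-0.25). Returns (sigma0, T0)."""
    x = T ** -0.25
    slope, intercept = np.polyfit(x, np.log(sigma), 1)
    sigma0 = np.exp(intercept)
    T0 = slope ** 4  # slope = -T0**0.25, so T0 = slope**4
    return sigma0, T0
```

    A straight line in these coordinates over the measured temperature range is the usual evidence that the films follow three-dimensional VRH rather than, say, Arrhenius-activated transport.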

  20. Non-destructive electrochemical techniques applied to the corrosion evaluation of the liner structures in nuclear power plants

    International Nuclear Information System (INIS)

    Martinez, I.; Castillo, A.; Andrade, C.

    2008-01-01

    The liner structure in nuclear power plants provides containment during operation, and therefore the study of its durability and integrity over its service life is an important issue. There are several causes of deterioration of the liner, which in general involve corrosion due to its metallic nature. The present paper describes the assessment of corrosion problems of two liners from two different nuclear power plants, which were evaluated using non-destructive electrochemical techniques. In spite of the testing difficulties that arose, it can be concluded from the results that the electrochemical techniques applied are adequate for corrosion evaluation. They provide important information about the integrity of the structure and allow its evolution with time to be assessed

  1. Applying modern psychometric techniques to melodic discrimination testing: Item response theory, computerised adaptive testing, and automatic item generation.

    Science.gov (United States)

    Harrison, Peter M C; Collins, Tom; Müllensiefen, Daniel

    2017-06-15

    Modern psychometric theory provides many useful tools for ability testing, such as item response theory, computerised adaptive testing, and automatic item generation. However, these techniques have yet to be integrated into mainstream psychological practice. This is unfortunate, because modern psychometric techniques can bring many benefits, including sophisticated reliability measures, improved construct validity, avoidance of exposure effects, and improved efficiency. In the present research we therefore use these techniques to develop a new test of a well-studied psychological capacity: melodic discrimination, the ability to detect differences between melodies. We calibrate and validate this test in a series of studies. Studies 1 and 2 respectively calibrate and validate an initial test version, while Studies 3 and 4 calibrate and validate an updated test version incorporating additional easy items. The results support the new test's viability, with evidence for strong reliability and construct validity. We discuss how these modern psychometric techniques may also be profitably applied to other areas of music psychology and psychological science in general.
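
    For readers unfamiliar with the item-response-theory machinery referenced here, the core of a two-parameter logistic (2PL) model, together with the item-information function that drives computerised adaptive item selection, looks roughly like this. It is an illustrative sketch, not the authors' implementation.

```python
import math

def p_correct_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability that a respondent
    of ability theta answers an item with discrimination a and
    difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information_2pl(theta, a, b):
    """Fisher information of a 2PL item; it peaks when theta == b, so an
    adaptive test picks items whose difficulty matches the current
    ability estimate."""
    p = p_correct_2pl(theta, a, b)
    return a * a * p * (1.0 - p)
```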

  2. Pattern-recognition techniques applied to performance monitoring of the DSS 13 34-meter antenna control assembly

    Science.gov (United States)

    Mellstrom, J. A.; Smyth, P.

    1991-01-01

    The results of applying pattern recognition techniques to diagnose fault conditions in the pointing system of one of the Deep Space network's large antennas, the DSS 13 34-meter structure, are discussed. A previous article described an experiment whereby a neural network technique was used to identify fault classes by using data obtained from a simulation model of the Deep Space Network (DSN) 70-meter antenna system. Described here is the extension of these classification techniques to the analysis of real data from the field. The general architecture and philosophy of an autonomous monitoring paradigm is described and classification results are discussed and analyzed in this context. Key features of this approach include a probabilistic time-varying context model, the effective integration of signal processing and system identification techniques with pattern recognition algorithms, and the ability to calibrate the system given limited amounts of training data. Reported here are recognition accuracies in the 97 to 98 percent range for the particular fault classes included in the experiments.

  3. The Singular Value Filter: A General Filter Design Strategy for PCA-Based Signal Separation in Medical Ultrasound Imaging

    Science.gov (United States)

    Lin, Dan; Hossack, John A.

    2012-01-01

    A general filtering method, called the singular value filter (SVF), is presented as a framework for principal component analysis (PCA) based filter design in medical ultrasound imaging. The SVF approach operates by projecting the original data onto a new set of bases determined from PCA using singular value decomposition (SVD). The shape of the SVF weighting function, which relates the singular value spectrum of the input data to the filtering coefficients assigned to each basis function, is designed in accordance with a signal model and statistical assumptions regarding the underlying source signals. In this paper, we applied SVF to the specific application of clutter artifact rejection in diagnostic ultrasound imaging. SVF was compared to a conventional PCA-based filtering technique, which we refer to as the blind source separation (BSS) method, as well as a simple frequency-based finite impulse response (FIR) filter used as a baseline for comparison. The performance of each filter was quantified in simulated lesion images as well as experimental cardiac ultrasound data. SVF was demonstrated, in both simulation and experimental results over a wide range of imaging conditions, to outperform the BSS and FIR filtering methods in terms of contrast-to-noise ratio (CNR) and motion tracking performance. In experimental mouse heart data, SVF provided excellent artifact suppression, with an average CNR improvement of 1.8 dB (statistically significant). Filtering was achieved using complex pulse-echo received data and non-binary filter coefficients. PMID:21693416
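
    The SVF idea, projecting onto PCA bases obtained from an SVD and reweighting each basis by a function of its singular value, can be sketched as follows. The data layout and weighting function here are illustrative; the paper designs the weighting from a signal model rather than choosing it ad hoc.

```python
import numpy as np

def svd_filter(X, weight_fn):
    """Project a 2D data matrix (e.g. slow-time x depth samples) onto
    its SVD bases and scale each basis by weight_fn(singular_value),
    in the spirit of singular-value-filter clutter rejection.
    weight_fn maps a singular value to a coefficient in [0, 1]."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = np.array([weight_fn(v) for v in s])
    return (U * (w * s)) @ Vt  # reweighted reconstruction
```

    A hard cutoff (weight 0 for the largest singular values, 1 otherwise) recovers conventional PCA clutter rejection; the non-binary weights described in the abstract correspond to a smooth weight_fn.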

  4. Applying a nonlinear, pitch-catch, ultrasonic technique for the detection of kissing bonds in friction stir welds.

    Science.gov (United States)

    Delrue, Steven; Tabatabaeipour, Morteza; Hettler, Jan; Van Den Abeele, Koen

    2016-05-01

    Friction stir welding (FSW) is a promising technology for the joining of aluminum alloys and other metallic admixtures that are hard to weld by conventional fusion welding. Although FSW generally provides better fatigue properties than traditional fusion welding methods, fatigue properties are still significantly lower than for the base material. Apart from voids, kissing bonds, for instance in the form of closed cracks propagating along the interface of the stirred and heat-affected zone, are inherent features of the weld and can be considered one of the main causes of the reduced fatigue life of FSW joints in comparison to the base material. The main problem with kissing bond defects in FSW is that they are currently very difficult to detect using existing NDT methods. Besides, in most cases the defects are not directly accessible from the exposed surface. Therefore, new techniques capable of detecting small kissing bond flaws need to be introduced. In the present paper, a novel and practical approach is introduced based on a nonlinear, single-sided, ultrasonic technique. The proposed inspection technique uses two single-element transducers, with the first transducer transmitting an ultrasonic signal that focuses the ultrasonic waves at the bottom side of the sample, where cracks are most likely to occur. The large amount of energy at the focus activates the kissing bond, resulting in the generation of nonlinear features in the wave propagation. These nonlinear features are then captured by the second transducer operating in pitch-catch mode, and are analyzed, using pulse inversion, to reveal the presence of a defect. The performance of the proposed nonlinear pitch-catch technique is first illustrated using a numerical study of an aluminum sample containing simple, vertically oriented, incipient cracks. Later, the proposed technique is also applied experimentally on a real-life friction stir welded butt joint containing a kissing bond flaw. Copyright © 2016
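
    The pulse-inversion analysis step can be illustrated in a few lines: summing the responses to a pulse and its inverted copy cancels the linear (and all odd-order) components and leaves even-order nonlinearity, the signature of a contact-type defect such as a kissing bond. This is a toy model of the signal processing only; the paper applies it to focused transducer data from real welds.

```python
import numpy as np

def pulse_inversion_response(system, pulse):
    """Drive a system with a pulse and with its inverted copy and sum
    the two responses. Linear terms cancel; even-order nonlinearity
    (e.g. from a clapping 'kissing bond' interface) survives."""
    return system(pulse) + system(-pulse)
```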

  5. Approximately Linear Phase IIR Digital Filter Banks

    Directory of Open Access Journals (Sweden)

    J. D. Ćertić

    2013-11-01

    Full Text Available In this paper, uniform and nonuniform digital filter banks based on approximately linear phase IIR filters and the frequency response masking (FRM) technique are presented. Both filter banks are realized as a connection of an interpolated half-band approximately linear phase IIR filter, as the first stage of the FRM design, and an appropriate number of masking filters. The masking filters are half-band IIR filters with an approximately linear phase. The resulting IIR filter banks are compared with linear-phase FIR filter banks exhibiting similar magnitude responses. The effects of coefficient quantization are analyzed.
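    The FRM construction described above (an interpolated model filter cascaded with a masking filter) can be sketched numerically. This is a minimal illustration assuming an elliptic IIR prototype and, for brevity, an FIR masking filter rather than the half-band IIR masks used in the paper; all parameter values are illustrative.

```python
import numpy as np
from scipy import signal

# Prototype low-pass IIR filter (stands in for the half-band model filter)
M = 4                                           # interpolation factor
b, a = signal.ellip(5, 0.1, 60, 0.5)            # 0.1 dB ripple, 60 dB stopband

# Interpolate by M: replace z with z^M, i.e. insert M-1 zeros between coefficients
bM = np.zeros((len(b) - 1) * M + 1); bM[::M] = b
aM = np.zeros((len(a) - 1) * M + 1); aM[::M] = a

# Masking filter selects one image of the periodic interpolated response
h_mask = signal.firwin(41, 1.0 / M)             # FIR mask, for brevity only

w, H_int = signal.freqz(bM, aM, worN=1024)
_, H_mask = signal.freqz(h_mask, 1, worN=1024)
H_frm = H_int * H_mask                          # response of the cascaded FRM branch
```

    The interpolated filter provides sharp transition bands at low arithmetic cost, while the (much less selective) mask removes the unwanted spectral images.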

  6. 2D and 3D optical diagnostic techniques applied to Madonna dei Fusi by Leonardo da Vinci

    Science.gov (United States)

    Fontana, R.; Gambino, M. C.; Greco, M.; Marras, L.; Materazzi, M.; Pampaloni, E.; Pelagotti, A.; Pezzati, L.; Poggi, P.; Sanapo, C.

    2005-06-01

    3D measurement and modelling have traditionally been applied to statues, buildings, archaeological sites or similar large structures, but rarely to paintings. Recently, however, 3D measurements have been performed successfully also on easel paintings, making it possible to detect and document the painting's surface. We used 3D models to integrate the results of various 2D imaging techniques on a common reference frame. These applications show how 3D shape information, complemented with 2D colour maps as well as with other types of sensory data, provides the most interesting information. The 3D data acquisition was carried out by means of two devices: a high-resolution laser micro-profilometer, composed of a commercial distance meter mounted on a scanning device, and a laser-line scanner. The 2D data acquisitions were carried out using a scanning device for simultaneous RGB colour imaging and IR reflectography, and a UV fluorescence multispectral image acquisition system. We present here the results of the techniques described, applied to the analysis of an important painting of the Italian Renaissance: `Madonna dei Fusi', attributed to Leonardo da Vinci.

  7. Beyond Astro 101: A First Report on Applying Interactive Education Techniques to an Astrophysics Class for Majors

    Science.gov (United States)

    Perrin, Marshall D.; Ghez, A. M.

    2009-05-01

    Learner-centered interactive instruction methods now have a proven track record of improving learning in "Astro 101" courses for non-majors, but have rarely been applied to higher-level astronomy courses. Can we hope for similar gains in classes aimed at astrophysics majors, or is the subject matter too fundamentally different for those techniques to apply? We present here an initial report on an updated calculus-based Introduction to Astrophysics class at UCLA which suggests that such techniques can indeed result in increased learning for majors. We augmented the traditional blackboard-derivation lectures and challenging weekly problem sets by adding online questions on pre-reading assignments ("just-in-time teaching") and frequent multiple-choice questions in class ("Think-Pair-Share"). We describe our approach, and present examples of the new Think-Pair-Share questions developed for this more sophisticated material. Our informal observations after one term are that, with this approach, students are more engaged and alert, and score higher on exams than was typical in previous years. This is anecdotal evidence, not hard data yet, and there is clearly a vast amount of work to be done in this area. But our first impressions strongly encourage us that interactive methods should be able to improve the astrophysics major just as they have improved Astro 101.

  8. Mechanistic study of the accelerated crucible rotation technique applied to vertical Bridgman growth of cadmium zinc telluride

    Science.gov (United States)

    Divecha, Mia S.; Derby, Jeffrey J.

    2017-08-01

    With cadmium zinc telluride's (CZT) success as a gamma and x-ray detector material, there is need for high-quality, monocrystalline CZT in large volumes. Bridgman and gradient freeze growth methods have consistently produced material containing significant amounts of micron-sized, tellurium-rich inclusions, which are detrimental to device performance. These inclusions are believed to arise from a morphological instability of the growth interface driven by constitutional undercooling. Repeatedly rotating the crucible back and forth via the accelerated crucible rotation technique (ACRT) has been shown to reduce the size and number of inclusions. Via numerical techniques, we analyze the impact of two different applied temperature gradients, 10 K/cm and 30 K/cm, on the flow, temperature, tellurium distribution, and undercooling during growth with and without applied ACRT. Under growth without rotation, a higher axial thermal gradient results in stronger thermal-buoyancy driven flows, faster interface growth velocity, greater tellurium segregation, and stronger undercooling. ACRT improves the stability of the growth interfaces for both systems; however, contrary to conventional wisdom, the case of the shallow thermal gradient is predicted to exhibit a more stable growth interface, which may result in fewer inclusions and higher quality material.
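    The back-and-forth crucible rotation at the heart of ACRT can be written down as a simple velocity schedule. The following is an idealized sketch of a symmetric accelerate/decelerate cycle; the rotation rate, period, and triangular waveform are illustrative assumptions, not values from the study.

```python
import numpy as np

def acrt_profile(t, w_max=10.0, period=40.0):
    """Idealized ACRT schedule: crucible angular velocity (rpm) ramps
    linearly between +w_max and -w_max over each cycle.  Parameter
    values are illustrative only, not taken from the paper."""
    phase = (np.asarray(t) % period) / period
    tri = 4.0 * np.abs(phase - 0.5) - 1.0    # triangle wave in [-1, 1]
    return w_max * tri
```

    Periodically reversing the rotation in this way drives Ekman pumping in the melt, which is the mixing mechanism credited with stabilizing the growth interface.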

  9. [Estimation of a nationwide statistics of hernia operation applying data mining technique to the National Health Insurance Database].

    Science.gov (United States)

    Kang, Sunghong; Seon, Seok Kyung; Yang, Yeong-Ja; Lee, Aekyung; Bae, Jong-Myon

    2006-09-01

    The aim of this study is to develop a methodology for estimating a nationwide statistic for hernia operations using the claim database of the Korea Health Insurance Corporation (KHIC). According to the insurance claim procedures, the claim database was divided into the electronic data interchange database (EDI_DB) and the sheet database (Paper_DB). Although the EDI_DB has operation and management codes showing the facts and kinds of operations, the Paper_DB does not. Using the hernia-matched management code in the EDI_DB, the cases of hernia surgery were extracted. For drawing the potential cases from the Paper_DB, which lacks the code, a predictive model was developed using the data mining technique called SEMMA. The claim sheets of the cases that showed a predictive probability of an operation over the threshold, as decided by the ROC curve, were identified in order to obtain the positive predictive value as an index of usefulness for the predictive model. Of the claim databases in 2004, 14,386 cases had hernia-related management codes in the EDI system. For fitting the models with the data mining technique, logistic regression was chosen rather than the neural network method or the decision tree method. From the Paper_DB, 1,019 cases were extracted as potential cases. Direct review of the sheets of the extracted cases showed that the positive predictive value was 95.3%. The results suggest that applying the data mining technique to the claim database of the KHIC to estimate nationwide surgical statistics would be useful in terms of feasibility and cost-effectiveness.
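    The predictive-model step described above — fit a logistic regression, pick a probability threshold from the ROC curve, then check the positive predictive value of the extracted cases — can be sketched as follows. The data here are synthetic stand-ins (the actual claim-sheet predictors are not public), and the Youden-J threshold rule is one common choice, not necessarily the one used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for claim-sheet features
n, p = 2000, 5
X = rng.normal(size=(n, p))
true_w = np.array([2.0, -1.5, 1.0, 0.0, 0.5])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(X @ true_w)))).astype(float)
X_tr, y_tr, X_te, y_te = X[:1000], y[:1000], X[1000:], y[1000:]

# Fit logistic regression by plain gradient descent
w = np.zeros(p)
for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-(X_tr @ w)))
    w -= 0.1 * (X_tr.T @ (pred - y_tr)) / len(y_tr)

score = 1.0 / (1.0 + np.exp(-(X_te @ w)))

# Choose the operating threshold from the ROC curve (Youden's J statistic)
best_t, best_j = 0.5, -1.0
for t in np.unique(score):
    flag = score >= t
    tpr, fpr = np.mean(flag[y_te == 1]), np.mean(flag[y_te == 0])
    if tpr - fpr > best_j:
        best_j, best_t = tpr - fpr, t

flagged = score >= best_t
ppv = y_te[flagged].mean()   # positive predictive value of the extracted cases
```

    In the study, cases flagged by the model were reviewed by hand, and this PPV served as the index of the model's usefulness.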

  10. Evaluation of bond strength and thickness of adhesive layer according to the techniques of applying adhesives in composite resin restorations.

    Science.gov (United States)

    de Menezes, Fernando Carlos Hueb; da Silva, Stella Borges; Valentino, Thiago Assunção; Oliveira, Maria Angélica Hueb de Menezes; Rastelli, Alessandra Nara de Souza; Gonçalves, Luciano de Souza

    2013-01-01

    Adhesive restorations have increasingly been used in dentistry, and the adhesive system application technique may determine the success of the restorative procedure. The aim of this study was to evaluate the influence of the application technique of two adhesive systems (Clearfil SE Bond and Adper Scotchbond MultiPurpose) on the bond strength and adhesive layer of composite resin restorations. Eight human third molars were selected and prepared with Class I occlusal cavities. The teeth were restored with composite using various application techniques for both adhesives, according to the following groups (n = 10): group 1 (control), the systems were applied and the adhesive was immediately light activated for 20 seconds without removing excesses; group 2, excess adhesive was removed with a gentle jet of air for 5 seconds; group 3, excess was removed with a dry microbrush-type device; and group 4, a gentle jet of air was applied after the microbrush and then light activation was performed. After this, the teeth were submitted to microtensile testing. For the two systems tested, no statistical differences were observed between groups 1 and 2. Groups 3 and 4 presented higher bond strength values compared with the other groups, allowing the conclusion that excess adhesive removal with a dry microbrush could improve bond strength in composite restorations. A predominance of adhesive fracture and a thicker adhesive layer were observed via scanning electron microscopy (SEM) in groups 1 and 2. For groups 3 and 4, a mixed failure pattern and a thinner adhesive layer were verified. Clinicians should be aware that excess adhesive may negatively affect bond strength, whereas a thin, uniform adhesive layer appears to be favorable.

  11. Applied research on air pollution using nuclear-related analytical techniques. Report on the second research co-ordination meeting

    International Nuclear Information System (INIS)

    1995-01-01

    A co-ordinated research programme (CRP) on applied research on air pollution using nuclear-related techniques is a global CRP which started in 1992, and is scheduled to run until early 1997. The purpose of this CRP is to promote the use of nuclear analytical techniques in air pollution studies, e.g. NAA, XRF, and PIXE for the analysis of toxic and other trace elements in air particulate matter. The main purposes of the core programme are i) to support the use of nuclear and nuclear-related analytical techniques for research and monitoring studies on air pollution, ii) to identify major sources of air pollution affecting each of the participating countries with particular reference to toxic heavy metals, and iii) to obtain comparative data on pollution levels in areas of high pollution (e.g. a city centre or a populated area downwind of a large pollution source) and low pollution (e.g. rural area). This document reports the discussions held during the second Research Co-ordination Meeting (RCM) for the CRP which took place at ANSTO in Menai, Australia. (author)

  12. Independent task Fourier filters

    Science.gov (United States)

    Caulfield, H. John

    2001-11-01

    Since the early 1960s, a major part of optical computing systems has been Fourier pattern recognition, which takes advantage of high speed filter changes to enable powerful nonlinear discrimination in `real time.' Because each filter has a task quite independent of the tasks of the other filters, they can be applied and evaluated in parallel or, in a simple approach I describe, in sequence very rapidly. Thus I use the name ITFF (independent task Fourier filter). These filters can also break very complex discrimination tasks into easily handled parts, so the wonderful space invariance properties of Fourier filtering need not be sacrificed to achieve high discrimination and good generalizability even for ultracomplex discrimination problems. The training procedure proceeds sequentially, as the task for a given filter is defined a posteriori by declaring it to be the discrimination of particular members of set A from all members of set B with sufficient margin. That is, we set the threshold to achieve the desired margin and note the A members discriminated by that threshold. Discriminating those A members from all members of B becomes the task of that filter. Those A members are then removed from set A, so no other filter will be asked to perform that already accomplished task.
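    The sequential task-assignment loop described above can be sketched in code. This is a schematic with a toy scalar "filter"; `make_filter` is a hypothetical placeholder for the actual correlation-filter design step, which the abstract does not specify.

```python
def train_itff(A, B, make_filter, margin):
    """Assign tasks sequentially: each filter takes the remaining A members
    it separates from all of B with the required margin; those members are
    then removed from A."""
    filters, remaining = [], list(A)
    while remaining:
        f = make_filter(remaining, B)
        threshold = max(f(x) for x in B) + margin   # margin over every B member
        captured = [x for x in remaining if f(x) >= threshold]
        if not captured:                            # remaining A not separable
            break
        filters.append((f, threshold))
        remaining = [x for x in remaining if f(x) < threshold]
    return filters, remaining

# Toy example: a trivial "filter" that scores each sample by its own value
filters, leftover = train_itff(
    A=[5.0, 6.0, 2.5], B=[0.0, 1.0, 2.0],
    make_filter=lambda A, B: (lambda x: x), margin=0.5)
```

    Because each filter's task is fixed once its threshold is set, the trained filters can later be evaluated independently, in parallel or in rapid sequence.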

  13. A technique for extraction and Thin Layer Chromatography visualization of fecal bile acids applied to neotropical felid scats

    Directory of Open Access Journals (Sweden)

    Ada Virginia Cazón Narvaez

    1999-06-01

    Full Text Available Fecal bile acid patterns have been used successfully to identify scats. Neotropical felid scats are amenable to this biochemical identification because they present low concentrations of plant pigments that would otherwise interfere with fecal bile acid detection. However, neotropical felid scats contain only small quantities of bile acids, so in this work we developed a suitable technique for their extraction, visualization and determination. Twenty-eight feces from seven different felid species, collected from zoological and wildlife parks, were dried and pulverized. The procedure for analyzing feces is: take one g of pulverized feces and shake for 3 hr at room temperature in 20 ml benzene:methanol; filter and evaporate to 5 ml. Spot on a TLC plate and develop in toluene:acetic acid:water. Dry and visualize with anisaldehyde. Field-collected scats could be identified by the bile acid pattern revealed by this specific technique and then used as a source of information for distribution, density and food habit studies.

  14. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    International Nuclear Information System (INIS)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-01

    A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with similar dimensions as a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that was then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux of all four cases. The error analysis was performed by comparing the results from SODDIT and the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in the error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively

  15. Wien filter

    NARCIS (Netherlands)

    Mook, H.W.

    1999-01-01

    The invention relates to a Wien filter provided with electrodes for generating an electric field, and magnetic poles for generating a magnetic field, said electrodes and magnetic poles being positioned around, and having a finite length along, the filter axis.

  16. Rectifier Filters

    Directory of Open Access Journals (Sweden)

    Y. A. Bladyko

    2010-01-01

    Full Text Available The paper contains the definition of a smoothing factor that is suitable for any rectifier filter. Formulae for complex smoothing factors have been developed for simple and complex passive filters. The paper also states the conditions under which the calculation formulae and filters apply.
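    As a concrete point of orientation, the textbook idealized smoothing factor of a single L-C filter section is q = ω²LC − 1 at the ripple angular frequency ω. This standard formula (component losses and load effects neglected) is given here for illustration and is not taken from the paper, which develops more general expressions.

```python
import math

def smoothing_factor_lc(L, C, f_ripple):
    """Idealized smoothing factor q = w^2*L*C - 1 of a single L-C
    section at ripple frequency f_ripple (Hz); L in henries, C in farads.
    Textbook approximation, neglecting losses and load effects."""
    w = 2.0 * math.pi * f_ripple
    return w * w * L * C - 1.0

# Example: 10 H choke, 100 uF capacitor, 100 Hz ripple (full-wave, 50 Hz mains)
q = smoothing_factor_lc(10.0, 100e-6, 100.0)
```

    A smoothing factor of several hundred, as here, means the output ripple is a few hundred times smaller than the ripple at the filter input.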

  17. FILTER TREATMENT

    Science.gov (United States)

    Sutton, J.B.; Torrey, J.V.P.

    1958-08-26

    A process is described for reconditioning fused alumina filters which have become clogged by the accretion of bismuth phosphate in the filter pores. The method consists of contacting such filters with fuming sulfuric acid, and maintaining such contact for a substantial period of time.

  18. Comparison between 3D dynamic filter technique, field-in-field, electronic compensator in breast cancer; Comparacao entre tecnica 3D com filtro dinamico, field-in-field e compensacao eletronica para cancer de mama

    Energy Technology Data Exchange (ETDEWEB)

    Trindade, Cassia; Silva, Leonardo P.; Martins, Lais P.; Garcia, Paulo L.; Santos, Maira R.; Bastista, Delano V.S.; Vieira, Anna Myrian M.T.L.; Rocha, Igor M., E-mail: cassiatr@gmail.com [Instituto Nacional de Cancer (INCA), Rio de Janeiro, RJ (Brazil)

    2012-12-15

    Radiotherapy has been used on a wide scale in breast cancer treatment. With this high demand, new technologies have been developed to improve the dose distribution in the target while reducing the dose delivered to critical organs. In this study, performed on one clinical case, three plans were created for comparison: a 3D technique with dynamic filter, a 3D field-in-field technique (forward-planned IMRT), and a 3D technique using an electronic compensator (ECOMP). The plans were generated with a 6 MV photon beam using the Eclipse software, version 8.6 (Varian Medical Systems). The PTV was drawn covering the whole breast, and the critical organs were the lung on the irradiated side, the heart, the contralateral breast, and the anterior descending coronary artery (LAD). The plan using the electronic compensator technique gave the most homogeneous dose distribution in the target volume. The V20 of the lung on the irradiated side was 8.3% for the electronic compensator technique, 8.9% for the field-in-field technique and 8.2% for the dynamic filter technique. For the heart, the dose range was 15.7 - 139.9 cGy for the electronic compensator, 16.3 - 148.4 cGy for the dynamic filter technique and 19.6 - 157.0 cGy for the field-in-field technique. The dose gradient was 11% with the electronic compensator, 15% with the dynamic filter technique and 13% with field-in-field. The application of the electronic compensator technique in breast cancer treatment allows a better dose distribution while reducing the dose to critical organs, but at the same time it requires quality assurance. (author)
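    The plan metrics quoted above (V20 of the ipsilateral lung, dose gradient in the target) are straightforward to compute from voxel dose data. The sketch below uses randomly generated stand-in doses, since the study's dose-volume data are not reproduced here; the distributions and the 50 Gy prescription are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical voxel doses (cGy): stand-ins for the study's dose grids
lung_dose = rng.gamma(shape=1.5, scale=600.0, size=100_000)   # ipsilateral lung
ptv_dose = rng.normal(loc=5000.0, scale=100.0, size=50_000)   # breast PTV

# V20: percentage of lung volume receiving at least 20 Gy (2000 cGy)
v20 = 100.0 * np.mean(lung_dose >= 2000.0)

# Dose gradient across the PTV, relative to an assumed 50 Gy prescription
gradient = 100.0 * (ptv_dose.max() - ptv_dose.min()) / 5000.0
```

    In clinical practice these quantities are read from the plan's dose-volume histogram; the calculation itself is just thresholding and min/max over the voxel doses, as shown.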

  19. Validation of the k-filtering technique for a signal composed of random-phase plane waves and non-random coherent structures

    Directory of Open Access Journals (Sweden)

    O. W. Roberts

    2014-12-01

    Full Text Available Recent observations of astrophysical magnetic fields have shown the presence of fluctuations that are wave-like (propagating in the plasma frame) and those described as structure-like (advected by the plasma bulk velocity). Typically, with single-spacecraft missions it is impossible to differentiate between these two kinds of fluctuation, due to the inherent spatio-temporal ambiguity associated with a single-point measurement. However, missions such as Cluster, which contain multiple spacecraft, have allowed temporal and spatial changes to be resolved, using techniques such as k-filtering. While this technique does not assume Taylor's hypothesis, it requires both weak stationarity of the time series and that the fluctuations can be described by a superposition of plane waves with random phases. In this paper we test whether the method can cope with a synthetic signal which is composed of a combination of non-random-phase coherent structures with a mean radius d and a mean separation λ, as well as plane waves with random phase.
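    A one-dimensional analogue of such a synthetic test signal — random-phase plane waves superposed with coherent structures of mean radius d and mean separation λ — can be generated as below. This is a simplified single-series sketch for intuition only; the paper's validation uses multi-point (multi-spacecraft) fields, and all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 4096)

# Wave-like part: superposition of plane waves with random phases
freqs = (0.5, 1.2, 3.1)
waves = sum(np.cos(2.0 * np.pi * f * t + rng.uniform(0.0, 2.0 * np.pi))
            for f in freqs)

# Structure-like part: coherent (non-random-phase) Gaussian pulses of
# mean radius d, repeating with mean separation lam
d, lam = 1.0, 10.0
centers = np.arange(5.0, 100.0, lam)
structures = sum(np.exp(-(((t - c) / d) ** 2)) for c in centers)

synthetic = waves + structures
```

    The wave component satisfies the random-phase assumption behind k-filtering, while the pulse train deliberately violates it, which is exactly the tension the paper sets out to test.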

  20. Hybrid Filter Membrane

    Science.gov (United States)

    Laicer, Castro; Rasimick, Brian; Green, Zachary

    2012-01-01

    Cabin environmental control is an important issue for a successful Moon mission. Due to the unique environment of the Moon, lunar dust control is one of the main problems that significantly diminishes the air quality inside spacecraft cabins. Therefore, this innovation was motivated by NASA's need to minimize the negative health impact that air-suspended lunar dust particles have on astronauts in spacecraft cabins. It is based on fabrication of a hybrid filter comprising nanofiber nonwoven layers coated on porous polymer membranes with uniform cylindrical pores. This design results in a high-efficiency gas particulate filter with low pressure drop and the ability to be easily regenerated to restore filtration performance. A hybrid filter was developed consisting of a porous membrane with uniform, micron-sized, cylindrical pore channels coated with a thin nanofiber layer. Compared to conventional filter media such as a high-efficiency particulate air (HEPA) filter, this filter is designed to provide high particle efficiency, low pressure drop, and the ability to be regenerated. These membranes have well-defined micron-sized pores and can be used independently as air filters with a discrete particle-size cut-off, or coated with nanofiber layers for filtration of ultrafine nanoscale particles. The filter consists of a thin design intended to facilitate filter regeneration by localized air pulsing. The main feature of this invention is the combination of a micro-engineered straight-pore membrane with nanofibers. The micro-engineered straight-pore membrane can be prepared with extremely high precision. Because the resulting membrane pores are straight and not tortuous like those found in conventional filters, the pressure drop across the filter is significantly reduced. The nanofiber layer is applied as a very thin coating to enhance filtration efficiency for fine nanoscale particles.
Additionally, the thin nanofiber coating is designed to promote capture of