WorldWideScience

Sample records for adaptive techniques applied

  1. Applying perceptual and adaptive learning techniques for teaching introductory histopathology

    Sally Krasne

    2013-01-01

Full Text Available Background: Medical students are expected to master the ability to interpret histopathologic images, a difficult and time-consuming process. A major problem is the issue of transferring information learned from one example of a particular pathology to a new example. Recent advances in cognitive science have identified new approaches to address this problem. Methods: We adapted a new approach for enhancing pattern recognition of basic pathologic processes in skin histopathology images that utilizes perceptual learning techniques, allowing learners to see relevant structure in novel cases, along with adaptive learning algorithms that space and sequence the different categories (e.g., diagnoses) that appear during a learning session based on each learner's accuracy and response time (RT). We developed a perceptual and adaptive learning module (PALM) that utilized 261 unique images of cell injury, inflammation, neoplasia, or normal histology at low and high magnification. Accuracy and RT were tracked and integrated into a "Score" that reflected students' rapid recognition of the pathologies, and pre- and post-tests were given to assess effectiveness. Results: Accuracy, RT and Scores improved significantly from the pre- to post-test, with Scores showing much greater improvement than accuracy alone. Delayed post-tests with previously unseen cases, given after 6-7 weeks, showed a decline in accuracy relative to the post-test for 1st-year students, but not significantly so for 2nd-year students. However, the delayed post-test scores maintained a significant and large improvement relative to those of the pre-test for both 1st- and 2nd-year students, suggesting good retention of pattern recognition. Student evaluations were very favorable. Conclusion: A web-based learning module based on the principles of cognitive science showed evidence of improved recognition of histopathology patterns by medical students.
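The spacing-and-sequencing idea described above can be sketched as a simple priority rule: categories the learner answers slowly or inaccurately, and categories not shown recently, get picked first. The weighting below is a hypothetical illustration, not the published PALM algorithm.

```python
def next_category(stats, now):
    """Pick the next category to present (hypothetical weighting).

    stats maps category name -> {"accuracy": fraction correct,
    "mean_rt": mean response time in seconds,
    "last_shown": trial index when last presented}.
    """
    best, best_score = None, float("-inf")
    for cat, s in stats.items():
        delay = now - s["last_shown"]
        # low accuracy and high RT raise priority; elapsed trials
        # since the last presentation add spacing pressure
        score = (1.0 - s["accuracy"]) + 0.1 * s["mean_rt"] + 0.05 * delay
        if score > best_score:
            best, best_score = cat, score
    return best
```

A real adaptive module would also update these statistics after every response and retire categories once a mastery criterion is met.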

  2. Acceptance and Mindfulness Techniques as Applied to Refugee and Ethnic Minority Populations with PTSD: Examples from "Culturally Adapted CBT"

    Hinton, Devon E.; Pich, Vuth; Hofmann, Stefan G.; Otto, Michael W.

    2013-01-01

    In this article we illustrate how we utilize acceptance and mindfulness techniques in our treatment (Culturally Adapted CBT, or CA-CBT) for traumatized refugees and ethnic minority populations. We present a Nodal Network Model (NNM) of Affect to explain the treatment's emphasis on body-centered mindfulness techniques and its focus on psychological…

  3. Applying contemporary statistical techniques

    Wilcox, Rand R

    2003-01-01

Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible. * Assumes no previous training in statistics * Explains how and why modern statistical methods provide more accurate results than conventional methods * Covers the latest developments on multiple comparisons * Includes recent advances…

  4. Applied ALARA techniques

The presentation focuses on some of the time-proven and new technologies being used to accomplish radiological work. These techniques can be applied at nuclear facilities to reduce radiation doses and protect the environment. The last reactor plants and processing facilities were shut down and Hanford was given a new mission to put the facilities in a safe condition, decontaminate, and prepare them for decommissioning. The skills that were necessary to operate these facilities were different than the skills needed today to clean up Hanford. Workers were not familiar with many of the tools, equipment, and materials needed to accomplish the new mission, which includes clean up of contaminated areas in and around all the facilities, recovery of reactor fuel from spent fuel pools, and the removal of millions of gallons of highly radioactive waste from 177 underground tanks. In addition, this work has to be done with a reduced number of workers and a smaller budget. At Hanford, facilities contain a myriad of radioactive isotopes that are located inside plant systems, underground tanks, and the soil. As cleanup work at Hanford began, it became obvious early that in order to get workers to apply ALARA and use new tools and equipment to accomplish the radiological work it was necessary to plan the work in advance and get radiological control and/or ALARA committee personnel involved early in the planning process. Emphasis was placed on applying ALARA techniques to reduce dose, limit contamination spread and minimize the amount of radioactive waste generated. Progress on the cleanup has been steady and Hanford workers have learned to use different types of engineered controls and ALARA techniques to perform radiological work. The purpose of this presentation is to share the lessons learned on how Hanford is accomplishing radiological work.

  5. Adaptive cancellation techniques

    1983-11-01

An adaptive signal canceller has been evaluated for the enhancement of pulse signal reception during the transmission of a high power ECM jamming signal. The canceller design is based on the use of DRFM (Digital RF Memory) technology as part of an adaptive multiple tapped delay line. The study includes analysis of the relationship between tap spacing and waveform bandwidth, a survey of related documents in the areas of sidelobe cancellers, transversal equalizers, and adaptive filters, and the derivation of control equations and corresponding control processes. The simulation of the overall process included geometric analysis of the multibeam transmitting antenna, multiple reflection sources and the receiving antenna; waveforms, tap spacings and bandwidths; and alternate control algorithms. Conclusions are provided regarding practical system control algorithms, design characteristics and limitations.
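The adaptive tapped-delay-line idea can be sketched with a standard LMS update: the filter taps are driven by the residual so that the jamming leakage is subtracted from the received channel. This is an illustrative stand-in, not the control equations derived in the report.

```python
import numpy as np

def lms_canceller(received, reference, n_taps=8, mu=0.01):
    """LMS adaptive tapped-delay-line canceller (illustrative sketch).

    received:  desired-channel samples containing the jamming leakage
    reference: samples of the transmitted jamming signal
    Returns the residual after the adapted filter output is subtracted.
    """
    w = np.zeros(n_taps)
    residual = np.zeros(len(received))
    for n in range(len(received)):
        # most recent n_taps reference samples, newest first
        x = reference[max(0, n - n_taps + 1):n + 1][::-1]
        x = np.pad(x, (0, n_taps - len(x)))  # zero-pad at the start of the run
        y = w @ x                  # canceller output
        e = received[n] - y        # residual error drives adaptation
        w += mu * e * x            # LMS weight update
        residual[n] = e
    return residual
```

After convergence the residual retains only the components of the received signal that are uncorrelated with the jamming reference, which is what lets a weak pulse survive the cancellation.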

  6. Techniques of English Textbooks Adaptation

    张婧雯; 杨竞欧

    2014-01-01

This essay aims to help English teachers evaluate and adapt the current English textbooks. According to the different levels and majors of their students, English teachers can enhance the teaching materials and their own teaching skills. This paper provides several useful techniques for teachers to make evaluations and adaptations of teaching materials.

  7. Adaptive Educational Software by Applying Reinforcement Learning

    Abdellah BENNANE

    2013-01-01

The introduction of intelligence into teaching software is the object of this paper. In the software elaboration process, one uses learning techniques in order to adapt the teaching software to the characteristics of the student. Generally, one uses artificial intelligence techniques such as reinforcement learning or Bayesian networks in order to adapt the system to internal and external environmental conditions, and to allow the system to interact efficiently with its potential users. The intention...

  8. Adaptive Control Applied to Financial Market Data

    Šindelář, Jan; Kárný, Miroslav

    Strasbourg cedex: European Science Foundation, 2007, s. 1-6. [Advanced Mathematical Methods for Finance. Vídeň (AT), 17.09.2007-22.09.2007] R&D Projects: GA MŠk(CZ) 2C06001 Institutional research plan: CEZ:AV0Z10750506 Keywords : bayesian statistics * portfolio optimization * finance * adaptive control Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2007/si/sindelar-adaptive control applied to financial market data.pdf

  9. Adaptive Control Using Residual Mode Filters Applied to Wind Turbines

    Frost, Susan A.; Balas, Mark J.

    2011-01-01

    Many dynamic systems containing a large number of modes can benefit from adaptive control techniques, which are well suited to applications that have unknown parameters and poorly known operating conditions. In this paper, we focus on a model reference direct adaptive control approach that has been extended to handle adaptive rejection of persistent disturbances. We extend this adaptive control theory to accommodate problematic modal subsystems of a plant that inhibit the adaptive controller by causing the open-loop plant to be non-minimum phase. We will augment the adaptive controller using a Residual Mode Filter (RMF) to compensate for problematic modal subsystems, thereby allowing the system to satisfy the requirements for the adaptive controller to have guaranteed convergence and bounded gains. We apply these theoretical results to design an adaptive collective pitch controller for a high-fidelity simulation of a utility-scale, variable-speed wind turbine that has minimum phase zeros.

  10. Adaptive multiresolution computations applied to detonations

    Roussel, Olivier

    2015-01-01

A space-time adaptive method is presented for the reactive Euler equations describing chemically reacting gas flow, where a two-species model is used for the chemistry. The governing equations are discretized with a finite volume method and dynamic space adaptivity is introduced using multiresolution analysis. A Strang time-splitting method is applied to be able to consider stiff problems while keeping the method explicit. For time adaptivity an improved Runge-Kutta-Fehlberg scheme is used. Applications deal with detonation problems in one and two space dimensions. A comparison of the adaptive scheme with reference computations on a regular grid allows assessment of the accuracy and the computational efficiency, in terms of CPU time and memory requirements.
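The Strang splitting used here can be illustrated on a toy scalar advection-reaction equation u_t + a u_x = -k u: a half-step of the (stiff) reaction, a full step of transport, then another half-step of reaction. The scalar model and the first-order upwind discretization are assumptions for the sketch, not the paper's reactive Euler solver.

```python
import numpy as np

def strang_step(u, dt, dx, a, k):
    """One Strang-split step for u_t + a u_x = -k u on a periodic grid.

    The linear reaction is integrated exactly (so stiffness in k is
    harmless); advection uses first-order upwind, assuming a > 0.
    """
    u = u * np.exp(-k * dt / 2)                 # reaction half-step
    u = u - a * dt / dx * (u - np.roll(u, 1))   # upwind advection step
    u = u * np.exp(-k * dt / 2)                 # reaction half-step
    return u
```

Because the periodic upwind step conserves the cell sum exactly and the reaction is integrated exactly, the total mass decays as exp(-k t), which gives a simple correctness check for the splitting.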

  11. New Adaptive Optics Technique Demonstrated

    2007-03-01

First ever Multi-Conjugate Adaptive Optics at the VLT Achieves First Light On the evening of 25 March 2007, the Multi-Conjugate Adaptive Optics Demonstrator (MAD) achieved First Light at the Visitor Focus of Melipal, the third Unit Telescope of the Very Large Telescope (VLT). MAD allowed the scientists to obtain images corrected for the blurring effect of atmospheric turbulence over the full 2x2 arcminute field of view. This world premiere shows the promise of a crucial technology for Extremely Large Telescopes. ESO PR Photo 19a/07: The MCAO Demonstrator. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way which delights poets but frustrates astronomers, since it blurs the fine details of the images. However, with Adaptive Optics (AO) techniques, this major drawback can be overcome so that the telescope produces images that are as sharp as theoretically possible, i.e., approaching space conditions. Adaptive Optics systems work by means of a computer-controlled deformable mirror (DM) that counteracts the image distortion induced by atmospheric turbulence. It is based on real-time optical corrections computed from image data obtained by a 'wavefront sensor' (a special camera) at very high speed, many hundreds of times each second. The concept is not new. Already in 1989, the first Adaptive Optics system ever built for Astronomy (aptly named "COME-ON") was installed on the 3.6-m telescope at the ESO La Silla Observatory, as the early fruit of a highly successful continuing collaboration between ESO and French research institutes (ONERA and Observatoire de Paris). Ten years ago, ESO initiated an Adaptive Optics program to serve the needs of its frontline VLT project. Today, the Paranal Observatory is without any doubt one of the most advanced of its kind with respect to AO, with no less than 7 systems currently installed (NACO, SINFONI, CRIRES and

  12. Adaptive Control Applied to Financial Market Data

    Šindelář, Jan; Kárný, Miroslav

Vol. I. Praha : Matfyz press, 2007, s. 1-6. ISBN 978-80-7378-023-4. [Week of Doctoral Students 2007. Praha (CZ), 05.06.2007-08.06.2007] R&D Projects: GA MŠk(CZ) 2C06001 Institutional research plan: CEZ:AV0Z10750506 Keywords : bayesian statistics * finance * financial engineering * stochastic control Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2007/si/sindelar-adaptive control applied to financial market data.pdf

  13. Development of applied optical techniques

This report presents the status of research on the applications of lasers at KAERI. A compact portable laser fluorometer detecting uranium dissolved in aqueous solution was built. The laser-induced fluorescence of uranium was detected with a photomultiplier tube. A delayed gate circuit and an integrating circuit were used to process the electrical signal. A small nitrogen laser was used to excite uranium. The detection limit is about 0.1 ppb. The effect of various acidic solutions was investigated. A standard addition technique was incorporated to improve the measuring accuracy. This instrument can be used for safety inspection of workers in the nuclear fuel cycle facilities. (Author)

  14. Development of applied optical techniques

The objective of this project is to improve laser application techniques in the nuclear industry. A small, light and portable laser-induced fluorometer was developed. It was designed to compensate for inner filter and quenching effects by on-line data processing during analysis of uranium in aqueous solution. A computer interface improves the accuracy and data processing capabilities of the instrument. Its detection limit is as low as 0.1 ppb of uranium. It is ready to use in routine chemical analysis. Feasible applications, such as uranium level monitoring in discards from a reconversion plant or fuel fabrication plant, were considered with minor modification of the instrument. It will be used to study trace analysis of rare-earth elements. The IRMPD of CHF3 was carried out and the effects of buffer gases such as Ar, N2 and SF6 were investigated. The IRMPD rate increased with increasing pressure of the reactant and buffer gases. The pressure effect of the reactant CHF3 below 0.1 Torr showed opposite results. It was considered that the competition between the quenching effect and the rotational hole-filling effect during intermolecular collisions plays a great role in this low pressure region. The applications of holography in nuclear fuel cycle facilities were surveyed and analyzed. Also, experimental apparatuses such as an Ar ion laser, various kinds of holographic films and several optical components were prepared. (Author)

  15. Computational optimization techniques applied to microgrids planning

    Gamarra, Carlos; Guerrero, Josep M.

    2015-01-01

appear along the planning process. In this context, the technical literature about optimization techniques applied to microgrid planning has been reviewed and guidelines for innovative planning methodologies focused on economic feasibility can be defined. Finally, some trending techniques and new…

  16. Adaptive techniques in electrical impedance tomography reconstruction

    We present an adaptive algorithm for solving the inverse problem in electrical impedance tomography. To strike a balance between the accuracy of the reconstructed images and the computational efficiency of the forward and inverse solvers, we propose to combine an adaptive mesh refinement technique with the adaptive Kaczmarz method. The iterative algorithm adaptively generates the optimal current patterns and a locally-refined mesh given the conductivity estimate and solves for the unknown conductivity distribution with the block Kaczmarz update step. Simulation and experimental results with numerical analysis demonstrate the accuracy and the efficiency of the proposed algorithm. (paper)
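The Kaczmarz update at the heart of this reconstruction can be sketched in its basic row-action form; the paper uses a block variant with adaptively generated current patterns and mesh refinement, so this simplified version is only for illustration of the update step on a generic linear system A x = b.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=500, relax=1.0):
    """Row-action Kaczmarz iteration for A x = b (sketch).

    Each inner step projects the current estimate onto the hyperplane
    defined by one measurement row; repeated sweeps converge for
    consistent systems.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            # project x onto the hyperplane a . x = b[i]
            x += relax * (b[i] - a @ x) / (a @ a) * a
    return x
```

In EIT the rows of A come from the linearized forward model, and the adaptive element is choosing which current patterns (rows) to acquire and where to refine the mesh between sweeps.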

  17. A novel online adaptive time delay identification technique

    Bayrak, Alper; Tatlicioglu, Enver

    2016-05-01

Time delay is a phenomenon which is common in signal processing, communication, control applications, etc. What makes time delay attractive to study is that it is a commonly faced problem in many systems. A literature search on time-delay identification highlights the fact that most studies focused on numerical solutions. In this study, a novel online adaptive time-delay identification technique is proposed. This technique is based on an adaptive update law through a minimum-maximum strategy, applied here to time-delay identification for the first time. In the design of the adaptive identification law, Lyapunov-based stability analysis techniques are utilised. Several numerical simulations were conducted with Matlab/Simulink to evaluate the performance of the proposed technique. It is numerically demonstrated that the proposed technique works efficiently in identifying both constant and disturbed time delays, and is also robust to measurement noise.

  18. Applying Machine Learning Techniques to ASP Solving

    Maratea, Marco; Pulina, Luca; Ricca, Francesco

    2012-01-01

    Having in mind the task of improving the solving methods for Answer Set Programming (ASP), there are two usual ways to reach this goal: (i) extending state-of-the-art techniques and ASP solvers, or (ii) designing a new ASP solver from scratch. An alternative to these trends is to build on top of state-of-the-art solvers, and to apply machine learning techniques for choosing automatically the “best” available solver on a per-instance basis. In this paper we pursue this latter direction. ...

  19. Adaptive Response Surface Techniques in Reliability Estimation

    Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard

    1993-01-01

Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first order discontinuities are considered. A gradient-free adaptive response surface algorithm is developed. The algorithm applies second order polynomial surfaces determined from central composite designs. In a two-phase algorithm the second order surface is adjusted to the domain of the most likely failure point and both FORM and SORM estimates are obtained. The algorithm is implemented as a safeguard algorithm so non-converged solutions are avoided. Furthermore, a...

  20. Image reconstruction techniques applied to nuclear mass models

    Morales, Irving O.; Isacker, P. Van; Velazquez, V.; Barea, J.; Mendoza-Temis, J.; Vieyra, J. C. López; Hirsch, J. G.; Frank, A.

    2010-02-01

    A new procedure is presented that combines well-known nuclear models with image reconstruction techniques. A color-coded image is built by taking the differences between measured masses and the predictions given by the different theoretical models. This image is viewed as part of a larger array in the (N,Z) plane, where unknown nuclear masses are hidden, covered by a “mask.” We apply a suitably adapted deconvolution algorithm, used in astronomical observations, to “open the window” and see the rest of the pattern. We show that it is possible to improve significantly mass predictions in regions not too far from measured nuclear masses.

  2. Adaptive Robotic Systems Design in University of Applied Sciences

    Gunsing Jos; Gijselhart Fons; Hagemans Nyke; Jonkers Hans; Kivits Eric; Klijn Peter; Kapteijns Bart; Kroeske Diederich; Langen Hans; Oerlemans Bart; Oostindie Jan; van Stuijvenberg Joost

    2016-01-01

In the industry for highly specialized machine building (small series with high variety and high complexity) and in healthcare, a demand for adaptive robotics is rapidly emerging. Technically skilled people are not always available in sufficient numbers. A lot of know-how with respect to the required technologies is available, but successful adaptive robotic system designs are still rare. In our research at the university of applied sciences we incorporate newly available technologies in our edu...

  3. Applying Mixed Methods Techniques in Strategic Planning

    Voorhees, Richard A.

    2008-01-01

    In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed methods, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…

  4. Ordering operator technique applied to open systems

The normal ordering technique and the coherent representation are used to describe the evolution of an open system of a single oscillator, linearly coupled with an infinite number of reservoir oscillators, and it is shown how to include the dissipation and obtain the exponential decay. (Author)

  5. Parameter Identification and Adaptive Control Applied to the Inverted Pendulum

    Carlos A. Saldarriaga-Cortés

    2012-06-01

Full Text Available This paper presents a methodology to implement adaptive control of the inverted pendulum system, which uses the recursive least squares method for the identification of a dynamic digital model of the plant and then, with its estimated parameters, tunes a pole-placement controller in real time. The plant to be used is an unstable and nonlinear system. This fact, combined with the adaptive controller characteristics, allows the obtained results to be extended to a great variety of systems. The results show that the above methodology was implemented satisfactorily in terms of estimation, stability and control of such a system. It was established that adaptive techniques perform properly even in systems with complex features such as nonlinearity and instability.
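The identification stage of such a self-tuning scheme can be sketched with a standard recursive least squares estimator; the first-order ARX regressor used in the test below is an assumed example, not the paper's pendulum model.

```python
import numpy as np

def rls(phi_seq, y_seq, lam=0.99, delta=100.0):
    """Recursive least squares estimator (sketch).

    phi_seq[t] is the regressor vector at time t, y_seq[t] the measured
    output; lam is the forgetting factor, delta the initial covariance
    scale. Returns the final parameter estimate theta.
    """
    n = phi_seq.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)
    for phi, y in zip(phi_seq, y_seq):
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta += k * (y - phi @ theta)         # parameter update
        P = (P - np.outer(k, phi) @ P) / lam   # covariance update
    return theta
```

In an adaptive controller, each new estimate of theta would feed a pole-placement design step, closing the self-tuning loop.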

  6. Neutron contrast techniques applied to oxide glasses

    Neutron scattering with isotopic substitution, particularly first and second difference methods, are proving to be excellent techniques for studies of the structure of oxide glasses. Several examples are given in which the measurements provide information that is difficult or impossible to obtain otherwise, for example, accurate, detailed distributions of first- to third-neighbours of Ca, Cu or Ni in silicate and phosphate glasses. In favourable cases, it is also possible to measure, directly, Ca-Ca and Ni-Ni first- and second-neighbour distributions. The relevance of complementary techniques, XAFS, differential anomalous x-ray scattering, x-ray scattering from glasses containing elements of high atomic numbers, is also discussed. (author). 6 figs., 11 refs

  7. Modern NDT techniques applied to composite parts

There are many Non-Destructive Testing (NDT) techniques used for qualifying composite parts. Composite materials pose a significant challenge for defect detection, since they are non-homogeneous and anisotropic in nature. Throughout their life cycle composites are susceptible to the formation of many defects such as delamination, matrix cracking, fiber fracture, fiber pullout and impact damage. Various NDT methods are used for qualifying composite parts, like Ultrasonic, Radiographic, and Eddy Current. However, the latest techniques are Infra-Red Thermography, Neutron Radiography, Optical Holography, X-ray Computed Tomography and Acoustic Microscopy. This paper deals with each of these methods and their suitability for different kinds of composites. (author)

  8. Digital Speckle Technique Applied to Flow Visualization

    2000-01-01

    Digital speckle technique uses a laser, a CCD camera, and digital processing to generate interference fringes at the television framing rate. Its most obvious advantage is that neither darkroom facilities nor photographic wet chemical processing is required. In addition, it can be used in harsh engineering environments. This paper discusses the strengths and weaknesses of three digital speckle methodologies. (1) Digital speckle pattern interferometry (DSPI) uses an optical polarization phase shifter for visualization and measurement of the density field in a flow field. (2) Digital shearing speckle interferometry (DSSI) utilizes speckle-shearing interferometry in addition to optical polarization phase shifting. (3) Digital speckle photography (DSP) with computer reconstruction. The discussion describes the concepts, the principles and the experimental arrangements with some experimental results. The investigation shows that these three digital speckle techniques provide an excellent method for visualizing flow fields and for measuring density distributions in fluid mechanics and thermal flows.

  9. Innovative techniques applied to ABWR project engineering

General Electric's (GE) Advanced Boiling Water Reactor (ABWR) project is characterised by the use of new production methods and tools, a document configuration system that was defined from the outset, and wide-ranging, smooth communications. The project also had a large number of participating companies from different cities in the US (San Jose, San Francisco, Kansas City, Washington), Mexico (Veracruz) and Spain (Madrid). One of the basic requirements applicable to advanced nuclear power plant projects is the need for an Information Management System (IMS) which shall be valid for the entire life of the plant, which means that all the documentation must be available in electronic format. The basic engineering tool for the ABWR project is POWRTRAK, a computer application developed by Black and Veatch (B and V). POWRTRAK comprises a single database, in which each datum is stored in only one place and used in real time. It consists of various modules, some of which are associated with technical data and the generation of diagrams (CASES, an application used to generate piping and instrumentation, logic and electric wiring diagrams), a three-dimensional electronic mock-up, planning, purchasing management, etc. GE adapted the Odesta Document Management System (ODMS) commercial application to its documentation filing/control needs. In this system all the documentation produced in the project is filed in both native and universal formats (PDF). (Author)

  10. Basic principles of applied nuclear techniques

    The technological applications of radioactive isotopes and radiation in South Africa have grown steadily since the first consignment of man-made radioisotopes reached this country in 1948. By the end of 1975 there were 412 authorised non-medical organisations (327 industries) using hundreds of sealed sources as well as their fair share of the thousands of radioisotope consignments, annually either imported or produced locally (mainly for medical purposes). Consequently, it is necessary for South African technologists to understand the principles of radioactivity in order to appreciate the industrial applications of nuclear techniques

  11. Applying Cooperative Techniques in Teaching Problem Solving

    Krisztina Barczi

    2013-12-01

    Full Text Available Teaching how to solve problems – from solving simple equations to solving difficult competition tasks – has been one of the greatest challenges for mathematics education for many years. Trying to find an effective method is an important educational task. Among others, the question arises as to whether a method in which students help each other might be useful. The present article describes part of an experiment that was designed to determine the effects of cooperative teaching techniques on the development of problem-solving skills.

  12. Applying Adapted Big Five Teamwork Theory to Agile Software Development

    Strode, Diane

    2016-01-01

Teamwork is a central tenet of agile software development and various teamwork theories partially explain teamwork in that context. Big Five teamwork theory is one of the most influential teamwork theories, but prior research shows that the team leadership concept in this theory is not applicable to agile software development. This paper applies an adapted form of Big Five teamwork theory to cases of agile software development. Three independent cases were drawn from a single organisation....

  13. Nuclear analytical techniques applied to forensic chemistry

Gun shot residues produced by firing guns are mainly composed of visible particles. The individual characterization of these particles allows distinguishing those containing heavy metals, from gun shot residues, from those having a different origin or history. In this work, the results obtained from the study of gun shot residue particles collected from hands are presented. The aim of the analysis is to establish whether a person has shot a firearm or has been in contact with one after the shot was produced. As reference samples, particles collected from the hands of persons engaged in different activities were studied to make comparisons. The complete study was based on the application of nuclear analytical techniques such as Scanning Electron Microscopy, Energy Dispersive X-Ray Electron Probe Microanalysis and Graphite Furnace Atomic Absorption Spectrometry. The assays can be completed within a time compatible with forensic requirements. (author)

  14. Fast digitizing techniques applied to scintillation detectors

A 200 MHz 12-bit fast transient recorder card has been used for the digitization of pulses from photomultipliers coupled to organic scintillation detectors. Two modes of operation have been developed at ENEA-Frascati: a) continuous acquisition up to a maximum duration of ∼ 1.3 s corresponding to the full on-board memory (256 MSamples) of the card: in this mode, all scintillation events are recorded; b) non-continuous acquisition in which digitization is triggered by those scintillation events whose amplitude is above a threshold value: the digitizing interval after each trigger can be set according to the typical decay time of the scintillation events; longer acquisition durations (>1.3 s) can be reached, although with dead time (needed for data storage) which depends on the incoming event rate. Several important features are provided by this novel digital approach: high count rate operation, pulse shape analysis, post-experiment data re-processing, pile-up identification and treatment. In particular, NE213 scintillators have been successfully used with this system for measurements in mixed neutron (n) and gamma (γ) radiation fields from fusion plasmas: separation between γ and neutron events is made by means of dedicated software comparing the pulse charge integrated in two different time intervals, and simultaneous neutron and γ pulse height spectra can be recorded at total count rates in the MHz range. It has been demonstrated that, for scintillation detection applications, 12-bit fast transient recorder cards offer improved performance with respect to analogue hardware; other radiation detectors where pulse identification or high count rate is required might also benefit from such digitizing techniques.

  15. Adaptive resource allocation architecture applied to line tracking

    Owen, Mark W.; Pace, Donald W.

    2000-04-01

    Recent research has demonstrated the benefits of a multiple hypothesis, multiple model sonar line tracking solution, achieved at significant computational cost. We have developed an adaptive architecture that trades computational resources for algorithm complexity based on environmental conditions. A Fuzzy Logic Rule-Based approach is applied to adaptively assign algorithmic resources to meet system requirements. The resources allocated by the Fuzzy Logic algorithm include (1) the number of hypotheses permitted (yielding multi-hypothesis and single-hypothesis modes), (2) the number of signal models to use (yielding an interacting multiple model capability), (3) a new track likelihood for hypothesis generation, (4) track attribute evaluator activation (for signal to noise ratio, frequency bandwidth, and others), and (5) adaptive cluster threshold control. Algorithm allocation is driven by a comparison of current throughput rates to a desired real time rate. The Fuzzy Logic Controlled (FLC) line tracker, a single hypothesis line tracker, and a multiple hypothesis line tracker are compared on real sonar data. System resource usage results demonstrate the utility of the FLC line tracker.

  16. ADAPTIVE LIFTING BASED IMAGE COMPRESSION SCHEME WITH PARTICLE SWARM OPTIMIZATION TECHNIQUE

    Nishat kanvel; Dr. S. Letitia; Dr. Elwin Chandra Monie

    2010-01-01

    This paper presents an adaptive lifting scheme with a Particle Swarm Optimization technique for image compression. The Particle Swarm Optimization technique is used to improve the accuracy of the prediction function used in the lifting scheme. The scheme is applied to image compression, and parameters such as PSNR, compression ratio and the visual quality of the image are calculated. The proposed scheme is compared with the existing methods.
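
    The predict/update structure of a lifting scheme can be illustrated with the fixed-weight LeGall 5/3 (2,2) transform below; the paper's contribution is to replace such fixed prediction weights with PSO-optimized ones, which is not reproduced here.

```python
import numpy as np

def lifting_forward(x):
    """One level of the LeGall 5/3 (2,2) lifting transform with periodic
    extension: predict the odd samples from the evens, then update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - 0.5 * (even + np.roll(even, -1))        # predict
    approx = even + 0.25 * (detail + np.roll(detail, 1))   # update
    return approx, detail

def lifting_inverse(approx, detail):
    """Exact inverse: undo the update step, then the prediction step."""
    even = approx - 0.25 * (detail + np.roll(detail, 1))
    odd = detail + 0.5 * (even + np.roll(even, -1))
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([4, 6, 10, 12, 14, 12, 8, 6])
approx, detail = lifting_forward(x)
restored = lifting_inverse(approx, detail)
```

    Because each lifting step is inverted by simply reversing it, any choice of prediction weights (including PSO-tuned ones) still yields perfect reconstruction.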

  17. Adaptive Robotic Systems Design in University of Applied Sciences

    Gunsing Jos

    2016-01-01

    Full Text Available In the industry for highly specialized machine building (small series with high variety and high complexity) and in healthcare, a demand for adaptive robotics is rapidly emerging. Technically skilled people are not always available in sufficient numbers. A lot of know-how with respect to the required technologies is available, but successful adaptive robotic system designs are still rare. In our research at the university of applied sciences we incorporate newly available technologies in our education courses by way of research projects; in these projects, students investigate the application possibilities of new technologies together with companies and teachers. Thus we are able to transfer knowledge to the students, including an innovation-oriented attitude and skills. In recent years we have developed several industrial bin-picking applications for logistics and machining factories with different types of 3D vision. Force-feedback gripping has also been developed, including slip sensing. Especially for healthcare robotics we developed a so-called twisted-wire actuator, which is very compact, in combination with an underactuated gripper manufactured in one piece in polyurethane. We work both on modeling and testing the functions of these designs, but we also work on complete demonstrator systems. Since the number of disciplines involved in complex product and machine design increases rapidly, we pay a lot of attention to systems engineering methods. Apart from the classical engineering disciplines like mechanical, electrical, software and mechatronics engineering, especially for adaptive robotics more and more disciplines like industrial product design, communication … multimedia design and of course physics and even art are to be involved, depending on the specific application to be designed.
Design tools like the V-model, agile/scrum and design approaches to obtain the best set of requirements are being implemented in the engineering studies from

  18. Adapted G-mode Clustering Method applied to Asteroid Taxonomy

    Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.

    2013-11-01

    The original G-mode was a clustering method developed by A. I. Gavrishin in the late 60's for geochemical classification of rocks, but it was also applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify the asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77, now rewritten in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: Scipy.spatial.distance.mahalanobis was chosen as the distance estimator and Numpy.histogramdd was applied to find the initial seeds from which clusters are going to evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
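
    The two library calls named above can be exercised on synthetic data; the sample here is simulated, not SDSS photometry.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Simulated 2-D "colors" for a single cluster of objects.
rng = np.random.default_rng(0)
sample = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 0.5]],
                                 size=500)

# Mahalanobis distance of each point from the cluster centre, the distance
# estimator used for testing class membership.
centre = sample.mean(axis=0)
vi = np.linalg.inv(np.cov(sample.T))
dists = np.array([mahalanobis(p, centre, vi) for p in sample])

# Initial seeds: locate the densest bin of a multidimensional histogram,
# from which a cluster is then grown.
hist, edges = np.histogramdd(sample, bins=10)
seed_bin = np.unravel_index(hist.argmax(), hist.shape)
```
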

  19. Comparison of optimization techniques applied to nuclear fuel reload design

    In this work, a comparison of three optimization techniques applied to nuclear fuel reload design for boiling water reactors is presented. Specifically, the techniques were applied to the design of a fuel reload for an 18-month equilibrium cycle of the Laguna Verde nuclear power plant. The techniques used were Genetic Algorithms, Tabu Search and Neural Networks, and the conditions under which the different techniques were applied were the same. A comparison of result quality and of the computational resources required to obtain the results indicates that Tabu Search achieves the best results, but at a very high computational cost. The neural network, on the other hand, obtains acceptable results at low computational cost. In addition to this comparison, a summary of the work carried out on fuel reload optimization from the 1960s to the present is given. (Author)
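
    The skeleton of a tabu search is short; the toy permutation objective below merely stands in for the core-physics evaluation of a loading pattern, which in the real application would come from a reactor simulator.

```python
import random

def tabu_search(cost, start, neighbours, iters=200, tabu_len=10):
    """Greedy neighbourhood search with a short-term tabu memory that
    forbids revisiting recently seen solutions."""
    current = best = start
    tabu = [start]
    for _ in range(iters):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # best non-tabu move
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)                       # oldest entry leaves the list
        if cost(current) < cost(best):
            best = current
    return best

# Toy objective over assembly orderings; neighbours are adjacent swaps.
def cost(p):
    return sum((v - i) ** 2 for i, v in enumerate(p))

def neighbours(p):
    swaps = []
    for i in range(len(p) - 1):
        q = list(p)
        q[i], q[i + 1] = q[i + 1], q[i]
        swaps.append(tuple(q))
    return swaps

random.seed(1)
start = tuple(random.sample(range(6), 6))
best = tabu_search(cost, start, neighbours)
```

    The tabu list is what lets the search escape local minima: a worsening move is accepted when every improving neighbour is forbidden.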

  20. ARTIFICIAL INTELLIGENCE PLANNING TECHNIQUES FOR ADAPTIVE VIRTUAL COURSE CONSTRUCTION

    NÉSTOR DARÍO DUQUE; DEMETRIO ARTURO OVALLE

    2011-01-01

    This paper aims at presenting a planning model for adapting the behavior of virtual courses based on artificial intelligence techniques, in particular using not only a multi-agent system approach, but also artificial intelligence planning methods. The design and implementation of the system by means of a pedagogical multi-agent approach and the definition of a framework to specify the adaptation strategy allow us to incorporate several pedagogical and technological approaches that are in acco...

  1. Radar Range Sidelobe Reduction Using Adaptive Pulse Compression Technique

    Li, Lihua; Coon, Michael; McLinden, Matthew

    2013-01-01

    Pulse compression has been widely used in radars so that low-power, long RF pulses can be transmitted, rather than a high-power short pulse. Pulse compression radars offer a number of advantages over high-power short pulsed radars, such as no need for high-power RF circuitry or high-voltage electronics, compact size and light weight, better range resolution, and better reliability. However, the range sidelobes associated with pulse compression have prevented the use of this technique on spaceborne radars, since surface returns detected by range sidelobes may mask the returns from nearby weak cloud or precipitation particles. Research on adaptive pulse compression was carried out utilizing a field-programmable gate array (FPGA) waveform generation board and a radar transceiver simulator. The results have shown significant improvements in pulse compression sidelobe performance. Microwave and millimeter-wave radars present many technological challenges for Earth and planetary science applications. The traditional tube-based radars use high-voltage power supplies/modulators and high-power RF transmitters; therefore, these radars usually have large size, heavy weight, and reliability issues for space and airborne platforms. Pulse compression technology has provided a path toward meeting many of these radar challenges. Recent advances in digital waveform generation, digital receivers, and solid-state power amplifiers have opened a new era for applying pulse compression to the development of compact and high-performance airborne and spaceborne remote sensing radars. The primary objective of this innovative effort is to develop and test a new pulse compression technique to achieve ultra-low range sidelobes so that this technique can be applied to spaceborne, airborne, and ground-based remote sensing radars to meet future science requirements. By using digital waveform generation, digital receiver, and solid-state power amplifier technologies, this improved pulse compression
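
    The core of pulse compression, correlating the received echo with the transmitted waveform, can be sketched with a linear-FM chirp; the parameters are illustrative, not those of the FPGA testbed.

```python
import numpy as np

# Transmit a long linear-FM (chirp) pulse, then matched-filter the echo:
# the correlation compresses the long pulse into a narrow peak at the
# target delay.
fs, T, B = 1000.0, 1.0, 100.0                  # sample rate, duration, bandwidth
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)  # LFM waveform

delay = 300                                    # target range in samples
echo = np.zeros(2000, dtype=complex)
echo[delay:delay + chirp.size] = chirp

# np.correlate conjugates its second argument, i.e. acts as a matched filter.
compressed = np.abs(np.correlate(echo, chirp, mode="valid"))
peak = int(compressed.argmax())
```

    The sidelobes flanking this peak are exactly what the adaptive technique described above is designed to suppress, e.g. by replacing the fixed matched filter with a waveform-dependent mismatched filter.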

  2. An adaptive envelope spectrum technique for bearing fault detection

    In this work, an adaptive envelope spectrum (AES) technique is proposed for bearing fault detection, especially for analyzing signals with transient events. The proposed AES technique first decomposes the signal using empirical mode decomposition to formulate the representative intrinsic mode functions (IMFs), and then a novel IMF reconstruction method is proposed based on a correlation analysis of the envelope spectra. The reconstructed signal is post-processed using an adaptive filter to enhance impulsive signatures, where the filter length is optimized by the proposed sparsity analysis technique. Bearing health conditions are diagnosed by examining bearing characteristic frequency information on the envelope power spectrum. The effectiveness of the proposed fault detection technique is verified by a series of experimental tests corresponding to different bearing conditions. (paper)
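
    The final step, reading the fault frequency off the envelope power spectrum, can be sketched on a synthetic signal. The fault and resonance frequencies below are made up, and the paper's EMD-based reconstruction and adaptive filtering stages are not reproduced.

```python
import numpy as np
from scipy.signal import hilbert

# A bearing fault produces periodic impacts that amplitude-modulate a
# high-frequency structural resonance; the spectrum of the Hilbert
# envelope exposes the modulation (fault) frequency.
fs = 10000
t = np.arange(0, 1.0, 1 / fs)
fault_hz, resonance_hz = 37.0, 2000.0
signal = (1.0 + 0.5 * np.cos(2 * np.pi * fault_hz * t)) * \
         np.cos(2 * np.pi * resonance_hz * t)

envelope = np.abs(hilbert(signal))             # analytic-signal magnitude
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
fault_peak_hz = freqs[spectrum.argmax()]       # dominant envelope line
```

    Diagnosis then amounts to comparing `fault_peak_hz` against the bearing characteristic frequencies (BPFO, BPFI, etc.) computed from the geometry.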

  3. OFFLINE HANDWRITTEN SIGNATURE IDENTIFICATION USING ADAPTIVE WINDOW POSITIONING TECHNIQUES

    Ghazali Sulong

    2014-10-01

    Full Text Available To address this challenge, we propose the use of an Adaptive Window Positioning technique which focuses not only on the meaning of the handwritten signature but also on the individuality of the writer. This innovative technique divides the handwritten signature into 13 small windows of size nxn (13x13). This size should be large enough to contain ample information about the style of the author and small enough to ensure a good identification performance. The process was tested with a GPDS dataset containing 4870 signature samples from 90 different writers, by comparing the robust features of the test signature with those of the user's signature using an appropriate classifier. Experimental results reveal that the adaptive window positioning technique is an efficient and reliable method for accurate signature feature extraction for the identification of offline handwritten signatures. The contribution of this technique can be used to detect signatures signed under emotional duress.

  4. Adapted Cuing Technique for Use in Treatment of Dyspraxia.

    Klick, Susan L.

    1985-01-01

    The Adapted Cuing Technique (ACT) was created to accompany oral stimulus presentation in treatment of dyspraxia. ACT is consistent with current treatment theory, emphasizing patterns of articulatory movement, manner of production, and multimodality facilitation. A case study describes the use of ACT in the treatment of a five-year-old child.…

  5. Adaptive Landmark-Based Navigation System Using Learning Techniques

    Zeidan, Bassel; Dasgupta, Sakyasingha; Wörgötter, Florentin;

    2014-01-01

    . Inspired by this, we develop an adaptive landmark-based navigation system based on sequential reinforcement learning. In addition, correlation-based learning is also integrated into the system to improve learning performance. The proposed system has been applied to simulated simple wheeled and more complex...

  6. Parameter Identification and Adaptive Control Applied to the Inverted Pendulum

    Carlos A. Saldarriaga-Cortés; Víctor D. Correa-Ramírez; Didier Giraldo-Buitrago

    2012-01-01

    This paper presents a methodology to implement adaptive control of the inverted pendulum system, which uses the recursive least squares method to identify a dynamic digital model of the plant and then, with its estimated parameters, tunes a pole-placement controller in real time. The plant to be used is an unstable and nonlinear system. This fact, combined with the adaptive controller characteristics, allows the obtained results to be extended to a great variety of systems. The ...
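
    The identification stage can be sketched as a recursive least squares update; the second-order ARX plant and its coefficients below are illustrative, not the pendulum model, and the pole-placement step is omitted.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares update with forgetting factor lam."""
    k = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + k * (y - phi @ theta)      # correct by prediction error
    P = (P - np.outer(k, phi @ P)) / lam       # covariance update
    return theta, P

# Hypothetical stable second-order ARX plant (noise-free for clarity):
# y[n] = 1.5 y[n-1] - 0.7 y[n-2] + 0.5 u[n-1]
true = np.array([1.5, -0.7, 0.5])
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for n in range(2, 500):
    y[n] = true @ np.array([y[n - 1], y[n - 2], u[n - 1]])

theta, P = np.zeros(3), 1000.0 * np.eye(3)
for n in range(2, 500):
    phi = np.array([y[n - 1], y[n - 2], u[n - 1]])
    theta, P = rls_step(theta, P, phi, y[n])
```

    At every sample the pole-placement gains would be recomputed from the current `theta`, which is what makes the controller adaptive.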

  7. Object oriented programming techniques applied to device access and control

    In this paper a model, called the device server model, has been presented for solving the problem of device access and control faced by all control systems. Object Oriented Programming techniques were used to achieve a powerful yet flexible solution. The model provides a solution to the problem which hides device dependencies. It defines a software framework which has to be respected by implementors of device classes - this is very useful for developing groupware. The decision to implement remote access in the root class means that device servers can be easily integrated in a distributed control system. A lot of the advantages and features of the device server model are due to the adoption of OOP techniques. The main conclusions that can be drawn from this paper are that (1) the device access and control problem is well suited to being solved with OOP techniques, and (2) OOP techniques offer a distinct advantage over traditional programming techniques for solving the device access problem. (J.P.N.)

  8. Adaptive feedback linearization applied to steering of ships

    Thor I. Fossen

    1993-10-01

    Full Text Available This paper describes the application of feedback linearization to automatic steering of ships. The flexibility of the design procedure allows the autopilot to be optimized for both course-keeping and course-changing manoeuvres. Direct adaptive versions of both the course-keeping and turning controller are derived. The advantages of the adaptive controllers are improved performance and reduced fuel consumption. The application of nonlinear control theory also allows the designer to compensate for nonlinearities in the control design in a systematic manner.

  9. Sensor Data Qualification Technique Applied to Gas Turbine Engines

    Csank, Jeffrey T.; Simon, Donald L.

    2013-01-01

    This paper applies a previously developed sensor data qualification technique to a commercial aircraft engine simulation known as the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k). The sensor data qualification technique is designed to detect, isolate, and accommodate faulty sensor measurements. It features sensor networks, which group various sensors together and relies on an empirically derived analytical model to relate the sensor measurements. Relationships between all member sensors of the network are analyzed to detect and isolate any faulty sensor within the network.

  10. Technique applied in electrical power distribution for Satellite Launch Vehicle

    João Maurício Rosário

    2010-09-01

    Full Text Available The Satellite Launch Vehicle electrical network, which is currently being developed in Brazil, is sub-divided for analysis into the following parts: Service Electrical Network, Controlling Electrical Network, Safety Electrical Network and Telemetry Electrical Network. During the pre-launching and launching phases, these electrical networks are associated electrically and mechanically to the structure of the vehicle. In order to succeed in the integration of these electrical networks, it is necessary to employ techniques of electrical power distribution which are appropriate to Launch Vehicle systems. This work presents the most important techniques to be considered in the characterization of the electrical power supply applied to Launch Vehicle systems. Such techniques are primarily designed to allow the electrical networks, when subjected to a single-phase fault to ground, to maintain the power supply to the loads.

  11. Strategies in edge plasma simulation using adaptive dynamic nodalization techniques

    A wide span of steady-state and transient edge plasma simulation problems require accurate discretization techniques and can then be treated with Finite Element (FE) and Finite Volume (FV) methods. The software used here to meet these meshing requirements is a 2D finite element grid generator, which can produce adaptive unstructured grids that take the flux surface characteristics into consideration. To comply with the common mesh handling features of FE/FV packages, some options have been added to the basic generation tool. These enhancements include quadrilateral meshes without non-regular transition elements, obtained by substituting them with transition constructions consisting of regular quadrilateral elements. Furthermore, triangular grids can be created with one edge parallel to the magnetic field and modified by the basic adaptation/realignment techniques. Enhanced code operation properties and processing capabilities are expected. (author)

  12. An Adaptive Hybrid Multiprocessor technique for bioinformatics sequence alignment

    Bonny, Talal

    2012-07-28

    Sequence alignment algorithms such as the Smith-Waterman algorithm are among the most important applications in the development of bioinformatics. Sequence alignment algorithms must process large amounts of data, which may take a long time. Here, we introduce our Adaptive Hybrid Multiprocessor technique to accelerate the implementation of the Smith-Waterman algorithm. Our technique utilizes both the graphics processing unit (GPU) and the central processing unit (CPU). It adapts the implementation according to the number of CPUs given as input by efficiently distributing the workload between the processing units. Using existing resources (GPU and CPU) in an efficient way is a novel approach. The peak performance achieved for the platforms GPU + CPU, GPU + 2CPUs, and GPU + 3CPUs is 10.4 GCUPS, 13.7 GCUPS, and 18.6 GCUPS, respectively (for a query length of 511 amino acids). © 2010 IEEE.
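
    The serial kernel being accelerated follows directly from the Smith-Waterman recurrence; the +2/-1/-1 scoring values below are a common textbook choice, not necessarily those used in the paper.

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Plain O(len(a)*len(b)) Smith-Waterman local alignment score; this
    is the kernel a hybrid scheme distributes over GPU and CPUs."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            # Local alignment: scores are floored at zero.
            H[i, j] = max(0, H[i - 1, j - 1] + s,
                          H[i - 1, j] + gap, H[i, j - 1] + gap)
    return int(H.max())

score = smith_waterman("ACACACTA", "AGCACACA")
```

    Because each database sequence is scored independently, the work splits naturally into per-sequence batches that can be handed to the GPU and to each CPU in proportion to their throughput.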

  13. Adaptive spectral identification techniques in presence of undetected non linearities

    Cella, G; Guidi, G M

    2002-01-01

    The standard procedure for detection of gravitational wave signals from coalescing binaries is based on Wiener filtering with an appropriate bank of template filters. This is the optimal procedure under the hypothesis of additive Gaussian and stationary noise. We study the possibility of improving the detection efficiency with a class of adaptive spectral identification techniques, analyzing their effect in the presence of non-stationarities and undetected non-linearities in the noise.

  14. Load Cell Response Correction Using Analog Adaptive Techniques

    Jafaripanah, Mehdi; Al-Hashimi, Bashir; White, Neil M.

    2003-01-01

    Load cell response correction can be used to speed up the process of measurement. This paper investigates the application of analog adaptive techniques in load cell response correction. The load cell is a sensor with an oscillatory output in which the measurand contributes to response parameters. Thus, a compensation filter needs to track variation in measurand whereas a simple, fixed filter is only valid at one load value. To facilitate this investigation, computer models for the load cell a...

  15. Highly charged ion beam applied to lithography technique.

    Momota, Sadao; Nojiri, Yoichi; Taniguchi, Jun; Miyamoto, Iwao; Morita, Noboru; Kawasegi, Noritaka

    2008-02-01

    In various fields of nanotechnology, the importance of nanoscale three-dimensional (3D) structures is increasing. In order to develop an efficient process to fabricate nanoscale 3D structures, we have applied highly charged ion (HCI) beams to the ion-beam lithography (IBL) technique. Ar-ion beams with various charge states (1+ to 9+) were applied to fabricate spin on glass (SOG) and Si by means of the IBL technique. The Ar ions were prepared by a facility built at Kochi University of Technology, which includes an electron cyclotron resonance ion source (NANOGAN, 10 GHz). IBL fabrication was performed as a function of not only the charge state but also the energy and the dose of Ar ions. The present results show that the application of an Ar(9+) beam reduces the etching time for SOG and enhances the etching depth compared with those observed with Ar ions in lower charged states. Considering the high-energy deposition of HCI at a surface, the former phenomena can be understood consistently. Also, the latter phenomena can be understood based on anomalously deep structural changes, which are remarkable for glasses. Furthermore, it has also been shown that the etching depth can be easily controlled with the kinetic energy of the Ar ions. These results show the possibilities of the IBL technique with HCI beams in the field of nanoscale 3D fabrication. PMID:18315242

  16. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  17. Development and verification of unstructured adaptive mesh technique with edge compatibility

    In the design study of the large-sized sodium-cooled fast reactor (JSFR), one key issue is suppression of gas entrainment (GE) phenomena at a gas-liquid interface. Therefore, the authors have developed a high-precision CFD algorithm to evaluate the GE phenomena accurately. The CFD algorithm has been developed on unstructured meshes to establish accurate modeling of the JSFR system. For two-phase interfacial flow simulations, a high-precision volume-of-fluid algorithm is employed. It was confirmed that the developed CFD algorithm could reproduce the GE phenomena in a simple GE experiment. Recently, the authors have developed an important technique for the simulation of the GE phenomena in JSFR: an unstructured adaptive mesh technique which can dynamically apply fine cells to the region where the GE occurs. In this paper, as a part of the development, a two-dimensional unstructured adaptive mesh technique is discussed. In the two-dimensional adaptive mesh technique, each cell is refined isotropically to reduce distortions of the mesh. In addition, connection cells are formed to eliminate the edge incompatibility between refined and non-refined cells. The two-dimensional unstructured adaptive mesh technique is verified by solving the well-known lid-driven cavity flow problem. As a result, the two-dimensional unstructured adaptive mesh technique succeeds in providing a high-precision solution, even though a poor-quality distorted initial mesh is employed. In addition, the simulation error on the two-dimensional unstructured adaptive mesh is much less than the error on a structured mesh with a larger number of cells. (author)

  18. Inexpensive rf modeling and analysis techniques as applied to cyclotrons

    A review and expansion of the circuit analogy method of modeling and analysing multiconductor TEM mode rf resonators is described. This method was used to predict the performance of the NSCL K500 and K1200 cyclotron resonators and the results compared well to the measured performance. The method is currently being applied as the initial stage of the design process to optimize the performance of the rf resonators for a proposed K250 cyclotron for medical applications. Although this technique requires an experienced rf modeller, the input files tend to be simple and small, the software is very inexpensive or free, and the computer runtimes are nearly instantaneous

  19. Three-dimensional integrated CAE system applying computer graphic technique

    A three-dimensional CAE system for nuclear power plant design is presented. This system utilizes high-speed computer graphic techniques for the plant design review, and an integrated engineering database for handling the large amount of nuclear power plant engineering data in a unified data format. Applying this system makes it possible to construct a nuclear power plant using only computer data from the basic design phase to the manufacturing phase, and it increases the productivity and reliability of the nuclear power plants. (author)

  20. An adaptive technique for a redundant-sensor navigation system.

    Chien, T.-T.

    1972-01-01

    An on-line adaptive technique is developed to provide a self-contained redundant-sensor navigation system with a capability to utilize its full potentiality in reliability and performance. This adaptive system is structured as a multistage stochastic process of detection, identification, and compensation. It is shown that the detection system can be effectively constructed on the basis of a design value, specified by mission requirements, of the unknown parameter in the actual system, and of a degradation mode in the form of a constant bias jump. A suboptimal detection system on the basis of Wald's sequential analysis is developed using the concept of information value and information feedback. The developed system is easily implemented, and demonstrates a performance remarkably close to that of the optimal nonlinear detection system. An invariant transformation is derived to eliminate the effect of nuisance parameters such that the ambiguous identification system can be reduced to a set of disjoint simple hypotheses tests. By application of a technique of decoupled bias estimation in the compensation system the adaptive system can be operated without any complicated reorganization.
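
    The detection stage built on Wald's sequential analysis can be sketched as a standard sequential probability ratio test (SPRT) for a constant bias jump in a Gaussian residual; the thresholds, bias size and noise level here are illustrative, not the paper's design values.

```python
import numpy as np

def sprt(residuals, bias, sigma, alpha=0.001, beta=0.001):
    """Wald SPRT: H0 (residual mean 0) vs H1 (residual mean `bias`).
    Thresholds follow from the chosen false-alarm and miss rates."""
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    llr = 0.0
    for n, r in enumerate(residuals):
        # Gaussian log-likelihood ratio increment for this sample.
        llr += (bias * r - 0.5 * bias ** 2) / sigma ** 2
        if llr >= upper:
            return "bias detected", n
        if llr <= lower:
            return "no bias", n
    return "undecided", len(residuals) - 1

rng = np.random.default_rng(2)
clean = rng.normal(0.0, 1.0, 200)    # healthy sensor residual
faulty = rng.normal(1.0, 1.0, 200)   # residual after a bias jump
```

    The sequential structure is what keeps the average decision delay short: the test stops as soon as the accumulated evidence crosses either threshold.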

  1. Self-Adaptive Stepsize Search Applied to Optimal Structural Design

    Nolle, L.; Bland, J. A.

    Structural engineering often involves the design of space frames that are required to resist predefined external forces without exhibiting plastic deformation. The weight of the structure, and hence the weight of its constituent members, has to be as low as possible for economical reasons without violating any of the load constraints. Design spaces are usually vast and the computational costs for analyzing a single design are usually high. Therefore, not every possible design can be evaluated for real-world problems. In this work, a standard structural design problem, the 25-bar problem, has been solved using self-adaptive stepsize search (SASS), a relatively new search heuristic. This algorithm has only one control parameter and therefore overcomes a drawback of modern search heuristics, i.e. the need to first find a set of optimum control parameter settings for the problem at hand. In this work, SASS outperforms simulated annealing, genetic algorithms, tabu search and ant colony optimization.
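
    The single-control-parameter idea can be sketched as a (1+1) hill climber whose step size adapts on success and failure; the adaptation constants and test function below are illustrative, not Nolle and Bland's actual algorithm or the 25-bar truss objective.

```python
import random

def sass_minimise(f, x0, iters=2000, min_step=1e-12):
    """(1+1) stochastic hill climber whose only control parameter, the
    step size, grows after a success and shrinks after a failure."""
    x, fx, step = list(x0), f(x0), 1.0
    for _ in range(iters):
        cand = [xi + random.gauss(0.0, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx, step = cand, fc, step * 2.0     # success: widen search
        else:
            step = max(step * 0.85, min_step)      # failure: narrow it
    return x, fx

random.seed(0)
sphere = lambda v: sum(t * t for t in v)   # stand-in for the truss weight
best_x, best_f = sass_minimise(sphere, [5.0, -3.0])
```

    Because the step size self-regulates, there is nothing to tune before tackling a new problem, which is the drawback of other heuristics the abstract refers to.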

  2. Experiments on Adaptive Techniques for Host-Based Intrusion Detection

    This research explores four experiments of adaptive host-based intrusion detection (ID) techniques in an attempt to develop systems that can detect novel exploits. The technique considered to have the most potential is adaptive critic designs (ACDs) because of their utilization of reinforcement learning, which allows learning exploits that are difficult to pinpoint in sensor data. Preliminary results of ID using an ACD, an Elman recurrent neural network, and a statistical anomaly detection technique demonstrate an ability to learn to distinguish between clean and exploit data. We used the Solaris Basic Security Module (BSM) as a data source and performed considerable preprocessing on the raw data. A detection approach called generalized signature-based ID is recommended as a middle ground between signature-based ID, which has an inability to detect novel exploits, and anomaly detection, which detects too many events including events that are not exploits. The primary results of the ID experiments demonstrate the use of custom data for generalized signature-based intrusion detection and the ability of neural network-based systems to learn in this application environment

  3. Statistical mapping techniques applied to radio-immunoscintigraphy

    Full text: Serial image analysis techniques such as kinetic analysis with probability mapping have been successfully applied to radio-immunoscintigraphic localization of occult tumours. Lesion detection by these statistical methods is predicated on a decrease in vascular activity over time in comparison with incremental increase in tumour uptake of radiolabelled antibody on serial images. We have refined the kinetic analysis technique by introduction of weighted error determination and correlation with regional masking for application to serial SPET images as well as planar studies. Six patients undergoing radioimmunoscintigraphy for localization of radiographically occult recurrence or metastases of colon cancer were imaged within 30 min and at 3, 6 and 24 h following intravenous administration of 99Tcm-anti-CEA antibody (CEA Scan Immunomedics). Statistical mapping comprising setting of correlation parameters, subtraction of correlated images and visualization and analysis of statistical maps was performed. We found that changing weights in least square correlation improved delineation of target from background activity. The introduction of regional masking to compensate for the changing pattern of activity in the kidneys and bladder also facilitated correlation of serial images. These statistical mapping techniques were applied to SPET images with CT co-registration for accurate anatomical localization of lesions. The probability of CEA-secreting tumour recurrence or metastasis was expressed as two levels of confidence set arbitrarily as 0.05 (1.96 S.D.) and 0.001 (3.291 S.D.) in respect of CT co-registered SPET 3 and 6 h images of thorax, abdomen and pelvis

  4. Hyperspectral-imaging-based techniques applied to wheat kernels characterization

    Serranti, Silvia; Cesare, Daniela; Bonifazi, Giuseppe

    2012-05-01

    Single kernels of durum wheat have been analyzed by hyperspectral imaging (HSI). Such an approach is based on the utilization of an integrated hardware and software architecture able to digitally capture and handle spectra as an image sequence, as they result along a pre-defined alignment on a properly energized sample surface. The study was addressed to investigate the possibility of applying HSI techniques for classification of different types of wheat kernels: vitreous, yellow berry and fusarium-damaged. Reflectance spectra of selected wheat kernels of the three typologies were acquired by a laboratory device equipped with an HSI system working in the near infrared field (1000-1700 nm). The hypercubes were analyzed applying principal component analysis (PCA) to reduce the high dimensionality of the data and to select some effective wavelengths. Partial least squares discriminant analysis (PLS-DA) was applied for classification of the three wheat typologies. The study demonstrated that good classification results were obtained not only considering the entire investigated wavelength range, but also selecting only four optimal wavelengths (1104, 1384, 1454 and 1650 nm) out of 121. The developed procedures based on HSI can be utilized for quality control purposes or for the definition of innovative sorting logics for wheat.

  5. Automation of assertion testing - Grid and adaptive techniques

    Andrews, D. M.

    1985-01-01

    Assertions can be used to automate the process of testing software. Two methods for automating the generation of input test data are described in this paper. One method selects the input values of variables at regular intervals in a 'grid'. The other, adaptive testing, uses assertion violations as a measure of errors detected and generates new test cases based on test results. The important features of assertion testing are that: it can be used throughout the entire testing cycle; it provides automatic notification of error conditions; and it can be used with automatic input generation techniques which eliminate the subjectivity in choosing test data.

  6. Active Learning Techniques Applied to an Interdisciplinary Mineral Resources Course.

    Aird, H. M.

    2015-12-01

    An interdisciplinary active learning course was introduced at the University of Puget Sound entitled 'Mineral Resources and the Environment'. Various formative assessment and active learning techniques that have been effective in other courses were adapted and implemented to improve student learning, increase retention and broaden knowledge and understanding of course material. This was an elective course targeted towards upper-level undergraduate geology and environmental majors. The course provided an introduction to the mineral resources industry, discussing geological, environmental, societal and economic aspects, legislation and the processes involved in exploration, extraction, processing, reclamation/remediation and recycling of products. Lectures and associated weekly labs were linked in subject matter; relevant readings from the recent scientific literature were assigned and discussed in the second lecture of the week. Peer-based learning was facilitated through weekly reading assignments with peer-led discussions and through group research projects, in addition to in-class exercises such as debates. Writing and research skills were developed through student groups designing, carrying out and reporting on their own semester-long research projects around the lasting effects of the historical Ruston Smelter on the biology and water systems of Tacoma. The writing of their mini grant proposals and final project reports was carried out in stages to allow for feedback before the deadline. Speakers from industry were invited to share their specialist knowledge as guest lecturers, and students were encouraged to interact with them, with a view to employment opportunities. Formative assessment techniques included jigsaw exercises, gallery walks, placemat surveys, think pair share and take-home point summaries. 
Summative assessment included discussion leadership, exams, homework assignments, group projects, in-class exercises, field trips, and pre-discussion reading exercises.

  7. Conceptualizing urban adaptation to climate change: Findings from an applied adaptation assessment framework

    Johnson, Katie; BREIL, MARGARETHA

    2012-01-01

    Urban areas have particular sensitivities to climate change, and therefore adaptation to a warming planet represents a challenging new issue for urban policy makers in both the developed and developing world. Further to climate mitigation strategies implemented in various cities over the past 20 years, more recent efforts of urban management have also included actions taken to adapt to increasing temperatures, sea level and extreme events. Through the examination and comparison of seven citie...

  8. Applying machine learning classification techniques to automate sky object cataloguing

    Fayyad, Usama M.; Doyle, Richard J.; Weir, W. Nick; Djorgovski, Stanislav

    1993-08-01

    We describe the application of artificial intelligence machine learning techniques to the development of an automated tool for the reduction of a large scientific data set. The 2nd Mt. Palomar Northern Sky Survey is nearly completed. This survey provides comprehensive coverage of the northern celestial hemisphere in the form of photographic plates. The plates are being transformed into digitized images whose quality will probably not be surpassed in the next ten to twenty years. The images are expected to contain on the order of 10^7 galaxies and 10^8 stars. Astronomers wish to determine which of these sky objects belong to various classes of galaxies and stars. Unfortunately, the size of this data set precludes analysis in an exclusively manual fashion. Our approach is to develop a software system which integrates the functions of independently developed techniques for image processing and data classification. Digitized sky images are passed through image processing routines to identify sky objects and to extract a set of features for each object. These routines are used to help select a useful set of attributes for classifying sky objects. Then GID3 (Generalized ID3) and O-BTree, two inductive learning techniques, learn classification decision trees from examples. These classifiers are then applied to new data. This development process is highly interactive, with astronomer input playing a vital role. Astronomers refine the feature set used to construct sky object descriptions, and evaluate the performance of the automated classification technique on new data. This paper gives an overview of the machine learning techniques with an emphasis on their general applicability, describes the details of our specific application, and reports the initial encouraging results. The results indicate that our machine learning approach is well-suited to the problem. The primary benefit of the approach is increased data reduction throughput. Another benefit is
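
The train-then-apply loop above can be sketched as below, with scikit-learn's CART standing in for GID3/O-BTree and two synthetic plate features (brightness, ellipticity) standing in for the extracted attributes; all names and values are illustrative assumptions.

```python
# Learn a decision tree from labeled sky objects, then classify new ones.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 200
# first 100 objects: galaxies (dim, elongated); last 100: stars (bright, round)
brightness = rng.normal(loc=np.where(np.arange(n) < 100, 5.0, 10.0), scale=1.0)
ellipticity = rng.normal(loc=np.where(np.arange(n) < 100, 0.6, 0.1), scale=0.05)
X = np.column_stack([brightness, ellipticity])
y = np.array([0] * 100 + [1] * 100)       # 0 = galaxy, 1 = star

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# apply the learned classifier to previously unseen sky objects
new_objects = np.array([[4.8, 0.55], [10.3, 0.12]])
print(tree.predict(new_objects))  # → [0 1]
```

In the interactive process the abstract describes, an astronomer would inspect misclassifications, refine the feature set, and retrain.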

  9. Adaptive Communication Techniques for the Internet of Things

    Peng Du

    2013-03-01

    Full Text Available The vision for the Internet of Things (IoT) demands that material objects acquire communications and computation capabilities and become able to automatically identify themselves through standard protocols and open systems, using the Internet as their foundation. Yet, several challenges still must be addressed for this vision to become a reality. A core ingredient in such development is the ability of heterogeneous devices to communicate adaptively so as to make the best of limited spectrum availability and cope with competition, which is inevitable as more and more objects connect to the system. This survey provides an overview of current developments in this area, placing emphasis on wireless sensor networks that can provide IoT capabilities for material objects and on techniques that can be used in the context of systems employing low-power versions of the Internet Protocol (IP) stack. The survey introduces a conceptual model that facilitates the identification of opportunities for adaptation in each layer of the network stack. After a detailed discussion of specific approaches applicable to particular layers, we consider how sharing information across layers can facilitate further adaptation. We conclude with a discussion of future research directions.

  10. A New Local Adaptive Thresholding Technique in Binarization

    Singh, T Romen; Singh, O Imocha; Sinam, Tejmani; Singh, Kh Manglem

    2012-01-01

    Image binarization is the process of separating pixel values into two groups, white as background and black as foreground. Thresholding plays a major role in the binarization of images. Thresholding can be categorized into global thresholding and local thresholding. In images with a uniform contrast distribution of background and foreground, such as document images, global thresholding is more appropriate. In degraded document images, where considerable background noise or variation in contrast and illumination exists, many pixels cannot be easily classified as foreground or background; in such cases, binarization with local thresholding is more appropriate. This paper describes a locally adaptive thresholding technique that removes background by using the local mean and mean deviation. Normally the local mean computation time depends on the window size. Our technique uses an integral sum image as a prior processing step to calculate the local mean. It does not involve calculations of standard deviations as in other ...
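
The integral-sum-image trick above (local mean in O(1) per pixel, independent of window size) can be sketched as follows; the simple mean-minus-offset criterion in `binarize` is an assumption of this sketch and not the paper's exact mean-deviation formula.

```python
import numpy as np

def integral_local_mean(img, w):
    """Local mean over a (2w+1)x(2w+1) window at every pixel, computed in
    O(1) per pixel from an integral (summed-area) image; borders are
    handled by edge padding."""
    k = 2 * w + 1
    padded = np.pad(img.astype(np.float64), w, mode="edge")
    ii = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    ii[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
    # window sum via four corner lookups in the integral image
    window_sum = ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]
    return window_sum / k ** 2

def binarize(img, w=7, offset=10):
    """Black foreground (0) where a pixel is darker than its local mean
    minus an offset; white background (255) elsewhere."""
    mean = integral_local_mean(img, w)
    return np.where(img < mean - offset, 0, 255).astype(np.uint8)
```

Because the integral image is built once, changing the window size `w` changes only the four corner indices, not the per-pixel cost.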

  11. Image analysis technique applied to lock-exchange gravity currents

    Nogueira, Helena I. S.; Adduce, Claudia; Alves, Elsa; Franca, Mário J.

    2013-04-01

    An image analysis technique is used to estimate the two-dimensional instantaneous density field of unsteady gravity currents produced by full-depth lock-release of saline water. An experiment reproducing a gravity current was performed in a 3.0 m long, 0.20 m wide and 0.30 m deep Perspex flume with a horizontal smooth bed, and recorded with a 25 Hz CCD video camera under controlled light conditions. Using dye concentration as a tracer, a calibration procedure was established for each pixel in the image, relating the amount of dye uniformly distributed in the tank to the greyscale values in the corresponding images. The results are evaluated and corrected by applying the mass conservation principle within the experimental tank. The procedure is a simple way to assess the time-varying density distribution within the gravity current, allowing the investigation of gravity current dynamics and mixing processes.
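
The per-pixel calibration step can be sketched as below: a stack of calibration frames, each taken with a known uniform dye concentration, yields a greyscale-to-concentration curve at every pixel. A linear fit per pixel is an assumption of this sketch; the actual experiment may use a different functional form.

```python
import numpy as np

def calibrate(frames, concentrations):
    """frames: (n_cal, H, W) greyscale calibration images, each recorded
    with a known uniform dye concentration. Returns per-pixel slope and
    intercept of concentration as a linear function of greyscale."""
    n, H, W = frames.shape
    g = frames.reshape(n, -1).astype(float)        # (n_cal, H*W)
    c = np.asarray(concentrations, dtype=float)
    # least-squares line c = a*g + b, fitted independently at each pixel
    g_mean, c_mean = g.mean(axis=0), c.mean()
    a = ((g - g_mean) * (c - c_mean)[:, None]).sum(0) / ((g - g_mean) ** 2).sum(0)
    b = c_mean - a * g_mean
    return a.reshape(H, W), b.reshape(H, W)

def density_field(frame, a, b):
    """Convert one experiment frame to a concentration (density) map."""
    return a * frame + b
```

The mass-conservation correction mentioned in the abstract would then rescale each `density_field` map so the total dye mass in the tank stays constant in time.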

  12. A Competency-Based Guided-Learning Algorithm Applied on Adaptively Guiding E-Learning

    Hsu, Wei-Chih; Li, Cheng-Hsiu

    2015-01-01

    This paper presents a new algorithm called competency-based guided-learning algorithm (CBGLA), which can be applied on adaptively guiding e-learning. Computational process analysis and mathematical derivation of competency-based learning (CBL) were used to develop the CBGLA. The proposed algorithm could generate an effective adaptively guiding…

  13. Three-dimensional region-based adaptive image processing techniques for volume visualization applications

    de Deus Lopes, Roseli; Zuffo, Marcelo K.; Rangayyan, Rangaraj M.

    1996-04-01

    Recent advances in three-dimensional (3D) imaging techniques have expanded the scope of applications of volume visualization to many areas such as medical imaging, scientific visualization, robotic vision, and virtual reality. Advanced image filtering, enhancement, and analysis techniques are being developed in parallel in the field of digital image processing. Although the fields cited have many aspects in common, it appears that many of the latest developments in image processing are not being applied to the fullest extent possible in visualization. It is common to encounter rather simple and elementary image pre-processing operations in visualization and 3D imaging applications. The purpose of this paper is to present an overview of selected topics from recent developments in adaptive image processing and to demonstrate or suggest their applications in volume visualization. The techniques include adaptive noise removal; improvement of contrast and visibility of objects; space-variant deblurring and restoration; segmentation-based lossless coding for data compression; and perception-based measures for analysis, enhancement, and rendering. The techniques share the common base of identification of adaptive regions by region growing, which lends them a perceptual basis related to the human visual system. Preliminary results obtained with some of the techniques implemented so far are used to illustrate the concepts involved, and to indicate potential performance capabilities of the methods.

  15. Dust tracking techniques applied to the STARDUST facility: First results

    Malizia, A., E-mail: malizia@ing.uniroma2.it [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Camplani, M. [Grupo de Tratamiento de Imágenes, E.T.S.I de Telecomunicación, Universidad Politécnica de Madrid (Spain); Gelfusa, M. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Lupelli, I. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); EURATOM/CCFE Association, Culham Science Centre, Abingdon (United Kingdom); Richetta, M.; Antonelli, L.; Conetta, F.; Scarpellini, D.; Carestia, M.; Peluso, E.; Bellecci, C. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Salgado, L. [Grupo de Tratamiento de Imágenes, E.T.S.I de Telecomunicación, Universidad Politécnica de Madrid (Spain); Video Processing and Understanding Laboratory, Universidad Autónoma de Madrid (Spain); Gaudio, P. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy)

    2014-10-15

    Highlights: •Use of an experimental facility, STARDUST, to analyze the dust resuspension problem inside the tokamak in case of loss of vacuum accident. •PIV technique implementation to track the dust during a LOVA reproduction inside STARDUST. •Data imaging techniques to analyze the dust velocity field: first results and data discussion. -- Abstract: An important issue related to future nuclear fusion reactors fueled with deuterium and tritium is the creation of large amounts of dust due to several mechanisms (disruptions, ELMs and VDEs). The dust size expected in nuclear fusion experiments (such as ITER) is of the order of microns (between 0.1 and 1000 μm). Almost the total amount of this dust remains in the vacuum vessel (VV). This radiological dust can re-suspend in case of LOVA (loss of vacuum accident), and these phenomena can cause explosions and serious damage to the health of the operators and to the integrity of the device. The authors have developed a facility, STARDUST, in order to reproduce thermo-fluid-dynamic conditions comparable to those expected inside the VV of the next generation of experiments such as ITER in case of LOVA. The dust used inside the STARDUST facility presents particle sizes and physical characteristics comparable with those created inside the VV of nuclear fusion experiments. In this facility an experimental campaign has been conducted with the purpose of tracking the dust re-suspended at low pressurization rates (comparable to those expected in case of LOVA in ITER and suggested by the General Safety and Security Report ITER-GSSR) using a fast camera with a frame rate from 1000 to 10,000 images per second. The velocity fields of the mobilized dust are derived from the imaging of a two-dimensional slice of the flow illuminated by an optically adapted laser beam. The aim of this work is to demonstrate the possibility of dust tracking by means of image processing with the objective of determining the velocity field values

  16. Analytical techniques applied to study cultural heritage objects

    Rizzutto, M.A.; Curado, J.F.; Bernardes, S.; Campos, P.H.O.V.; Kajiya, E.A.M.; Silva, T.F.; Rodrigues, C.L.; Moro, M.; Tabacniks, M.; Added, N., E-mail: rizzutto@if.usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Instituto de Fisica

    2015-07-01

    The scientific study of artistic and cultural heritage objects has been routinely performed in Europe and the United States for decades. In Brazil this research area is growing, mainly through the use of physical and chemical characterization methods. Since 2003 the Group of Applied Physics with Particle Accelerators of the Physics Institute of the University of Sao Paulo (GFAA-IF) has been working with various methodologies for material characterization and analysis of cultural objects, initially using ion beam analysis performed with Particle Induced X-Ray Emission (PIXE), Rutherford Backscattering (RBS) and, more recently, Ion Beam Induced Luminescence (IBIL) for the determination of the elements and chemical compounds in the surface layers. These techniques are widely used in the Laboratory of Materials Analysis with Ion Beams (LAMFI-USP). Recently, the GFAA expanded its studies to other possibilities of analysis enabled by imaging techniques which, coupled with elemental and compositional characterization, provide a better understanding of the materials and techniques used in the creative process in the manufacture of objects. The imaging analyses, mainly used to examine and document artistic and cultural heritage objects, are performed through images with visible light, infrared reflectography (IR), fluorescence with ultraviolet radiation (UV), tangential light and digital radiography. Expanding the possibilities of analysis further, new capabilities were added using portable equipment such as Energy Dispersive X-Ray Fluorescence (ED-XRF) and Raman Spectroscopy that can be used for analysis 'in situ' at the museums. The results of these analyses are providing valuable information on the manufacturing process and have provided new information on objects of different University of Sao Paulo museums. Improving the arsenal of cultural heritage analysis, a 3D robotic stage for the precise positioning of samples in the external beam setup was recently constructed.

  18. Two fiber optics communication adapters apply to the control system of HIRFL-CSR

    The authors introduce two kinds of fiber-optic communication adapters applied in the HIRFL-CSR project, covering the design of the two adapters, their operational principle, hardware construction and field of application, and showing how equipment with a standard RS232 or RS485 interface can be controlled over long distances through the two adapters. Replacing the RS485 bus with optical fiber and the 485-Fiber Adapter solved the problem of communication interference. The control requirements of the national large-scale science project HIRFL-CSR are fulfilled. (authors)

  19. Applying advanced digital signal processing techniques in industrial radioisotopes applications

    Radioisotopes can be used to obtain signals or images that carry information about the interior of industrial systems. The main problems with these techniques are the difficulty of interpreting the obtained signals or images and the need for skilled experts in the interpretation process. At present, the interpretation of the output data from these applications is performed mainly manually, depending heavily on the skills and experience of trained operators. This process is time consuming and the results typically suffer from inconsistency and errors. The objective of the thesis is to apply advanced digital signal processing techniques to improve the treatment and interpretation of the output data from different Industrial Radioisotopes Applications (IRA). This thesis focuses on two IRA: Residence Time Distribution (RTD) measurement and defect inspection of welded pipes using a gamma source (gamma radiography). In the RTD measurement application, this thesis presents methods for pre-processing and modeling of RTD signals. Simulation results are presented for two case studies. The first case study is a laboratory experiment measuring the RTD in a water flow rig. The second is an experiment measuring the RTD in a phosphate production unit. The thesis proposes an approach for RTD signal identification in the presence of noise. In this approach, after signal processing, Mel Frequency Cepstral Coefficients (MFCCs) and polynomial coefficients are extracted from the processed signal or from one of its transforms. The Discrete Wavelet Transform (DWT), Discrete Cosine Transform (DCT), and Discrete Sine Transform (DST) have been tested and compared for efficient feature extraction. Neural networks have been used for matching the extracted features. Furthermore, the Power Density Spectrum (PDS) of the RTD signal has also been used instead of the discrete
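
One feature path described above can be sketched as follows: take the DCT of an RTD signal and keep its leading coefficients as a compact feature vector for later matching. The RTD curve here is a synthetic tanks-in-series-like shape; the MFCC path and the neural matcher are omitted.

```python
# DCT-based feature extraction from a (synthetic) residence time
# distribution signal.
import numpy as np
from scipy.fft import dct

t = np.linspace(0, 60, 512)                       # time axis, seconds
rtd = (t / 8.0) * np.exp(-t / 8.0)                # idealized RTD curve
rtd += 0.01 * np.random.default_rng(0).normal(size=t.size)   # sensor noise

coeffs = dct(rtd, norm="ortho")                   # energy compacts to low order
features = coeffs[:16]                            # leading DCT coefficients
print(features.shape)  # → (16,)
```

A simple matcher could then compare the 16-coefficient vectors of new signals against stored templates (e.g. by Euclidean distance) before resorting to a trained network.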

  20. ESR dating technique applied to Pleistocene Corals (Barbados Island)

    In this work we applied the ESR (Electron Spin Resonance) dating technique to a coral coming from Barbados island. After a preliminary purification treatment, coral samples were milled and separated into different granulometry groups. Powder samples with granulometries between 125 μm and 250 μm and between 250 μm and 500 μm were irradiated at the Calliope 60Co radioisotope source (R.C. ENEA-Casaccia) at doses between 10 and 3300 Gy, and their radiation-induced ESR signals were measured with a Bruker EMS104 spectrometer. The signal/noise ratio turned out to be highest for the granulometry between 250 μm and 500 μm, and consequently the paleo-curve was constructed using the ESR signals for this granulometry. The paleo-curve was fitted with the exponential growth function y = a - b·e^(-cx), which describes the behaviour of the curve well, including in the saturation region. Extrapolating the paleo-dose and knowing the annual dose (999±79 μGy/y), we calculated a coral age of 156±12 ky, which is in good agreement with results obtained on corals from the same region by other authors.
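
The dating step above can be worked through numerically: fit the saturating growth curve y = a - b·e^(-cx) to (added dose, ESR intensity) pairs, extrapolate the equivalent (paleo-)dose from the zero crossing, and divide by the annual dose. The data below are synthetic, chosen only to reproduce an age near the reported one.

```python
import numpy as np
from scipy.optimize import curve_fit

def growth(x, a, b, c):
    """Saturating growth function used for the paleo-curve."""
    return a - b * np.exp(-c * x)

# synthetic additive-dose data with an equivalent dose of ~155.8 Gy
equivalent_dose = 155.8                                  # Gy (illustrative)
dose = np.array([0, 100, 300, 600, 1200, 2400, 3300], float)
signal = 10.0 * (1 - np.exp(-0.0008 * (dose + equivalent_dose)))

(a, b, c), _ = curve_fit(growth, dose, signal, p0=(10.0, 10.0, 1e-3))
paleodose = np.log(a / b) / c            # Gy: dose where the fit crosses zero
annual_dose = 999e-6                     # Gy per year (999 μGy/y)
age_ky = paleodose / annual_dose / 1e3
print(round(age_ky))  # → 156
```

The fitted curve crosses zero at x = -paleodose, i.e. the natural signal is removed by "subtracting" the accumulated dose; dividing by the annual dose rate converts dose to age.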

  1. Sensor Web Dynamic Measurement Techniques and Adaptive Observing Strategies

    Talabac, Stephen J.

    2004-01-01

    Sensor Web observing systems may have the potential to significantly improve our ability to monitor, understand, and predict the evolution of rapidly evolving, transient, or variable environmental features and events. This improvement will come about by integrating novel data collection techniques, new or improved instruments, emerging communications technologies and protocols, sensor mark-up languages, and interoperable planning and scheduling systems. In contrast to today's observing systems, "event-driven" sensor webs will synthesize real- or near-real time measurements and information from other platforms and then react by reconfiguring the platforms and instruments to invoke new measurement modes and adaptive observation strategies. Similarly, "model-driven" sensor webs will utilize environmental prediction models to initiate targeted sensor measurements or to use a new observing strategy. The sensor web concept contrasts with today's data collection techniques and observing system operations concepts where independent measurements are made by remote sensing and in situ platforms that do not share, and therefore cannot act upon, potentially useful complementary sensor measurement data and platform state information. This presentation describes NASA's view of event-driven and model-driven Sensor Webs and highlights several research and development activities at the Goddard Space Flight Center.

  2. Adaptive array technique for differential-phase reflectometry in QUEST

    Idei, H., E-mail: idei@triam.kyushu-u.ac.jp; Hanada, K.; Zushi, H. [Research Institute for Applied Mechanics, Kyushu Univ., Kasuga, 816-8560 Japan (Japan); Nagata, K.; Mishra, K.; Itado, T.; Akimoto, R. [Interdisciplinary Grad. School of Eng. Sci., Kyushu Univ., Kasuga, 816-8580 Japan (Japan); Yamamoto, M. K. [Research Institute for Sustainable Humanosphere, Kyoto Univ., Uji, 611-0011 Japan (Japan)

    2014-11-15

    A phased array antenna (PAA) was considered as the launching and receiving antenna in reflectometry to attain good directivity in the applied microwave range. A well-focused beam was obtained in the launching-antenna application, and differential-phase evolution was properly measured using a metal reflector plate in a proof-of-principle experiment at low-power test facilities. Differential-phase evolution was also evaluated using the PAA in the Q-shu University Experiment with Steady State Spherical Tokamak (QUEST). A beam-forming technique was applied in receiving phased-array antenna measurements. In the QUEST device, which should be considered a large oversized cavity, a standing wave effect was significantly observed as perturbed phase evolution. A new approach using the derivative of the measured field with respect to the propagating wavenumber was proposed to eliminate the standing wave effect.

  3. PIXE and PDMS techniques applied to environmental control

    Airborne particles containing metals are one of the main sources of worker and environmental exposure during mineral mining and milling processes. In order to evaluate the risk to workers and to the environment due to mineral processing, it is necessary to determine the concentration and the kinetics of the particles. Furthermore, the chemical composition, particle size and elemental mass concentration in the fine fraction of the aerosol are necessary for risk evaluation. Mineral sands are processed to obtain rutile (TiO2), ilmenite (TiFeO3), zircon (ZrSiO4) and monazite (RE3(PO4)) concentrates. The aim of this work was to apply the PIXE (Particle Induced X-ray Emission) and PDMS (Plasma Desorption Mass Spectrometry) methods to characterize mineral dust particles generated at this plant. The mass spectrum of positive ions of cerium oxide shows that the cerium is associated with oxygen (CeOn). Compounds of thorium (ThO2 and ThSiO4), Sr, Ca and Zr were also observed in this spectrum. The positive-ion mass spectrum of the monazite concentrate shows that Th was associated with oxygen (ThOn) and Ce with (POn); it also shows compounds of other rare earths such as La, Nd and Y. Ions of ZrSiO3, TiO2 and TiFeO3 present in the mass spectra indicate that the monazite concentrate contains zircon, rutile and ilmenite. Compounds of Cl, Ca, Mn, V, Cu, Zn and Pb were also identified in the mass spectrum. This study shows that the PIXE and PDMS techniques can be used as complementary methods for aerosol analysis. (author)

  4. Indirect techniques for adaptive input-output linearization of non-linear systems

    Teel, Andrew; Kadiyala, Raja; Kokotovic, Peter; Sastry, Shankar

    1991-01-01

    A technique of indirect adaptive control based on certainty equivalence for input-output linearization of nonlinear systems is proven convergent. It does not suffer from the overparameterization drawbacks of direct adaptive control techniques on the same plant. This paper also contains a semi-indirect adaptive controller which has several attractive features of both the direct and indirect schemes.

  5. Applied Taxonomy Techniques Intended for Strenuous Random Forest Robustness

    Tarannum A. Bloch

    2011-11-01

    Full Text Available Globalization and economic trade have shifted the scrutiny of facts from data to knowledge. For this purpose, data mining techniques have been employed in numerous real-world applications. This paper presents an appraisal of assorted data mining techniques on diverse data sets. Scores of data mining techniques for prediction and classification are available; this article includes the most prominent ones: J48, random forest, Naïve Bayes, AdaBoostM1 and Bagging. Experimental results demonstrate the robustness of the random forest classifier by computing the accuracy, the weighted average ROC value and the kappa statistic on various data sets.
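
A small sketch of the comparison described above: accuracy and Cohen's kappa for a random forest against a single decision tree and naive Bayes. The data set is synthetic, and AdaBoostM1 and Bagging are omitted for brevity.

```python
# Compare classifiers on accuracy and kappa, as in the robustness study.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (RandomForestClassifier(n_estimators=100, random_state=0),
              DecisionTreeClassifier(random_state=0),
              GaussianNB()):
    pred = model.fit(Xtr, ytr).predict(Xte)
    print(type(model).__name__,
          round(accuracy_score(yte, pred), 3),
          round(cohen_kappa_score(yte, pred), 3))
```

Kappa corrects accuracy for chance agreement, which is why the paper reports it alongside raw accuracy and weighted-average ROC.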

  6. GLOBAL COMMUNICATION TECHNIQUES TO BE APPLIED BY MULTINATIONAL COMPANIES

    Alexandru Ionescu; Nicoleta Rossela Dumitru

    2011-01-01

    Global communication is based on a very clear basic principle: in a company, everything communicates. Each expression of communication should be considered a vital element of enterprise identity and personality. Global communication also develops on the basis of the company's history and heritage, culture and future. Being rooted in each project's ambition, global communication identifies and integrates the core values that will allow the company to grow and adapt to fast environmental changes....

  7. Multicriterial Evaluation of Applying Japanese Management Concepts, Methods and Techniques

    Podobiński, Mateusz

    2014-01-01

    Japanese management concepts, methods and techniques refer to work organization and improvements to companies' functioning. They appear in numerous Polish companies, especially manufacturing ones. Cultural differences are a major impediment to their implementation. Nevertheless, the advantages of using Japanese management concepts, methods and techniques motivate management to implement them in the company. The author shows research results, which refer to advanta...

  9. Photoacoustic technique applied to the study of skin and leather

    Vargas, M.; Varela, J.; Hernández, L.; González, A.

    1998-08-01

    In this paper the photoacoustic technique is applied to bull skin to determine its thermal and optical properties as a function of the steps of the tanning process. Our results show that the photoacoustic technique is sensitive to the physical changes this kind of material undergoes during the tanning process.

  10. GPS-based ionospheric tomography with a constrained adaptive simultaneous algebraic reconstruction technique

    Wen Debao; Zhang Xiao; Tong Yangjin; Zhang Guangsheng; Zhang Min; Leng Rusong

    2015-03-01

    In this paper, a constrained adaptive simultaneous algebraic reconstruction technique (CASART) is presented to obtain high-quality reconstructions from insufficient projections. In accordance with the continuous smoothness of the variations of ionospheric electron density (IED) among neighbouring voxels, a Gaussian weighting function is introduced to constrain the tomography system in the new method. It can resolve the dependence on initial values for those voxels not traversed by any GPS rays. A numerical simulation scheme is devised to validate the feasibility of the new algorithm, and some comparisons are made to demonstrate its superiority. Finally, actual GPS observations are applied to further validate the feasibility and superiority of the new algorithm.
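
    The algebraic-reconstruction family that CASART extends updates the electron-density vector ray by ray; a minimal Kaczmarz/ART sketch in NumPy on a toy system (CASART's Gaussian smoothing constraint is not reproduced here):

    ```python
    import numpy as np

    def art(A, b, n_sweeps=200, lam=1.0):
        """Kaczmarz / ART: for each ray (row of A), project the current
        estimate onto the hyperplane a_i . x = b_i."""
        x = np.zeros(A.shape[1])
        for _ in range(n_sweeps):
            for a_i, b_i in zip(A, b):
                x += lam * (b_i - a_i @ x) / (a_i @ a_i) * a_i
        return x

    # toy "projection" system: 2 rays traversing 2 voxels
    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([3.0, 5.0])
    print(art(A, b))  # ≈ [0.8, 1.4]
    ```

    In real GPS tomography the system is underdetermined, which is why CASART adds the smoothness constraint for voxels no ray traverses.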

  11. APPLYING ARTIFICIAL INTELLIGENCE TECHNIQUES TO HUMAN-COMPUTER INTERFACES

    Sonnenwald, Diane H.

    1988-01-01

    A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone and data networks. Three artificial intelligence (AI) techniques used in UIMS are discussed, namely, frame representation, object-oriented programming languages, and rule-based systems. The UIMS architecture is presented, and the structure of the UIMS is explained in terms of the AI techniques.

  12. Manifold learning techniques and model reduction applied to dissipative PDEs

    Sonday, Benjamin E.; Singer, Amit; Gear, C. William; Kevrekidis, Ioannis G.

    2010-01-01

    We link nonlinear manifold learning techniques for data analysis/compression with model reduction techniques for evolution equations with time scale separation. In particular, we demonstrate a "nonlinear extension" of the POD-Galerkin approach to obtaining reduced dynamic models of dissipative evolution equations. The approach is illustrated through a reaction-diffusion PDE, and the performance of different simulators on the full and the reduced models is compared. We also discuss the relati...
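
    The POD step underlying the POD-Galerkin approach amounts to an SVD of a snapshot matrix followed by projection onto the leading modes; a toy NumPy sketch with synthetic rank-2 data (not the paper's reaction-diffusion PDE):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # snapshot matrix: each column is the state at one time instant
    n_space, n_time = 50, 20
    modes_true = rng.standard_normal((n_space, 2))   # two latent spatial modes
    coeffs = rng.standard_normal((2, n_time))
    X = modes_true @ coeffs                          # rank-2 dynamics

    # POD: left singular vectors, ordered by energy (singular value)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = 2
    Ur = U[:, :r]                                    # reduced basis

    # project to r coefficients and lift back to the full space
    X_red = Ur @ (Ur.T @ X)
    print(np.linalg.norm(X - X_red))  # ≈ 0 (machine precision): 2 modes suffice
    ```

    The "nonlinear extension" in the paper replaces this linear subspace with a curved manifold learned from the data.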

  13. Metamodeling Techniques Applied to the Design of Reconfigurable Control Applications

    Luca Ferrarini

    2008-02-01

    In order to realize autonomous manufacturing systems in environments characterized by high dynamics and high task complexity, it is necessary to improve control system modelling and performance. This requires the use of better and reusable abstractions. In this paper, we explore metamodel techniques as a foundation for the solution of this problem. The increasing popularity of model-driven approaches and a new generation of tools supporting metamodel techniques are changing the software engineering landscape, boosting the adoption of new methodologies for control application development.

  14. Applying Web Usage Mining for Personalizing Hyperlinks in Web-Based Adaptive Educational Systems

    Romero, Cristobal; Ventura, Sebastian; Zafra, Amelia; de Bra, Paul

    2009-01-01

    Nowadays, the application of Web mining techniques in e-learning and Web-based adaptive educational systems is increasing exponentially. In this paper, we propose an advanced architecture for a personalization system to facilitate Web mining. A specific Web mining tool is developed and a recommender engine is integrated into the AHA! system in…
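
    At its simplest, the kind of Web usage mining behind such a recommender can be sketched as counting page co-occurrences in visit sessions (a toy illustration; the AHA! integration described in the paper is far richer):

    ```python
    from collections import defaultdict
    from itertools import combinations

    def mine_cooccurrence(sessions):
        """Count how often two pages appear in the same visit session."""
        co = defaultdict(lambda: defaultdict(int))
        for pages in sessions:
            for a, b in combinations(set(pages), 2):
                co[a][b] += 1
                co[b][a] += 1
        return co

    def recommend(co, page, k=2):
        """Pages most often visited together with `page` (hypothetical names)."""
        ranked = sorted(co[page].items(), key=lambda kv: -kv[1])
        return [p for p, _ in ranked[:k]]

    sessions = [["intro", "loops", "arrays"],
                ["intro", "loops"],
                ["intro", "loops"],
                ["loops", "arrays"]]
    co = mine_cooccurrence(sessions)
    print(recommend(co, "loops"))  # ['intro', 'arrays']
    ```

    A real recommender engine would also weight recency and learner profile, as the personalization architecture in the paper does.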

  15. Flipped Classroom Adapted to the ARCS Model of Motivation and Applied to a Physics Course

    Asiksoy, Gülsüm; Özdamli, Fezile

    2016-01-01

    This study aims to determine the effect on the achievement, motivation and self-sufficiency of students of the flipped classroom approach adapted to Keller's ARCS (Attention, Relevance, Confidence and Satisfaction) motivation model and applied to a physics course. The study involved 66 students divided into two classes of a physics course. The…

  16. Software factory techniques applied to Process Control at CERN

    Dutour, MD

    2007-01-01

    The CERN Large Hadron Collider (LHC) requires constant monitoring and control of a large number of parameters to guarantee operational conditions. For this purpose, a methodology called UNICOS (UNIfied Industrial COntrols Systems) has been implemented to standardize the design of process control applications. To further accelerate the development of these applications, we migrated our existing UNICOS tooling suite toward a software factory in charge of assembling project, domain and technical information seamlessly into deployable PLC (Programmable Logic Controller) – SCADA (Supervisory Control And Data Acquisition) systems. This software factory delivers consistently high quality by reducing human error and repetitive tasks, and adapts to user specifications in a cost-efficient way. Hence, this production tool is designed to encapsulate and hide the PLC and SCADA target platforms, enabling the experts to focus on the business model rather than specific syntaxes and grammars. Based on industry standard software...

  17. Flash radiographic technique applied to fuel injector sprays

    A flash radiographic technique, using 50 ns exposure times, was used to study the pattern and density distribution of a fuel injector spray. The experimental apparatus and method are described. An 85 kVp flash x-ray generator, designed and fabricated at the Lawrence Livermore Laboratory, is utilized. Radiographic images, recorded on standard x-ray films, are digitized and computer processed

  18. X-diffraction technique applied for nano system metrology

    The applications of nanomaterials are growing fast in all industrial sectors, creating a strong need for nanometrology and standardization in the nanomaterial area. The great potential of the X-ray diffraction technique in this field is illustrated with the examples of metals, metal oxides and pharmaceuticals.

  19. A practical application and implementation of adaptive techniques using neural networks in autoreclose protection and system control

    Gardiner, I.P.

    1997-12-31

    Reyrolle Protection have carried out research in conjunction with Bath University into applying adaptive techniques to autoreclose schemes and have produced an algorithm based on an artificial neural network which can recognise when it is "safe to reclose" and when it is "unsafe to reclose". This algorithm is based on examination of the induced voltage on the faulted phase and, by applying pattern recognition techniques, determines when the secondary arc extinguishes. Significant operational advantages can now be realised using this technology, resulting in changes to existing operational philosophy. Conventional autoreclose relays applied to the system have followed the philosophy of "reclose to restore the system", but a progression from this philosophy to "reclose only if safe to do so" can now be made using this adaptive approach. With this adaptive technique the main requirement remains to protect the investment, i.e. the system, by reducing damaging shocks and voltage dips and maintaining continuity of supply. The adaptive technique can be incorporated into a variety of schemes which will act to further this goal in comparison with conventional autoreclose. (Author)

  20. Technology Assessment of Dust Suppression Techniques Applied During Structural Demolition

    Boudreaux, J.F.; Ebadian, M.A.; Williams, P.T.; Dua, S.K.

    1998-10-20

    Hanford, Fernald, Savannah River, and other sites are currently reviewing technologies that can be implemented to demolish buildings in a cost-effective manner. In order to demolish a structure properly and, at the same time, minimize the amount of dust generated from a given technology, an evaluation must be conducted to choose the most appropriate dust suppression technology given site-specific conditions. Thus, the purpose of this research, which was carried out at the Hemispheric Center for Environmental Technology (HCET) at Florida International University, was to conduct an experimental study of dust aerosol abatement (dust suppression) methods as applied to nuclear D and D. This experimental study targeted the problem of dust suppression during the demolition of nuclear facilities. The resulting data were employed to assist in the development of mathematical correlations that can be applied to predict dust generation during structural demolition.

  1. Traffic visualization - applying information visualization techniques to enhance traffic planning

    Picozzi, Matteo; Verdezoto, Nervo; Pouke, Matti; Vatjus-Anttila, Jarkko; Quigley, Aaron John

    2013-01-01

    In this paper, we present a space-time visualization to provide a city's decision-makers with the ability to analyse and uncover important "city events" in an understandable manner for city planning activities. An interactive Web mashup visualization is presented that integrates several visualization techniques to give a rapid overview of traffic data. We illustrate our approach as a case study for traffic visualization systems, using datasets from the city of Oulu that can be extended to other city...

  2. Ion beam analysis techniques applied to large scale pollution studies

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 μm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs

  3. Applying Website Usability Testing Techniques to Promote E-services

    Abdel Nasser H. Zaied; Hassan, Mohamed M.; Islam S. Mohamed

    2015-01-01

    In this competitive world, websites are considered to be a key aspect of any organization’s competitiveness. In addition to visual esthetics, usability of a website is a strong determinant for user’s satisfaction and pleasure. However, lack of appropriate techniques and attributes for measuring usability may constrain the usefulness of a website. To address this issue, we conduct a statistical study to evaluate the usability levels of e-learning and e-training websites based on human (user) p...

  4. Signal Processing Techniques Applied in RFI Mitigation of Radio Astronomy

    Sixiu Wang; Zhengwen Sun; Weixia Wang; Liangquan Jia

    2012-01-01

    Radio broadcast and telecommunications are present at different power levels everywhere on Earth. Radio Frequency Interference (RFI) substantially limits the sensitivity of existing radio telescopes in several frequency bands and may prove to be an even greater obstacle for the next generation of telescopes (or arrays) to overcome. A variety of RFI detection and mitigation techniques have been developed in recent years. This study describes various signal processing methods of RFI mitigation in radi...

  5. Applying a Splitting Technique to Estimate Electrical Grid Reliability

    Wadman, Wander; Crommelin, Daan; Frank, Jason; Pasupathy, R.; Kim, S.-H.; Tolk, A.; Hill, R; Kuhl, M.E.

    2013-01-01

    As intermittent renewable energy penetrates electrical power grids more and more, assessing grid reliability is of increasing concern for grid operators. Monte Carlo simulation is a robust and popular technique to estimate indices for grid reliability, but the involved computational intensity may be too high for typical reliability analyses. We show that various reliability indices can be expressed as expectations depending on the rare event probability of a so-called power curtailment, and e...
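
    The splitting idea is to factor a rare-event probability into a product of more frequent conditional crossing probabilities; a schematic two-level sketch on a toy random walk (not the authors' power-curtailment model):

    ```python
    import random

    random.seed(1)

    def walk_crosses(start, steps, level):
        """Run a +/-1 random walk; return (crossed?, state at crossing)."""
        x = start
        for _ in range(steps):
            x += random.choice((-1, 1))
            if x >= level:
                return True, x
        return False, x

    # estimate P(max of walk >= 8) as P(reach 4) * P(reach 8 | reached 4)
    N, R, steps = 2000, 20, 50
    stage1 = [walk_crosses(0, steps, 4) for _ in range(N)]
    hits1 = [x for ok, x in stage1 if ok]          # states frozen at level 1
    p1 = len(hits1) / N

    # split: restart R continuations from every level-1 crossing state
    hits2 = sum(walk_crosses(x, steps, 8)[0] for x in hits1 for _ in range(R))
    p2 = hits2 / (len(hits1) * R) if hits1 else 0.0
    print(p1 * p2)  # splitting estimate of the rare-event probability
    ```

    Each stage only needs to observe a moderately rare crossing, which is what makes splitting cheaper than crude Monte Carlo for very rare power curtailments.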

  6. Ion beam analysis techniques applied to large scale pollution studies

    Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1993-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 μm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  7. Enhanced nonlinear iterative techniques applied to a nonequilibrium plasma flow

    The authors study the application of enhanced nonlinear iterative methods to the steady-state solution of a system of two-dimensional convection-diffusion-reaction partial differential equations that describe the partially ionized plasma flow in the boundary layer of a tokamak fusion reactor. This system of equations is characterized by multiple time and spatial scales and contains highly anisotropic transport coefficients due to a strong imposed magnetic field. They use Newton's method to linearize the nonlinear system of equations resulting from an implicit, finite volume discretization of the governing partial differential equations, on a staggered Cartesian mesh. The resulting linear systems are neither symmetric nor positive definite, and are poorly conditioned. Preconditioned Krylov iterative techniques are employed to solve these linear systems. They investigate both a modified and a matrix-free Newton-Krylov implementation, with the goal of reducing CPU cost associated with the numerical formation of the Jacobian. A combination of a damped iteration, mesh sequencing, and a pseudotransient continuation technique is used to enhance global nonlinear convergence and CPU efficiency. GMRES is employed as the Krylov method with incomplete lower-upper (ILU) factorization preconditioning. The goal is to construct a combination of nonlinear and linear iterative techniques for this complex physical problem that optimizes trade-offs between robustness, CPU time, memory requirements, and code complexity. It is shown that a mesh sequencing implementation provides significant CPU savings for fine grid calculations. Performance comparisons of modified Newton-Krylov and matrix-free Newton-Krylov algorithms will be presented
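
    The matrix-free Newton-Krylov idea is that GMRES only needs Jacobian-vector products, which a finite difference of the residual supplies; a condensed sketch on a toy 2-equation system (no preconditioning, mesh sequencing or damping, unlike the paper's solver):

    ```python
    import numpy as np

    def gmres(Av, b, m):
        """Minimal (unrestarted) GMRES via Arnoldi + least squares."""
        n = b.size
        beta = np.linalg.norm(b)
        if beta == 0.0:
            return np.zeros(n)
        Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
        Q[:, 0] = b / beta
        k = m
        for j in range(m):
            w = Av(Q[:, j])
            for i in range(j + 1):                    # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w = w - H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:                   # happy breakdown
                k = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(k + 1); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        return Q[:, :k] @ y

    def newton_krylov(F, u, iters=20, eps=1e-7):
        """Newton's method; each step solves J du = -F(u) matrix-free."""
        for _ in range(iters):
            Fu = F(u)
            Jv = lambda v: (F(u + eps * v) - Fu) / eps  # FD Jacobian-vector product
            u = u + gmres(Jv, -Fu, m=u.size)
        return u

    # toy nonlinear system with root (1, 2)
    F = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
    u = newton_krylov(F, np.array([1.5, 1.5]))
    print(u)  # ≈ [1. 2.]
    ```

    The Jacobian is never formed: only residual evaluations are needed, which is the CPU saving the matrix-free implementation in the paper targets.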

  8. Image analysis technique applied to lock-exchange gravity currents

    Nogueira, Helena; Adduce, Claudia; Alves, Elsa; Franca, Rodrigues Pereira Da; Jorge, Mario

    2013-01-01

    An image analysis technique is used to estimate the two-dimensional instantaneous density field of unsteady gravity currents produced by full-depth lock-release of saline water. An experiment reproducing a gravity current was performed in a 3.0 m long, 0.20 m wide and 0.30 m deep Perspex flume with horizontal smooth bed and recorded with a 25 Hz CCD video camera under controlled light conditions. Using dye concentration as a tracer, a calibration procedure was established for each pixel in th...

  9. Applying Supervised Opinion Mining Techniques on Online User Reviews

    Ion SMEUREANU

    2012-01-01

    In recent years, the spectacular development of web technologies has led to an enormous quantity of user-generated information in online systems. This large amount of information on web platforms makes them viable for use as data sources in applications based on opinion mining and sentiment analysis. The paper proposes an algorithm for detecting sentiments in movie user reviews, based on a naive Bayes classifier. We analyse the opinion mining domain, the techniques used in sentiment analysis and its applicability. We implemented the proposed algorithm, tested its performance, and suggested directions of development.
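
    The naive Bayes classifier the algorithm is based on reduces to word counts with add-one smoothing; a toy sketch on invented reviews (not the authors' corpus or implementation):

    ```python
    import math
    from collections import Counter

    def train(docs):
        """docs: list of (text, label). Returns per-class word counts, priors, vocab."""
        counts, priors = {}, Counter()
        for text, label in docs:
            priors[label] += 1
            counts.setdefault(label, Counter()).update(text.split())
        vocab = {w for c in counts.values() for w in c}
        return counts, priors, vocab

    def classify(text, counts, priors, vocab):
        """argmax_c log P(c) + sum_w log P(w|c), with add-one smoothing."""
        n = sum(priors.values())
        best, best_lp = None, -math.inf
        for label, c in counts.items():
            lp = math.log(priors[label] / n)
            denom = sum(c.values()) + len(vocab)
            for w in text.split():
                if w in vocab:
                    lp += math.log((c[w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

    docs = [("great great film", "pos"), ("wonderful movie", "pos"),
            ("terrible boring film", "neg"), ("bad movie", "neg")]
    model = train(docs)
    print(classify("great movie", *model))  # pos
    ```

    Real sentiment systems add negation handling and feature selection on top of this bag-of-words core.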

  10. Unconventional Coding Technique Applied to Multi-Level Polarization Modulation

    Rutigliano, G. G.; Betti, S.; Perrone, P.

    2016-05-01

    A new technique is proposed to improve information confidentiality in optical-fiber communications without bandwidth consumption. A pseudorandom vectorial sequence was generated by a dynamic system algorithm and used to codify a multi-level polarization modulation based on the Stokes vector. Optical-fiber birefringence, usually considered as a disturbance, was exploited to obfuscate the signal transmission. At the receiver end, the same pseudorandom sequence was generated and used to decode the multi-level polarization modulated signal. The proposed scheme, working at the physical layer, provides strong information security without introducing complex processing and thus latency.
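
    The encode/decode symmetry can be pictured as a modulo shift of each M-level polarization symbol by a key stream that both ends regenerate; a toy sketch in which a seeded PRNG stands in for the paper's dynamic-system sequence generator:

    ```python
    import random

    M = 8  # number of polarization levels (points on the Stokes sphere)

    def keystream(seed, n):
        """Both ends regenerate the same pseudorandom sequence from a shared seed."""
        rng = random.Random(seed)
        return [rng.randrange(M) for _ in range(n)]

    def encode(symbols, key):
        """Obfuscate each M-level symbol with a modulo shift."""
        return [(s + k) % M for s, k in zip(symbols, key)]

    def decode(coded, key):
        """Receiver reverses the shift with the regenerated key stream."""
        return [(c - k) % M for c, k in zip(coded, key)]

    msg = [0, 3, 7, 2, 5, 1]
    key = keystream(42, len(msg))
    coded = encode(msg, key)
    assert decode(coded, keystream(42, len(msg))) == msg  # message recovered
    ```

    Because the scrambling happens symbol by symbol at the physical layer, no extra bandwidth is consumed, which matches the scheme's stated goal.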

  11. Neutron activation: an invaluable technique for teaching applied radiation

    This experiment introduces students to the important method of neutron activation. A sample of aluminium was irradiated with neutrons from an isotropic 241Am-Be source. Using γ-ray spectroscopy, two radionuclide products were identified as 27Mg and 28Al. Applying a cadmium cut-off filter and an optimum irradiation time of 45 min, the half-life of 27Mg was determined as 9.46±0.50 min. The half-life of the 28Al radionuclide was determined as 2.28±0.10 min using a polythene moderator and an optimum irradiation time of 10 min. (author)
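
    The half-life determination reduces to a straight-line fit of log counts against time, with T1/2 = ln 2 / λ; a minimal sketch on synthetic, noise-free decay data (not the authors' measurements):

    ```python
    import math

    def half_life(times, counts):
        """Least-squares slope of ln(counts) vs t gives -lambda; T_1/2 = ln2/lambda."""
        n = len(times)
        y = [math.log(c) for c in counts]
        tbar = sum(times) / n
        ybar = sum(y) / n
        slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y)) \
                / sum((t - tbar) ** 2 for t in times)
        return math.log(2) / -slope

    # synthetic 27Mg-like decay with T_1/2 = 9.46 min
    T = 9.46
    times = [0, 2, 4, 6, 8, 10]                      # minutes
    counts = [1000 * 2 ** (-t / T) for t in times]   # idealised count rates
    print(round(half_life(times, counts), 2))  # 9.46
    ```

    With real counting data the points scatter around the line, and the fit uncertainty gives the quoted ±0.50 min error bar.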

  12. Neutrongraphy technique applied to the narcotics and terrorism enforcement

    Among the several methods of non-destructive essays that may be used for the detection of both drugs and explosives, the ones that utilize nuclear techniques have demonstrated to possess essential qualities for an efficient detection system. These techniques allow the inspection of a large quantity of samples fast, sensibly, specifically and with automatic decision, for they utilize radiation of great power of penetration. This work aims to show the neutron radiography and computed tomography potentiality for the detection of the drugs and explosives even when they are concealed by heavy materials. In the radiographic essays with thermal neutrons, samples of powder cocaine and explosives were inspected, concealed by several materials or not. The samples were irradiated during 30 minutes in the J-9 channel of the Argonauta research reactor of the IEN/CNEN in a neutron flux of 2:5 105 n/cm2.s. We used two sheets of gadolinium converter with a thickness of 25 μm each one and a Kodak Industrex A5 photographic plaque. A comparative analysis among the tomographic images experimental and simulated obtained by X-ray, fast and thermal neutron is presented. The thermal neutron tomography demonstrate to be the best. (author)

  13. Nuclear Techniques Applied For Optimizing Irrigation In Vegetable Cultivation

    Optimizing irrigation in vegetable cultivation has been carried out based on the water use efficiency (WUE) parameter. The experiment was conducted with Chinese cabbage planted on alluvial soil, comparing traditional furrow irrigation with a drip irrigation technique whose schedule and amount of irrigated water were estimated from the water balance. Soil moisture and evapotranspiration (ET) of the crop were monitored using, respectively, a neutron probe (NP, model PB 205, FielTech, Japan) and a meteorological station installed in the field. Calibration of the NP was performed directly in the field based on the measurement of the count ratio (Rn) and the soil moisture determined gravimetrically. Productivity of the crop in each experiment was determined as the total biological yield (Ybio) and the edible yield (YE) harvested, and the WUE was estimated as the ratio of productivity to the amount of irrigated water, in units of kg·m⁻³. Experimental results showed that drip irrigation could save up to 10-19% of water compared to furrow irrigation, depending on the cultivation season. Thus, WUE was improved by up to 1.4 times, as estimated from either the YE or the Ybio productivity. The drip irrigation with scheduling technique could be transferred to semiarid areas in Vietnam, not only for vegetables but also for fruit, e.g. grapes in the southern central part of the country. (author)
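
    The WUE comparison is simply productivity divided by irrigated water volume; a sketch with invented numbers (not the paper's measured yields):

    ```python
    def wue(yield_kg, water_m3):
        """Water use efficiency: kg of produce per cubic metre of irrigation water."""
        return yield_kg / water_m3

    # hypothetical plot: same edible yield, drip uses ~15% less water
    furrow = wue(4200.0, 1500.0)
    drip = wue(4200.0, 1500.0 * 0.85)
    print(drip / furrow)  # ≈ 1.18: less water for the same yield raises WUE
    ```

    Saving 10-19% of water at comparable yield is exactly what drives the up-to-1.4× WUE improvement reported above once yield gains are included.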

  14. Applying Data Privacy Techniques on Tabular Data in Uganda

    Mivule, Kato

    2011-01-01

    The growth of Information Technology (IT) in Africa has led to an increase in the utilization of communication networks for data transactions across the continent. A growing number of entities in the private sector, academia, and government have deployed the Internet as a medium to transact in data, routinely posting statistical and non-statistical data online and thereby making many in Africa increasingly dependent on the Internet for data transactions. In Uganda, exponential growth in data transactions has presented a new challenge: what is the most efficient way to implement data privacy? This article discusses data privacy challenges faced by the country of Uganda and the implementation of data privacy techniques for published tabular data. We make the case for data privacy, survey concepts of data privacy, and discuss implementations that could be employed to provide data privacy in Uganda.
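
    One standard technique for the privacy of published tabular data is k-anonymity: generalize quasi-identifiers until every combination occurs at least k times. A toy sketch with invented records and generalization rules:

    ```python
    from collections import Counter

    def generalize(row):
        """Coarsen quasi-identifiers: exact age -> decade band, district -> region."""
        age, district = row
        decade = (age // 10) * 10
        return (f"{decade}-{decade + 9}", district[:1] + "*")

    def is_k_anonymous(rows, k):
        """True if every quasi-identifier combination occurs at least k times."""
        groups = Counter(rows)
        return all(c >= k for c in groups.values())

    raw = [(23, "Kampala"), (27, "Kira"), (24, "Kampala"),
           (31, "Gulu"), (38, "Gulu")]
    released = [generalize(r) for r in raw]
    print(is_k_anonymous(raw, 2), is_k_anonymous(released, 2))  # False True
    ```

    The trade-off the survey space grapples with is utility: coarser generalization protects respondents but degrades the statistical value of the published table.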

  15. Object Detection Techniques Applied on Mobile Robot Semantic Navigation

    Carlos Astua

    2014-04-01

    The future of robotics predicts that robots will integrate themselves more every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a big need for algorithms that provide robots with this sort of skill, from locating the objects needed to accomplish a task to treating those objects as information about the environment. This paper presents a way to provide mobile robots with the ability to detect objects for semantic navigation, using current trends in robotics in a way that can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor-based technique, and both are combined to overcome their respective limitations. Finally, the code is tested on a real robot to prove its accuracy and efficiency.

  16. Applying Business Process Modeling Techniques: Case Study

    Bartosz Marcinkowski

    2010-12-01

    Selection and proper application of business process modeling methods and techniques have a significant impact on organizational improvement capabilities as well as on proper understanding of the functionality of information systems that shall support the activity of the organization. A number of business process modeling notations have been implemented in practice in recent decades. The most significant of these notations include ARIS, Business Process Modeling Notation (OMG BPMN) and several Unified Modeling Language (OMG UML) extensions. In this paper, we assess whether one of the most flexible and strictly standardized contemporary business process modeling notations, i.e. the Rational UML Profile for Business Modeling, enables business analysts to prepare business models that are all-embracing and understandable by all the stakeholders. After the introduction, the methodology of research is discussed. The following section presents selected case study results. The paper is concluded with a summary.

  17. Signal Processing Techniques Applied in RFI Mitigation of Radio Astronomy

    Sixiu Wang

    2012-08-01

    Radio broadcast and telecommunications are present at different power levels everywhere on Earth. Radio Frequency Interference (RFI) substantially limits the sensitivity of existing radio telescopes in several frequency bands and may prove to be an even greater obstacle for the next generation of telescopes (or arrays) to overcome. A variety of RFI detection and mitigation techniques have been developed in recent years. This study describes various signal processing methods of RFI mitigation in radio astronomy and chooses time-frequency domain cancellation to eliminate certain interference and effectively improve the signal-to-noise ratio in pulsar observations. Finally, RFI mitigation research and implementations in Chinese radio astronomy are also presented.

  18. Innovative Visualization Techniques applied to a Flood Scenario

    Falcão, António; Ho, Quan; Lopes, Pedro; Malamud, Bruce D.; Ribeiro, Rita; Jern, Mikael

    2013-04-01

    The large and ever-increasing amounts of multi-dimensional, time-varying and geospatial digital information from multiple sources represent a major challenge for today's analysts. We present a set of visualization techniques that can be used for the interactive analysis of geo-referenced and time-sampled data sets, providing an integrated mechanism that aids the user to collaboratively explore, present and communicate visually complex and dynamic data. Here we present these concepts in the context of a 4-hour flood scenario from Lisbon in 2010, with data that includes measures of water column (flood height) every 10 minutes at a 4.5 m x 4.5 m resolution, topography, building damage, building information, and online base maps. Techniques we use include web-based linked views, multiple charts, map layers and storytelling. We explain in more detail two of these that are not currently in common use for data visualization: storytelling and web-based linked views. Visual storytelling is a method for providing a guided but interactive process of visualizing data, allowing more engaging data exploration through interactive web-enabled visualizations. Within storytelling, a snapshot mechanism helps the author of a story to highlight data views of particular interest and subsequently share or guide others within the data analysis process. This allows a particular person to select relevant attributes for a snapshot, such as highlighted regions for comparisons, time step, class values for colour legend, etc. and provide a snapshot of the current application state, which can then be provided as a hyperlink and recreated by someone else. Since data can be embedded within this snapshot, it is possible to interactively visualize and manipulate it.
The second technique, web-based linked views, uses multiple windows that respond interactively to user selections, so that when an object is selected and changed in one window, it will automatically update in all the others.
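
    The snapshot mechanism (capture the application state, share it as a hyperlink, recreate it elsewhere) can be sketched as state serialization into a URL fragment; a generic illustration, not the authors' implementation, with invented state fields:

    ```python
    import base64
    import json

    def snapshot_link(base_url, state):
        """Serialize the current view state into a shareable hyperlink."""
        payload = base64.urlsafe_b64encode(json.dumps(state).encode()).decode()
        return f"{base_url}#snapshot={payload}"

    def restore(link):
        """Recreate the application state from the link's fragment."""
        payload = link.split("#snapshot=", 1)[1]
        return json.loads(base64.urlsafe_b64decode(payload))

    # hypothetical flood-viewer state: time step, layer, highlighted location
    state = {"time_step": 14, "layer": "flood_height",
             "highlight": [38.72, -9.14], "colour_classes": 5}
    link = snapshot_link("https://example.org/floodviz", state)
    assert restore(link) == state  # someone else recreates the exact view
    ```

    Embedding the state (and optionally the data) in the link is what lets a snapshot be replayed and then manipulated interactively by the recipient.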

  19. Quantitative Portfolio Optimization Techniques Applied to the Brazilian Stock Market

    André Alves Portela Santos

    2012-09-01

    In this paper we assess the out-of-sample performance of two alternative quantitative portfolio optimization techniques - mean-variance and minimum-variance optimization - and compare their performance with respect to a naive 1/N (equally-weighted) portfolio and also to the market portfolio given by the Ibovespa. We focus on short-selling-constrained portfolios and consider alternative estimators for the covariance matrices: the sample covariance matrix, RiskMetrics, and three covariance estimators proposed by Ledoit and Wolf (2003), Ledoit and Wolf (2004a) and Ledoit and Wolf (2004b). Taking into account alternative portfolio re-balancing frequencies, we compute out-of-sample performance statistics which indicate that the quantitative approaches delivered improved results in terms of lower portfolio volatility and better risk-adjusted returns. Moreover, the use of more sophisticated estimators for the covariance matrix generated optimal portfolios with lower turnover over time.
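
    When only the budget constraint binds, the minimum-variance portfolio has a closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1); a NumPy sketch with a toy covariance matrix (the short-selling constraint studied in the paper is ignored here):

    ```python
    import numpy as np

    def min_variance_weights(cov):
        """Closed-form minimum-variance portfolio: w = inv(S) 1 / (1' inv(S) 1).
        Unconstrained except for the weights summing to one."""
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    # toy covariance: two uncorrelated assets, the second twice as variable
    cov = np.diag([1.0, 2.0])
    print(min_variance_weights(cov))  # ≈ [0.667, 0.333]: weight ~ 1/variance
    ```

    With a short-selling constraint the problem becomes a quadratic program, and the quality of the covariance estimator (sample, RiskMetrics, Ledoit-Wolf shrinkage) drives the out-of-sample results the paper compares.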

  20. Considerations in applying on-line IC techniques to BWR's

    Ion chromatography (IC) has moved from its traditional role as a laboratory analytical tool to a real-time, dynamic, on-line measurement device used to follow ppb and sub-ppb concentrations of deleterious impurities in nuclear power plants. The Electric Power Research Institute (EPRI), individual utilities, and industry all have played significant roles in effecting the transition. This paper highlights considerations and the evolution in current on-line ion chromatography systems. The first applications of on-line techniques were demonstrated by General Electric (GE) under EPRI sponsorship at the Rancho Seco (1980), Calvert Cliffs, and McGuire nuclear units. The primary use was for diagnostic purposes. Today on-line IC applications have been expanded to include process control and routine plant monitoring. Current on-line ICs are innovative in design, promote operational simplicity, are modular for simplified maintenance and repair, and use field-proven components which enhance reliability. Conductivity detection with electronic or chemical suppression and spectrometric detection techniques are intermixed in applications. Remote multi-point sample systems have addressed memory effects. Early applications measured ionic species in the part per billion range. Today reliable part per trillion measurements are common for on-line systems. Current systems are meeting the challenge of EPRI guideline requirements. Today's on-line ICs, with programmed sampling systems, monitor fluid streams throughout a power plant, supplying data that can be trended, stored and retrieved easily. The on-line IC has come of age. Many technical challenges were overcome to achieve today's IC

  1. Technology Assessment of Dust Suppression Techniques applied During Structural Demolition

    Boudreaux, J.F.; Ebadian, M.A.; Dua, S.K.

    1997-08-06

    Hanford, Fernald, Savannah River, and other sites are currently reviewing technologies that can be implemented to demolish buildings in a cost-effective manner. In order to demolish a structure and, at the same time, minimize the amount of dust generated by a given technology, an evaluation must be conducted to choose the most appropriate dust suppression technology. Thus, the purpose of this research, which was conducted by the Hemispheric Center for Environmental Technology (HCET) at Florida International University (FIU), was to perform an experimental study of dust aerosol abatement (dust suppression) methods as applied to nuclear D&D. This experimental study specifically targeted the problem of dust suppression during demolition. The resulting data were used in the development of mathematical correlations that can be applied to structural demolition. In Fiscal Year 1996 (FY96), the effectiveness of different dust suppressing agents was investigated for different types of concrete blocks. Initial tests were conducted in a broad particle size range. In Fiscal Year 1997 (FY97), additional tests were performed in the size range in which most of the particles were detected. Since particle distribution is an important parameter for predicting deposition in various compartments of the human respiratory tract, various tests were aimed at determining the particle size distribution of the airborne dust particles. The effectiveness of dust suppressing agents for particles of various sizes was studied. Instead of conducting experiments on various types of blocks, it was thought prudent to carry out additional tests on blocks of the same type. Several refinements were also incorporated in the test procedures and data acquisition system used in FY96.

  2. Optical Trapping Techniques Applied to the Study of Cell Membranes

    Morss, Andrew J.

    Optical tweezers allow for manipulating micron-sized objects using pN level optical forces. In this work, we use an optical trapping setup to aid in three separate experiments, all related to the physics of the cellular membrane. In the first experiment, in conjunction with Brian Henslee, we use optical tweezers to allow for precise positioning and control of cells in suspension to evaluate the cell size dependence of electroporation. Theory predicts that all cells porate at a transmembrane potential V_TM of roughly 1 V. The Schwann equation predicts that the transmembrane potential depends linearly on the cell radius r, thus predicting that cells should porate at threshold electric fields that go as 1/r. The threshold field required to induce poration is determined by applying a low voltage pulse to the cell and then applying additional pulses of greater and greater magnitude, checking for poration at each step using propidium iodide dye. We find that, contrary to expectations, cells do not porate at a constant value of the transmembrane potential but at a constant value of the electric field, which we find to be 692 V/cm for K562 cells. Delivering precise dosages of nanoparticles into cells is of importance for assessing toxicity of nanoparticles or for genetic research. In the second experiment, we conduct nano-electroporation—a novel method of applying precise doses of transfection agents to cells—by using optical tweezers in conjunction with a confocal microscope to manipulate cells into contact with 100 nm wide nanochannels. This work was done in collaboration with Pouyan Boukany of Dr. Lee's group. The small cross sectional area of these nanochannels means that the electric field within them is extremely large, 60 MV/m, which allows them to electrophoretically drive transfection agents into the cell. We find that nano-electroporation results in excellent dose control (to within 10% in our experiments) compared to bulk electroporation. We also find that
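The Schwann (Schwan) equation mentioned above gives the induced transmembrane potential as dV = 1.5 * r * E * cos(theta), so the predicted threshold field scales as 1/r. A small sketch of that prediction follows; the 8 um radius is an illustrative value, not a figure from the study.

```python
def schwan_threshold_field(radius_m, v_critical=1.0):
    """Threshold external field (V/m) for a cell of radius `radius_m`,
    from the Schwan equation dV = 1.5 * r * E * cos(theta),
    evaluated at the pole (cos(theta) = 1) with dV = v_critical."""
    return v_critical / (1.5 * radius_m)

# A K562-like cell of ~8 um radius (illustrative assumption):
e_vpm = schwan_threshold_field(8e-6)   # field in V/m
print(e_vpm / 100)                     # ~833 V/cm, vs. the measured 692 V/cm above
```

The mismatch between such 1/r predictions and the constant 692 V/cm threshold is exactly the discrepancy the abstract reports.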

  3. Beaconless adaptive-optics technique for HEL beam control

    Khizhnyak, Anatoliy; Markov, Vladimir

    2016-05-01

    Effective performance of forthcoming laser systems capable of power delivery to a distant target requires an adaptive optics system (AOS) to correct atmospheric perturbations of the laser beam. The turbulence-induced effects are responsible for beam wobbling, wandering, and intensity scintillation, resulting in degradation of the beam quality and power density on the target. Adaptive optics methods are used to compensate for these negative effects. In turn, operation of the AOS requires a reference wave, which can be generated by a beacon on the target. This report discusses a beaconless approach for wavefront correction whose performance is based on the detection of target-scattered light. Post-processing of the beacon-generated light field enables retrieval and detailed characterization of the turbulence-perturbed wavefront, data that are essential to control the adaptive optics module of a high-power laser system.

  4. Adaptive Remote-Sensing Techniques Implementing Swarms of Mobile Agents

    Cameron, S.M.; Loubriel, G.M.; Robinett, R.D. III; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1999-04-01

    This paper focuses on our recent work at Sandia National Laboratories toward engineering a physics-based swarm of mobile vehicles for distributed sensing applications. Our goal is to coordinate a sensor array that optimizes sensor coverage and multivariate signal analysis by implementing artificial intelligence and evolutionary computational techniques. These intelligent control systems integrate both globally operating decision-making systems and locally cooperative information-sharing modes using genetically-trained neural networks. Once trained, neural networks have the ability to enhance real-time operational responses to dynamical environments, such as obstacle avoidance, responding to prevailing wind patterns, and overcoming other natural obscurants or interferences (jammers). The swarm realizes a collective set of sensor neurons with simple properties incorporating interactions based on basic community rules (potential fields) and complex interconnecting functions based on various neural network architectures. Therefore, the swarm is capable of redundant heterogeneous measurements, which furnishes an additional degree of robustness and fault tolerance not afforded by conventional systems, while accomplishing such cognitive tasks as generalization, error correction, pattern recognition, and sensor fusion. The robotic platforms could be equipped with specialized sensor devices including transmit/receive dipole antennas, chemical or biological sniffers in combination with recognition analysis tools, communication modulators, and laser diodes. Our group has been studying the collective behavior of an autonomous, multi-agent system applied to emerging threat applications. To accomplish such tasks, research in the fields of robotics, sensor technology, and swarms is being conducted within an integrated program.
Mission scenarios under consideration include ground penetrating impulse radar (GPR) for detection of under-ground structures, airborne systems, and plume

  5. Adaptive Remote-Sensing Techniques Implementing Swarms of Mobile Agents

    Asher, R.B.; Cameron, S.M.; Loubriel, G.M.; Robinett, R.D.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-11-25

    In many situations, stand-off remote-sensing and hazard-interdiction techniques over realistic operational areas are often impractical and difficult to characterize. An alternative approach is to implement an adaptively deployable array of sensitive agent-specific devices. Our group has been studying the collective behavior of an autonomous, multi-agent system applied to chem/bio detection and related emerging threat applications. The current physics-based models we are using coordinate a sensor array for multivariate signal optimization and coverage as realized by a swarm of robots or mobile vehicles. These intelligent control systems integrate globally operating decision-making systems and locally cooperative learning neural networks to enhance real-time operational responses to dynamical environments, examples of which include obstacle avoidance, responding to prevailing wind patterns, and overcoming other natural obscurants or interferences. Collectively, sensor neurons with simple properties, interacting according to basic community rules, can accomplish complex interconnecting functions such as generalization, error correction, pattern recognition, sensor fusion, and localization. Neural nets provide a greater degree of robustness and fault tolerance than conventional systems in that minor variations or imperfections do not impair performance. The robotic platforms would be equipped with sensor devices that perform optical detection of biologicals in combination with multivariate chemical analysis tools based on genetic and neural network algorithms, laser-diode LIDAR analysis, ultra-wideband short-pulsed transmitting and receiving antennas, thermal imaging sensors, and optical communication technology providing robust data throughput pathways. Mission scenarios under consideration include ground penetrating radar (GPR) for detection of underground structures, airborne systems, and plume migration and mitigation. We will describe our

  6. Remote sensing techniques applied to seismic vulnerability assessment

    Juan Arranz, Jose; Torres, Yolanda; Hahgi, Azade; Gaspar-Escribano, Jorge

    2016-04-01

    Advances in remote sensing and photogrammetry techniques have increased the degree of accuracy and resolution in the record of the earth's surface. This has expanded the range of possible applications of these data. In this research, we have used these data to document the construction characteristics of the urban environment of Lorca, Spain. An exposure database has been created with the gathered information to be used in seismic vulnerability assessment. To this end, we have used data from photogrammetric flights at different periods, using orthorectified images in both the visible and infrared spectra. Furthermore, the analysis is completed using LiDAR data. From the combination of these data, it has been possible to delineate the building footprints and characterize the constructions with attributes such as the approximate date of construction, area, type of roof and even building materials. To carry out the calculation, we have developed different algorithms to compare images from different times, segment images, classify LiDAR data, and use the infrared data in order to remove vegetation or to compute roof surfaces with height value, tilt and spectral fingerprint. In addition, the accuracy of our results has been validated with ground truth data. Keywords: LiDAR, remote sensing, seismic vulnerability, Lorca
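Using the infrared band to remove vegetation, as described above, is commonly done with an NDVI mask. The sketch below illustrates that standard approach; the 0.3 threshold and the toy band values are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and
    red bands; values near +1 indicate vegetation."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

def vegetation_mask(nir, red, threshold=0.3):
    # Pixels above the (assumed) threshold are flagged as vegetation
    # and can be excluded before delineating building footprints.
    return ndvi(nir, red) > threshold

# Two vegetated pixels (high NIR reflectance) next to two built-up ones:
nir = np.array([[200, 50], [180, 60]])
red = np.array([[ 40, 45], [ 50, 55]])
print(vegetation_mask(nir, red))   # [[ True False], [ True False]]
```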

  7. Digital prototyping technique applied for redesigning plastic products

    Pop, A.; Andrei, A.

    2015-11-01

    After products are on the market for some time, they often need to be redesigned to meet new market requirements. New products are generally derived from similar but outdated products. Redesigning a product is an important part of the production and development process. The purpose of this paper is to show that using modern technology, like Digital Prototyping, in industry is an effective way to produce new products. This paper tries to demonstrate and highlight the effectiveness of the concept of Digital Prototyping, both in reducing the design time of a new product and the costs required for implementing this step. The results of this paper show that using Digital Prototyping techniques to design a new product from an existing one available on the market offers a significant reduction in manufacturing time and cost. The ability to simulate and test a new product with modern CAD-CAM programs in all aspects of production (design of the 3D model, simulation of the structural resistance, analysis of the injection process and beautification) offers a helpful tool for engineers. The whole process can be carried out by one skilled engineer quickly and effectively.

  8. Applying Website Usability Testing Techniques to Promote E-services

    Abdel Nasser H. Zaied

    2015-09-01

    Full Text Available In this competitive world, websites are considered to be a key aspect of any organization's competitiveness. In addition to visual esthetics, usability of a website is a strong determinant of user satisfaction and pleasure. However, lack of appropriate techniques and attributes for measuring usability may constrain the usefulness of a website. To address this issue, we conduct a statistical study to evaluate the usability levels of e-learning and e-training websites based on human (user) perception. The questionnaire is implemented as a user-based tool; visitors of a website can use it to evaluate the usability of the websites. The results showed that, from the students' point of view, personalization is the most important criterion for the use of e-learning websites, while from the experts' point of view accessibility is the most important criterion. The results also indicated that the experienced respondents expressed satisfaction with the usability attributes of the e-learning websites they accessed for their learning purposes, while inexperienced students expressed their perception of the importance of the usability attributes for accessing e-learning websites. When combining and comparing both findings, it is evident that all the attributes yielded satisfaction and were felt to be important.

  9. Sterile insect technique applied to Queensland fruit fly

    The Sterile Insect Technique (SIT) aims to suppress or eradicate pest populations by flooding wild populations with sterile males. To control fruit fly, millions of flies of both sexes are mass-reared at the Gosford Post-Harvest laboratory near Sydney, mixed with sawdust and fluorescent dye at the pupal stage and transported to ANSTO, where they are exposed to a low dose of 70-75 Gy of gamma radiation from a Cobalt-60 source. Following irradiation the pupae are transported to the release site in plastic sleeves, then transferred to large plastic garbage bins for hatching. These bins are held at 30 deg. C to synchronise hatching, and flies are released 48-72 hours after hatching begins. In most cases these bins are placed among fruit trees in the form of an 800 metre grid. This maximises survival of the emerging flies, which are released on an almost daily basis. Progress of the SIT program is monitored by collecting flies from traps dotted all over the infested site. The ratio of sterile to wild flies can be determined because the sterile flies are coated with the fluorescent dust, which can be seen under ultraviolet light. If the SIT program is successful, entomologists will trap a high proportion of sterile flies to wild flies, and this should result in a clear reduction in maggot infestations. Surveillance, quarantine, and trapping activities continue for 8 or 9 months to check for any surviving pockets of infestation. If any are found, the SIT program is reactivated. These programs demonstrated that SIT is an efficient and environmentally friendly non-chemical control method for eradicating outbreaks or suppressing fruit fly populations in important fruit growing areas.

  10. Semantic Data And Visualization Techniques Applied To Geologic Field Mapping

    Houser, P. I. Q.; Royo-Leon, M.; Munoz, R.; Estrada, E.; Villanueva-Rosales, N.; Pennington, D. D.

    2015-12-01

    Geologic field mapping involves the use of technology before, during, and after visiting a site. Geologists utilize hardware such as Global Positioning Systems (GPS) connected to mobile computing platforms such as tablets that include software such as ESRI's ArcPad and other software to produce maps and figures for a final analysis and report. Hand-written field notes contain important information and drawings or sketches of specific areas within the field study. Our goal is to collect and geo-tag final and raw field data into a cyber-infrastructure environment with an ontology that allows for large data processing, visualization, sharing, and searching, aiding in connecting field research with prior research in the same area and/or aiding with experiment replication. Online searches of a specific field area return results such as weather data from NOAA and QuakeML seismic data from USGS. These results can then be saved to a field mobile device and searched while in the field where there is no Internet connection. To accomplish this we created the GeoField ontology service using the Web Ontology Language (OWL) and Protégé software. Advanced queries on the dataset can be made using reasoning capabilities that go beyond those of a standard database service. These improvements include the automated discovery of data relevant to a specific field site and visualization techniques aimed at enhancing analysis and collaboration while in the field by draping data over mobile views of the site using augmented reality. A case study is being performed at the University of Texas at El Paso's Indio Mountains Research Station located near Van Horn, Texas, an active multi-disciplinary field study site. The user can interactively move the camera around the study site and view their data digitally. Geologists can check their data against the site in real-time and improve collaboration with another person, as both parties have the same interactive view of the data.

  11. Applying data mining techniques to improve diagnosis in neonatal jaundice

    Ferreira Duarte

    2012-12-01

    Full Text Available Abstract Background Hyperbilirubinemia is emerging as an increasingly common problem in newborns due to a decreasing hospital length of stay after birth. Jaundice is the most common disease of the newborn and, although benign in most cases, it can lead to severe neurological consequences if poorly evaluated. In different areas of medicine, data mining has contributed to improving the results obtained with other methodologies. Hence, the aim of this study was to improve the diagnosis of neonatal jaundice with the application of data mining techniques. Methods This study followed the different phases of the Cross Industry Standard Process for Data Mining model as its methodology. This observational study was performed at the Obstetrics Department of a central hospital (Centro Hospitalar Tâmega e Sousa – EPE), from February to March of 2011. A total of 227 healthy newborn infants with 35 or more weeks of gestation were enrolled in the study. Over 70 variables were collected and analyzed. Also, transcutaneous bilirubin levels were measured from birth to hospital discharge with maximum time intervals of 8 hours between measurements, using a noninvasive bilirubinometer. Different attribute subsets were used to train and test classification models using algorithms included in the Weka data mining software, such as decision trees (J48) and neural networks (multilayer perceptron). The accuracy results were compared with the traditional methods for prediction of hyperbilirubinemia. Results The application of different classification algorithms to the collected data allowed predicting subsequent hyperbilirubinemia with high accuracy. In particular, at 24 hours of life of newborns, the accuracy for the prediction of hyperbilirubinemia was 89%. The best results were obtained using the following algorithms: naive Bayes, multilayer perceptron and simple logistic. Conclusions The findings of our study indicate that new approaches, such as data mining, may support
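A decision-tree classifier like the J48 trees used in the study works by choosing the attribute and threshold that best separate the classes. The toy one-split "stump" below illustrates that selection step on synthetic data; it is not the study's model, data, or variables.

```python
import numpy as np

def fit_stump(X, y):
    """Tiny one-split decision tree ('stump'): scan every feature and
    threshold and keep the split with the fewest training errors.
    A toy stand-in for the J48 decision trees used in the study."""
    best = (0, 0.0, 0, 1, 1.0)           # feature, threshold, left/right labels, error
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            for ll, rl in ((0, 1), (1, 0)):
                err = (np.sum(left != ll) + np.sum(right != rl)) / len(y)
                if err < best[4]:
                    best = (j, t, ll, rl, err)
    return best

def predict_stump(stump, X):
    j, t, ll, rl, _ = stump
    return np.where(X[:, j] <= t, ll, rl)

# Synthetic stand-in for the study's clinical variables -- illustrative only.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 1] > 0.2).astype(int)          # hidden rule on feature 1
acc = np.mean(predict_stump(fit_stump(X, y), X) == y)
print(f"training accuracy: {acc:.2f}")
```

Real tools such as Weka's J48 recurse this split selection to build a full tree and then prune it.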

  12. A hybrid adaptive large neighborhood search algorithm applied to a lot-sizing problem

    Muller, Laurent Flindt; Spoorendonk, Simon

    2010-01-01

    This paper presents a hybrid of a general heuristic framework that has been successfully applied to vehicle routing problems and a general purpose MIP solver. The framework uses local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed integer programming solver and its built-in feasibility heuristics is used to search a neighborhood for improving solutions. The general reoptimization approach used for repairing solutions is specifically ...

  13. Correction of respiratory motion for IMRT using aperture adaptive technique and visual guidance: A feasibility study

    Intensity-modulated radiation therapy (IMRT) utilizes nonuniform beam profiles to deliver precise radiation doses to a tumor while minimizing radiation exposure to surrounding normal tissues. However, intrafraction organ motion distorts the dose distribution and leads to significant dosimetric errors. In this research, we applied an aperture adaptive technique with a visual guiding system to tackle the problem of respiratory motion. A homemade computer program showing a cyclic moving pattern was projected onto the ceiling to visually help patients adjust their respiratory patterns. Once the respiratory motion becomes regular, the leaf sequence can be synchronized with the target motion. An oscillator was employed to simulate the patient's breathing pattern. Two simple fields and one IMRT field were measured to verify the accuracy. Preliminary results showed that, after appropriate training, the amplitude and duration of the volunteer's breathing could be well controlled with the visual guiding system. The sharp dose gradient at the edge of the radiation fields was successfully restored. The maximum dosimetric error in the IMRT field was significantly decreased from 63% to 3%. We conclude that the aperture adaptive technique with the visual guiding system can be an inexpensive and feasible alternative that does not compromise delivery efficiency in clinical practice.

  14. Adaptive Communication Techniques for the Internet of Things

    Peng Du; George Roussos

    2013-01-01

    The vision for the Internet of Things (IoT) demands that material objects acquire communications and computation capabilities and become able to automatically identify themselves through standard protocols and open systems, using the Internet as their foundation. Yet, several challenges still must be addressed for this vision to become a reality. A core ingredient in such development is the ability of heterogeneous devices to communicate adaptively so as to make the best of limited spectrum a...

  15. Techniques for valuing adaptive capacity in flood risk management

    Brisley, Rachel; Wylde, Richard; Lamb, Rob; Cooper, Jonathan; Sayers, Paul; Hall, Jim

    2015-01-01

    Flood and coastal erosion risk management has always faced the challenge of decision making in the face of multiple uncertainties relating to the climate, the economy and society. Traditionally, this has been addressed by adopting a precautionary approach that seeks to protect against a reasonable worst case. However, a managed adaptive approach can offer advantages. The benefits include improved resilience to negative changes, enabling opportunities from positive changes and greater cost eff...

  16. Interesting Metrics Based Adaptive Prediction Technique for Knowledge Discovery

    G. Anbukkarasy; N. Sairam

    2013-01-01

    Prediction is considered an important factor for predicting future results from existing information. Decision tree methodology is widely used for prediction, but it is not efficient at handling large, heterogeneous or multi-featured data sources. Hence, an adaptive prediction method is proposed by combining the statistical analysis approach of data mining methods with the decision tree prediction methodology. When dealing with large and multi-server ba...

  17. Experimental Investigation on Adaptive Robust Controller Designs Applied to Constrained Manipulators

    Marco H. Terra

    2013-04-01

    Full Text Available In this paper, two interlaced studies are presented. The first is directed to the design and construction of a dynamic 3D force/moment sensor. The device is applied to provide a feedback signal of forces and moments exerted by the robotic end-effector. This development has become an alternative solution to the existing multi-axis load cell based on static force and moment sensors. The second one shows an experimental investigation on the performance of four different adaptive nonlinear H∞ control methods applied to a constrained manipulator subject to uncertainties in the model and external disturbances. Coordinated position and force control is evaluated. Adaptive procedures are based on neural networks and fuzzy systems applied in two different modeling strategies. The first modeling strategy requires a well-known nominal model for the robot, so that the intelligent systems are applied only to estimate the effects of uncertainties, unmodeled dynamics and external disturbances. The second strategy considers that the robot model is completely unknown and, therefore, intelligent systems are used to estimate these dynamics. A comparative study is conducted based on experimental implementations performed with an actual planar manipulator and with the dynamic force sensor developed for this purpose.

  18. Learning Rate Updating Methods Applied to Adaptive Fuzzy Equalizers for Broadband Power Line Communications

    Ribeiro Moisés V

    2004-01-01

    Full Text Available This paper introduces adaptive fuzzy equalizers with variable step size for broadband power line (PL) communications. Based on delta-bar-delta and local Lipschitz estimation updating rules, feedforward, and decision feedback approaches, we propose singleton and nonsingleton fuzzy equalizers with variable step size to cope with the intersymbol interference (ISI) effects of PL channels and the hardness of the impulse noises generated by appliances and nonlinear loads connected to low-voltage power grids. The computed results show that the convergence rates of the proposed equalizers are higher than the ones attained by the traditional adaptive fuzzy equalizers introduced by J. M. Mendel and his students. Additionally, some interesting BER curves reveal that the proposed techniques are efficient for mitigating the above-mentioned impairments.
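The delta-bar-delta rule cited above adapts a separate step size per parameter: the step grows additively while the current gradient agrees in sign with an exponential average of past gradients, and shrinks multiplicatively on a sign flip. The sketch below is the generic rule (Jacobs' formulation) applied to a toy quadratic, not the paper's fuzzy equalizer; all constants are illustrative.

```python
import numpy as np

def delta_bar_delta_step(w, grad, lr, bar, kappa=0.01, phi=0.1, theta=0.7):
    """One delta-bar-delta update: grow a parameter's step size
    additively while its gradient keeps the same sign as the running
    average `bar`, shrink it multiplicatively on a sign flip."""
    lr = np.where(grad * bar > 0, lr + kappa, lr)       # signs agree: grow
    lr = np.where(grad * bar < 0, lr * (1 - phi), lr)   # signs differ: shrink
    bar = (1 - theta) * grad + theta * bar              # running gradient average
    return w - lr * grad, lr, bar

# Minimize f(w) = 0.5 * w^2 per coordinate; the gradient is w itself.
w = np.array([1.0, -2.0])
lr = np.full(2, 0.05)
bar = np.zeros(2)
for _ in range(100):
    w, lr, bar = delta_bar_delta_step(w, w.copy(), lr, bar)
print(np.abs(w).max() < 1e-3)   # -> True
```

In the paper's setting the same per-weight step-size adaptation drives the fuzzy equalizer coefficients, which is what speeds up convergence.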

  19. Adapted strategic planning model applied to small business: a case study in the fitness area

    Eduarda Tirelli Hennig

    2012-06-01

    Full Text Available Strategic planning is an important management tool in the corporate world and should not be restricted to big companies. However, this kind of planning process in small businesses may need special adaptations due to their own characteristics. This paper aims to identify and adapt existing models of strategic planning to the scenario of a small business in the fitness area. Initially, a comparative study of models from different authors is carried out to identify their phases and activities. Then, it is defined which of these phases and activities should be present in a model to be used in a small business. That model was applied to a Pilates studio; it involves the establishment of an organizational identity, an environmental analysis, as well as the definition of strategic goals, strategies and actions to reach them. Finally, benefits to the organization could be identified, as well as hurdles in the implementation of the tool.

  20. Fast Spectral Velocity Estimation Using Adaptive Techniques: In-Vivo Results

    Gran, Fredrik; Jakobsson, Andreas; Udesen, Jesper; Jensen, Jørgen Arendt

    Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window (OW) is very short. In this paper two adaptive techniques are tested and compared to the averaged periodogram (Welch) for blood velocity estimation. The Blood Power spectral Capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The Blood Amplitude and Phase Estimation technique (BAPES) is based on finding a set of matched filters (one for each velocity component of interest) and filtering the blood process over slow-time and averaging over depth to find the power spectral density estimate. In this paper, the two adaptive methods are explained, and performance is assessed in controlled steady-flow experiments and in-vivo measurements. The three methods were tested on a circulating flow
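The minimum-variance (Capon) idea behind the BPC method can be sketched for a plain 1-D signal; the depth and slow-time averaging specific to the ultrasound setting are omitted, and every parameter choice below (model order, loading, frequency grid) is an illustrative assumption.

```python
import numpy as np

def capon_psd(x, order=8, freqs=None):
    """Minimum-variance (Capon) power spectral estimate of a 1-D signal:
    PSD(f) = 1 / (a(f)^H R^{-1} a(f)), with R the sample covariance of
    short snapshots and a(f) the steering vector."""
    if freqs is None:
        freqs = np.linspace(0, 0.5, 128)        # normalized frequency grid
    # sample covariance of overlapping length-`order` snapshots
    snaps = np.array([x[i:i + order] for i in range(len(x) - order)])
    R = snaps.T @ snaps / len(snaps)
    R += 1e-6 * np.trace(R) / order * np.eye(order)  # diagonal loading
    Rinv = np.linalg.inv(R)
    psd = []
    for f in freqs:
        a = np.exp(-2j * np.pi * f * np.arange(order))
        psd.append(1.0 / np.real(a.conj() @ Rinv @ a))
    return freqs, np.array(psd)

# A noisy sinusoid at normalized frequency 0.12:
t = np.arange(512)
x = np.sin(2 * np.pi * 0.12 * t) + 0.1 * np.random.default_rng(3).normal(size=512)
freqs, psd = capon_psd(x)
print(freqs[np.argmax(psd)])   # peak near 0.12
```

The short `order` is what lets Capon-type methods keep resolution with a small observation window, which is the point the abstract makes against the Welch periodogram.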

  1. Improving Skill Development: An Exploratory Study Comparing a Philosophical and an Applied Ethical Analysis Technique

    Al-Saggaf, Yeslam; Burmeister, Oliver K.

    2012-01-01

    This exploratory study compares and contrasts two types of critical thinking techniques; one is a philosophical and the other an applied ethical analysis technique. The two techniques analyse an ethically challenging situation involving ICT that a recent media article raised to demonstrate their ability to develop the ethical analysis skills of…

  2. A hybrid adaptive large neighborhood search algorithm applied to a lot-sizing problem

    Muller, Laurent Flindt; Spoorendonk, Simon

    This paper presents a hybrid of a general heuristic framework that has been successfully applied to vehicle routing problems and a general purpose MIP solver. The framework uses local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed integer programming solver and its built-in feasibility heuristics is used to search a neighborhood for improving solutions. The general reoptimization approach used for repairing solutions is specifically suited for combinatorial problems where it may be hard to otherwise design operations to define a neighborhood

  3. An adaptive range-query optimization technique with distributed replicas

    Sayar Ahmet; Pierce Marlon; Fox C.Geoffrey

    2014-01-01

    Replication is an approach often used to speed up the execution of queries submitted to a large dataset. A compile-time/run-time approach is presented for minimizing the response time of 2-dimensional range queries when distributed replicas of a dataset exist. The aim is to partition the query payload (and its range) into subsets and distribute those to the replica nodes in a way that minimizes a client's response time. However, since query size and the distribution characteristics of the data (dense/sparse regions) in varying ranges are not known a priori, performing efficient load balancing and parallel processing over the unpredictable workload is difficult. A technique based on the creation and manipulation of dynamic spatial indexes for query payload estimation in distributed queries is proposed. The effectiveness of this technique is demonstrated on queries for the analysis of archived earthquake-generated seismic data records.
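The payload-estimation idea above can be illustrated in one dimension: a histogram plays the role of the spatial index, and sub-range boundaries are placed at its approximate quantiles so each replica receives a similar number of records. Everything below (bin count, data shape, replica count) is a simplified assumption, not the paper's algorithm.

```python
import numpy as np

def split_range_by_payload(points, lo, hi, n_replicas):
    """Split [lo, hi) into n_replicas sub-ranges carrying roughly equal
    numbers of points, using a histogram as a crude 'spatial index'
    for payload estimation."""
    counts, edges = np.histogram(points, bins=256, range=(lo, hi))
    cum = np.cumsum(counts) / counts.sum()
    inner = [edges[np.searchsorted(cum, q)]
             for q in np.linspace(0, 1, n_replicas + 1)[1:-1]]
    cuts = [lo] + inner + [hi]
    return list(zip(cuts[:-1], cuts[1:]))

# A dense cluster plus a sparse background, mimicking dense/sparse regions:
rng = np.random.default_rng(7)
pts = np.concatenate([rng.normal(0.2, 0.05, 8000),
                      rng.uniform(0, 1, 2000)])
for a, b in split_range_by_payload(pts, 0.0, 1.0, 4):
    print(f"[{a:.3f}, {b:.3f})")
```

Note how the sub-ranges come out narrow inside the dense cluster and wide over the sparse tail, which is exactly the load-balancing behavior a uniform split of the range would miss.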

  4. Adaptive Ant Colony Clustering Method Applied to Finding Closely Communicating Community

    Yan Liu

    2012-02-01

    Full Text Available The investigation of community structures in networks is an important issue in many domains and disciplines. A closely communicating community differs from the traditional notion of community, which emphasizes structure or context. Our previous method placed more emphasis on the feasibility of applying an ant colony algorithm to community detection; however, the essence of a closely communicating community was not clearly described. In this paper, the definition of a closely communicating community is first put forward, its four features are described, and corresponding methods are introduced to compute the feature values between each pair. Meanwhile, pair propinquity and local propinquity are put forward and used to guide the ants' decisions. Based on the previous work, the closely communicating community detection method is improved in four aspects of adaptive adjusting: entropy-based weight modulation, combining historical paths and random wandering to select the next coordinate, a strategy of forced unloading, and adaptive change of the ants' eyesight. The selection of parameter values is discussed in the experiments section, and the results also reveal the improvement of our algorithm in adaptive adjusting.

  5. Applying Web Usability Techniques to Assess Student Awareness of Library Web Resources

    Krueger, Janice; Ray, Ron L.; Knight, Lorrie

    2004-01-01

    The authors adapted Web usability techniques to assess student awareness of their library's Web site. Students performed search tasks using a Web browser. Approaches were categorized according to a student's preference for, and success with, the library's Web resources. Forty-five percent of the students utilized the library's Web site as first…

  6. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques

    Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E.; Lo, Yeh-Chi

    2016-04-01

    We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, the CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, from which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals could be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared with the AS-based signals. The average error for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm was as low as -0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work provides a practical solution for markerless breathing signal extraction from CBCT projections for thoracic and abdominal patients.
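    The local-enhancement step can be illustrated with a sliding-window z-normalization of a 1-D profile: each sample is centred and scaled by the statistics of its neighbourhood, which amplifies weak local oscillations. The window size and the guard against flat segments are illustrative choices, not the authors' exact filter:

```python
import statistics

def z_normalize(signal, window):
    """Locally z-normalize a 1-D signal with a sliding window centred
    on each sample (illustrative sketch of adaptive z-normalization)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        seg = signal[max(0, i - half): i + half + 1]
        mu = sum(seg) / len(seg)
        sd = statistics.pstdev(seg) or 1.0   # avoid dividing by zero
        out.append((signal[i] - mu) / sd)
    return out
```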

  7. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques.

    Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi

    2016-04-21

    We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, the CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, from which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals could be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared with the AS-based signals. The average error for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm was as low as -0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work provides a practical solution for markerless breathing signal extraction from CBCT projections for thoracic and abdominal patients. PMID:27008349

  8. An adaptive laser beam shaping technique based on a genetic algorithm

    Ping Yang; Yuan Liu; Wei Yang; Minwu Ao; Shijie Hu; Bing Xu; Wenhan Jiang

    2007-01-01

    A new adaptive beam intensity shaping technique based on the combination of a 19-element piezoelectric deformable mirror (DM) and a global genetic algorithm is presented. This technique can adaptively adjust the voltages of the 19 actuators on the DM to reduce the difference between the target beam shape and the actual beam shape. Numerical simulations and experimental results show that, within the stroke range of the DM, this technique can be used to create given beam intensity profiles on the focal plane.
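    A toy version of this closed loop, with a hypothetical identity "influence function" standing in for the 19-actuator DM response and all GA parameters chosen purely for illustration, might look like:

```python
import random

random.seed(1)

N_ACT = 5                               # stand-in for the 19 actuators
TARGET = [1.0, 2.0, 3.0, 2.0, 1.0]      # desired beam intensity profile

def beam_shape(volts):
    # Hypothetical actuator response: identity mapping for simplicity.
    return volts

def fitness(volts):
    # Negative sum of squared differences from the target profile.
    return -sum((b - t) ** 2 for b, t in zip(beam_shape(volts), TARGET))

def evolve(pop_size=40, gens=300, sigma=0.1):
    pop = [[random.uniform(0, 4) for _ in range(N_ACT)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 4]          # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)   # averaging crossover
            children.append([(x + y) / 2 + random.gauss(0, sigma)
                             for x, y in zip(a, b)])
        pop = elite + children
    return max(pop, key=fitness)
```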

  9. Adaptive Input-Output Linearization Technique for Robust Speed Control of Brushless DC Motor

    Kim, Kyeong Hwa; Baik, In Cheol; Kim, Hyun Soo; Youn, Myung Joong [Korea Advance Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-06-01

    An adaptive input-output linearization technique for robust speed control of a brushless DC (BLDC) motor is presented. Using this technique, the nonlinear motor model can be effectively linearized into Brunovsky canonical form, and the desired speed dynamics can be obtained based on the linearized model. This control technique, however, gives undesirable output performance under mismatches of the system parameters and load conditions caused by the incomplete linearization. For a robust output response, the controller parameters are estimated by a model reference adaptive technique in which the disturbance torque and flux linkage are estimated. The adaptation laws are derived from Popov's hyperstability theory and the positivity concept. The proposed control scheme is implemented on a BLDC motor using DSP TMS320C30 software, and its effectiveness is verified through comparative simulations and experiments. (author). 14 refs., 12 figs., 1 tab.

  10. (Costing) The adaptation of product cost estimation techniques to estimate the cost of service.

    Huang, Estelle; Newnes, Linda B; Parry, Glenn

    2011-01-01

    This paper presents an approach to ascertain whether product cost estimating techniques can be adapted for use in estimating the costs of providing a service. The research methodology adopted consists of a critique and analysis of the literature to ascertain how current cost estimation techniques are used. The analysis of the cost estimation techniques provides knowledge of cost estimation, in particular for products and services, with advantages and drawbacks defined. Th...

  11. Applying Analytical Derivative and Sparse Matrix Techniques to Large-Scale Process Optimization Problems

    2000-01-01

    The performance of analytical derivative and sparse matrix techniques applied to a traditional dense sequential quadratic programming (SQP) is studied, and the strategy utilizing those techniques is also presented. Computational results on two typical chemical optimization problems demonstrate significant enhancement in efficiency, which shows this strategy is promising and suitable for large-scale process optimization problems.

  12. Applying Analytical Derivative and Sparse Matrix Techniques to Large-Scale Process Optimization Problems

    钟卫涛; 邵之江; 张余岳; 钱积新

    2000-01-01

    The performance of analytical derivative and sparse matrix techniques applied to a traditional dense sequential quadratic programming (SQP) is studied, and the strategy utilizing those techniques is also presented.Computational results on two typical chemical optimization problems demonstrate significant enhancement in efficiency, which shows this strategy is promising and suitable for large-scale process optimization problems.

  13. Adaptive and model-based control theory applied to convectively unstable flows

    Fabbiane, N; Bagheri, S; Henningson, D S

    2014-01-01

    Research on active control for the delay of laminar-turbulent transition in boundary layers has made significant progress in the last two decades, but the employed strategies have been many and dispersed. Using one framework, we review model-based techniques, such as linear-quadratic regulators, and model-free adaptive methods, such as least-mean-square filters. The former are supported by an elegant and powerful theoretical basis, whereas the latter may provide a more practical approach in the presence of complex disturbance environments that are difficult to model. We compare the methods with a particular focus on efficiency, practicability and robustness to uncertainties. Each step is exemplified on the one-dimensional linearized Kuramoto-Sivashinsky equation, which shows many similarities with the initial linear stages of the transition process of the flow over a flat plate. The source code for the examples is also provided.
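    A minimal least-mean-square filter of the kind reviewed can be sketched as follows; here it identifies an unknown FIR response from input/output samples, with the tap count and step size chosen for illustration:

```python
def lms_identify(x, d, n_taps=3, mu=0.05):
    """Adapt FIR weights w so that filtering x approximates d (LMS rule)."""
    w = [0.0] * n_taps
    for k in range(n_taps - 1, len(x)):
        taps = x[k - n_taps + 1: k + 1][::-1]   # x[k], x[k-1], ...
        y = sum(wi * xi for wi, xi in zip(w, taps))
        e = d[k] - y                            # instantaneous error
        w = [wi + mu * e * xi for wi, xi in zip(w, taps)]
    return w
```

In a noise-free setting the weights converge to the true response; in control use, the same gradient rule adapts a compensator online without a plant model.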

  14. Applying BI Techniques To Improve Decision Making And Provide Knowledge Based Management

    Alexandra Maria Ioana FLOREA

    2015-07-01

    Full Text Available The paper focuses on BI techniques, and especially data mining algorithms, that can support and improve the decision making process, with applications within the financial sector. We consider data mining techniques to be the more efficient approach and thus we applied several supervised and unsupervised learning algorithms. The case study in which these algorithms have been implemented regards the activity of a banking institution, with a focus on the management of lending activities.

  15. Comparison of nerve stimulator and ultrasonography as the techniques applied for brachial plexus anesthesia

    2011-01-01

    Background Brachial plexus block is useful for upper extremity surgery, and many techniques are available. The aim of our study was to compare the efficacy of axillary brachial plexus block using an ultrasound technique with that of the peripheral nerve stimulation technique. Methods 60 patients scheduled for surgery of the forearm or hand were randomly allocated into two groups (n = 30 per group). Ultrasound (US) guidance was applied for Group 1 and peripheral nerve stimulation (PNS) for Group 2. The quality and the onset of the sensorial and motor bl...

  16. Adaptive Pointing: Design and Evaluation of a Precision Enhancing Technique for Absolute Pointing Devices

    König, Werner A.; Gerken, Jens; Dierdorf, Stefan; Reiterer, Harald

    2009-01-01

    We present Adaptive Pointing, a novel approach to addressing the common problem of accuracy when using absolute pointing devices for distant interaction. First, we extensively discuss related work concerning the problem domain of pointing accuracy when using absolute or relative pointing devices. As a result, we introduce a novel classification scheme to more clearly discriminate between different approaches. Second, the Adaptive Pointing technique is presented and described in detail. ...
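    The core idea can be sketched as a speed-dependent gain that trusts relative, damped movement at low speeds (for precision) and approaches pure absolute pointing at high speeds (to limit offset drift). The blending function and all constants below are illustrative, not the paper's:

```python
def adaptive_pointer(positions, v_low=5.0, v_high=50.0):
    """Map raw absolute positions (1-D, in pixels per frame) to cursor
    positions with a speed-dependent gain blending relative and
    absolute behaviour. v_low/v_high bound the blending region."""
    out = [positions[0]]
    for prev, cur in zip(positions, positions[1:]):
        speed = abs(cur - prev)
        w = min(max((speed - v_low) / (v_high - v_low), 0.0), 1.0)
        gain = 0.3 + 0.7 * w        # low gain when slow -> finer control
        out.append(out[-1] + gain * (cur - prev))
    return out
```

Fast strokes are transferred one-to-one, so the cursor stays anchored to the device direction; slow corrective movements are scaled down, enlarging the motor space for the final precise adjustment.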

  17. Adaptive Readout Technique For A Sixteen Channel Peak Sensing ADC In the FERA Format

    An adaptive, variable block-size readout technique for use with multiple sixteen-channel CAMAC ADCs with a FERA-bus readout has been designed and developed. It can be used to read data from experiments with or without coincidence requirements (i.e., singles) without having to change the readout protocol. Details of the implementation are discussed and initial results are presented. Further applications of the adaptive readout are also discussed.

  18. Comparison of different automatic adaptive threshold selection techniques for estimating discharge from river width

    Elmi, Omid; Javad Tourian, Mohammad; Sneeuw, Nico

    2015-04-01

    River discharge monitoring is critical for, e.g., water resource planning, climate change studies and hazard monitoring. River discharge has been measured at in situ gauges for more than a century. Despite various attempts, some basins are still ungauged. Moreover, a reduction in the number of worldwide gauging stations increases the interest in employing remote sensing data for river discharge monitoring. Finding an empirical relationship between simultaneous in situ measurements of discharge and river widths derived from satellite imagery has been introduced as a straightforward remote sensing alternative. Classifying water and land in an image is the primary task in defining the river width. Water appears dark in the near-infrared and infrared bands of satellite images, so low values in the histogram usually represent the water content. Applying a threshold to the image histogram and separating it into two classes is therefore one of the most efficient techniques to build a water mask. Despite its simple definition, finding the appropriate threshold value in each image is the most critical issue. The threshold varies due to changes in the water level, river extent, atmosphere, sunlight radiation and onboard calibration of the satellite over time. These complexities in water body classification are the main source of error in river width estimation. In this study, we look for the most efficient adaptive threshold algorithm to estimate the river discharge. To do this, all cloud-free MODIS images coincident with the in situ measurements are collected. Next, a number of automatic threshold selection techniques are employed to generate different dynamic water masks. Then, for each of them, a separate empirical relationship between river widths and discharge measurements is determined. Through these empirical relationships, we estimate river discharge at the gauge and then validate our results against in situ measurements and also
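    One classic automatic threshold selection technique of the kind such studies compare is Otsu's method, which scans the image histogram for the threshold maximizing the between-class variance of the two classes (here read as water vs. land). This is a generic sketch, not one of the specific algorithms evaluated in the study:

```python
def otsu_threshold(hist):
    """Return the bin index t maximizing between-class variance when the
    histogram is split into classes {0..t} and {t+1..}."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(len(hist) - 1):
        w0 += hist[t]                    # pixels in the low class
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum0 / w0                   # class means
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Under the dark-water assumption stated above, pixels at or below the returned bin would form the water mask.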

  19. Neural and fuzzy computation techniques for playout delay adaptation in VoIP networks.

    Ranganathan, Mohan Krishna; Kilmartin, Liam

    2005-09-01

    Playout delay adaptation algorithms are often used in real-time voice communication over packet-switched networks to counteract the effects of network jitter at the receiver. Whilst the conventional algorithms developed for silence-suppressed speech transmission focused on preserving the relative temporal structure of speech frames/packets within a talkspurt (intertalkspurt adaptation), more recently developed algorithms strive to achieve better quality by allowing for playout delay adaptation within a talkspurt (intratalkspurt adaptation). The adaptation algorithms, both intertalkspurt and intratalkspurt based, rely on short-term estimation of the characteristics of the network delay that will be experienced by upcoming voice packets. The use of novel neural networks and fuzzy systems as estimators of network delay characteristics is presented in this paper. Their performance is analyzed in comparison with a number of traditional techniques for both inter- and intratalkspurt adaptation paradigms. The design of a novel fuzzy trend analyzer system (FTAS) for network delay trend analysis and its usage in intratalkspurt playout delay adaptation are presented in greater detail. The performance of the proposed mechanism is analyzed based on measured Internet delays. Index Terms: fuzzy delay trend analysis, intertalkspurt, intratalkspurt, multilayer perceptrons (MLPs), network delay estimation, playout buffering, playout delay adaptation, time delay neural networks (TDNNs), voice over Internet protocol (VoIP). PMID:16252825
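    A representative "traditional technique" baseline is the exponentially weighted estimator, in which the playout delay is set to a smoothed delay mean plus a safety margin of four times a smoothed delay variation. The smoothing factor below is a commonly used value, and the sketch is illustrative rather than the paper's proposed method:

```python
def playout_delays(network_delays, alpha=0.875):
    """For each observed network delay, return the playout delay
    d_hat + 4*v_hat, where d_hat and v_hat are exponentially weighted
    estimates of the delay mean and mean absolute deviation."""
    d_hat = v_hat = None
    out = []
    for n in network_delays:
        if d_hat is None:
            d_hat, v_hat = n, 0.0                  # initialise on first packet
        else:
            d_hat = alpha * d_hat + (1 - alpha) * n
            v_hat = alpha * v_hat + (1 - alpha) * abs(d_hat - n)
        out.append(d_hat + 4 * v_hat)
    return out
```

The neural and fuzzy estimators in the paper play the same role as `d_hat`/`v_hat` here: predicting the delay an upcoming packet will experience, but from richer short-term features.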

  20. An efficient Video Segmentation Algorithm with Real time Adaptive Threshold Technique

    Yasira Beevi C P

    2009-12-01

    Full Text Available Automatic video segmentation plays an important role in real-time MPEG-4 encoding systems. This paper presents a video segmentation algorithm for an MPEG-4 camera system with change detection, background registration techniques and a real-time adaptive threshold technique. The algorithm gives satisfying segmentation results with a low computation load. Besides, it has a shadow cancellation mode, which can deal with light-changing and shadow effects. Furthermore, the algorithm implements a real-time adaptive threshold technique by which the parameters are decided automatically.
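    The change-detection step with an adaptive threshold can be sketched as follows: the threshold tracks the statistics of the current frame difference instead of being tuned by hand. Frames are flattened to 1-D pixel lists, and the factor k is an illustrative choice, not the paper's parameter:

```python
import statistics

def change_mask(prev, curr, k=2.0):
    """Binary change mask from two frames (flat pixel lists): a pixel is
    'changed' if its absolute difference exceeds mean + k*std of the
    frame difference, an adaptive per-frame threshold."""
    diff = [abs(a - b) for a, b in zip(prev, curr)]
    thr = sum(diff) / len(diff) + k * statistics.pstdev(diff)
    return [1 if d > thr else 0 for d in diff]
```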

  1. Optimal control techniques for the adaptive optics system of the LBT

    Agapito, G.; Quiros-Pacheco, F.; Tesi, P.; Esposito, S.; Xompero, M.

    2008-07-01

    In this paper we discuss the application of different optimal control techniques for the adaptive optics system of the LBT telescope, which comprises a pyramid wavefront sensor and an adaptive secondary mirror. We have studied the application of both the Kalman and the H∞ filter to estimate the temporal evolution of the phase perturbations due to atmospheric turbulence and telescope vibrations. We have evaluated the performance of these control techniques with numerical simulations in preparation for the laboratory tests that will be carried out in the Arcetri laboratories.
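    In scalar form, the Kalman idea can be sketched for a single perturbation mode modelled as an AR(1) process observed through noisy wavefront-sensor measurements; the coefficients a, q, r below are illustrative, not the system's identified values:

```python
def kalman_ar1(measurements, a=0.95, q=0.01, r=0.1):
    """Scalar Kalman filter for x[k+1] = a*x[k] + w (var q), observed as
    z[k] = x[k] + v (var r). Returns the filtered state estimates."""
    x, p = 0.0, 1.0                    # state estimate and its variance
    estimates = []
    for z in measurements:
        x, p = a * x, a * a * p + q    # predict the next phase value
        k = p / (p + r)                # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p
        estimates.append(x)
    return estimates
```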

  2. Constrained Optimization Based on Hybrid Evolutionary Algorithm and Adaptive Constraint-Handling Technique

    Wang, Yong; Cai, Zixing; Zhou, Yuren; Fan, Zhun

    2009-01-01

    A novel approach to deal with numerical and engineering constrained optimization problems, which incorporates a hybrid evolutionary algorithm and an adaptive constraint-handling technique, is presented in this paper. The hybrid evolutionary algorithm simultaneously uses simplex crossover and two...... four well-known constrained design problems verify the effectiveness and efficiency of the proposed method. The experimental results show that integrating the hybrid evolutionary algorithm with the adaptive constraint-handling technique is beneficial, and the proposed method achieves competitive...... performance with respect to some other state-of-the-art approaches in constrained evolutionary optimization....

  3. Raviart-Thomas-type sources adapted to applied EEG and MEG: implementation and results

    Pursiainen, S.

    2012-06-01

    This paper studies numerically electroencephalography and magnetoencephalography (EEG and MEG), two non-invasive imaging modalities in which external measurements of the electric potential and the magnetic field are, respectively, utilized to reconstruct the primary current density (neuronal activity) of the human brain. The focus is on adapting a Raviart-Thomas-type source model to meet the needs of EEG and MEG applications. The goal is to construct a model that provides an accurate approximation of dipole source currents and can be flexibly applied to different reconstruction strategies as well as to realistic computation geometries. The finite element method is applied in the simulation of the data. Least-squares fit interpolation is used to establish Cartesian source directions, which guarantee that the recovered current field is minimally dependent on the underlying finite element mesh. Implementation is explained in detail and made accessible, e.g., by using quadrature-free formulae and the Gaussian one-point rule in numerical integration. Numerical results are presented concerning, for example, the iterative alternating sequential inverse algorithm as well as resolution, smoothness and local refinement of the finite element mesh. Both spherical and pseudo-realistic head models, as well as real MEG data, are utilized in the numerical experiments.

  4. Raviart–Thomas-type sources adapted to applied EEG and MEG: implementation and results

    This paper studies numerically electroencephalography and magnetoencephalography (EEG and MEG), two non-invasive imaging modalities in which external measurements of the electric potential and the magnetic field are, respectively, utilized to reconstruct the primary current density (neuronal activity) of the human brain. The focus is on adapting a Raviart–Thomas-type source model to meet the needs of EEG and MEG applications. The goal is to construct a model that provides an accurate approximation of dipole source currents and can be flexibly applied to different reconstruction strategies as well as to realistic computation geometries. The finite element method is applied in the simulation of the data. Least-squares fit interpolation is used to establish Cartesian source directions, which guarantee that the recovered current field is minimally dependent on the underlying finite element mesh. Implementation is explained in detail and made accessible, e.g., by using quadrature-free formulae and the Gaussian one-point rule in numerical integration. Numerical results are presented concerning, for example, the iterative alternating sequential inverse algorithm as well as resolution, smoothness and local refinement of the finite element mesh. Both spherical and pseudo-realistic head models, as well as real MEG data, are utilized in the numerical experiments. (paper)

  5. Research on key techniques of virtual reality applied in mining industry

    LIAO Jun; LU Guo-bin

    2009-01-01

    Based on the applications of virtual reality technology in many fields, this paper introduces the basic concepts, system structures and related technical developments of virtual reality, surveys its current applications in the mining industry, and examines the related core software and hardware techniques, especially the optimization involved in setting up various 3D models and the real-time walkthrough of a virtual scene using stereoscopic display techniques. It then puts forward software and hardware solutions for applying virtual reality in the mining industry that can satisfy demands at different levels and in different aspects. Finally, it shows the fine prospects of virtual reality techniques applied in the mining industry.

  6. Investigation about the efficiency of the bioaugmentation technique when applied to diesel oil contaminated soils

    Adriano Pinto Mariano

    2009-10-01

    Full Text Available This work investigated the efficiency of the bioaugmentation technique when applied to diesel oil contaminated soils collected at three service stations. Batch biodegradation experiments were carried out in Bartha biometer flasks (250 mL) used to measure the microbial CO2 production. Biodegradation efficiency was also measured by quantifying the concentration of hydrocarbons. In addition to the biodegradation experiments, the capability of the studied cultures and the native microorganisms to biodegrade the diesel oil, purchased from a local service station, was verified using a technique based on the redox indicator 2,6-dichlorophenol indophenol (DCPIP). Results obtained with this test showed that the inocula used in the biodegradation experiments were able to degrade the diesel oil, and the tests carried out with the native microorganisms indicated that these soils had a microbiota adapted to degrade the hydrocarbons. In general, no gain was obtained with the addition of microorganisms, and in some cases negative effects were observed in the biodegradation experiments.

  7. Goal-based angular adaptivity applied to the spherical harmonics discretisation of the neutral particle transport equation

    Highlights: • A variable order spherical harmonics scheme is presented. • An adaptive process is proposed to automatically refine the angular resolution. • A regular error estimator and a goal-based error estimator are presented. • The adaptive methods are applied to fixed source and eigenvalue problems. • Adaptive methods give more accurate solutions than uniform angular resolution. - Abstract: A variable order spherical harmonics scheme has been described and employed for the solution of the neutral particle transport equation. The scheme is specifically described with application within the inner-element sub-grid scale finite element spatial discretisation. The angular resolution is variable across both the spatial and energy dimensions. That is, the order of the spherical harmonic expansion may differ at each node of the mesh for each energy group. The variable order scheme has been used to develop adaptive methods for the angular resolution of the particle transport phase-space. Two types of adaptive method have been developed and applied to examples. The first is regular adaptivity, in which the error in the solution over the entire domain is minimised. The second is goal-based adaptivity, in which the error in a specified functional is minimised. The methods were applied to fixed source and eigenvalue examples. Both methods demonstrate an improved accuracy for a given number of degrees of freedom in the angular discretisation

  8. Simulation of energy saving potential of a centralized HVAC system in an academic building using adaptive cooling technique

    Highlights: • We have simulated and validated the cooling loads of a multi-zone academic building in a tropical region. • We have analyzed the effect of occupancy patterns on the cooling loads. • An adaptive cooling technique has been utilized to minimize the energy usage of the HVAC system. • The results are promising and show energy savings in the range of 20–30%. - Abstract: Application of adaptive comfort temperatures as room temperature set points can potentially reduce the energy usage of an HVAC system during cooling and heating periods. The savings are mainly due to a higher indoor temperature set point during hot periods and a lower indoor temperature set point during cold periods than the recommended values. Numerous works have been carried out to show how much energy can be saved during cooling and heating periods by applying adaptive comfort temperatures. Previous work, however, focused on continuous cooling loads as found in many office and residential buildings. Therefore, this paper aims to simulate the energy saving potential for a glazed academic building in the tropical Malaysian climate by developing an adaptive cooling technique. A building simulation program (TRNSYS) was used to model the building and simulate the cooling load characteristics using the current and proposed techniques. Two experimental measurements were conducted and the results were used to validate the model. Finally, the cooling load characteristics of the academic building using the current and proposed techniques were compared, and the results showed that an annual energy saving potential of as much as 305,150 kW h can be achieved.
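    The core of the adaptive cooling idea can be illustrated with the ASHRAE-55 adaptive comfort relation, which ties the comfort temperature to the outdoor running-mean temperature; the clipping band below is an illustrative engineering choice, not the paper's exact implementation:

```python
def adaptive_setpoint(t_out_running_mean):
    """Cooling set point (deg C) from the ASHRAE-55 adaptive comfort
    relation T_comf = 0.31 * T_out + 17.8, clipped to a sane band
    (the 23-28 deg C limits are an assumed engineering choice)."""
    t_comf = 0.31 * t_out_running_mean + 17.8
    return min(max(t_comf, 23.0), 28.0)
```

In hot weather (e.g. a 30 °C running mean) the set point rises to about 27.1 °C instead of a fixed 24 °C; that higher set point in hot periods is the mechanism behind the reported 20–30% savings.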

  9. Frequency and Spatial Domains Adaptive-based Enhancement Technique for Thermal Infrared Images

    Debasis Chaudhuri

    2014-09-01

    Full Text Available A low-contrast, noisy image limits the amount of information conveyed to the user. With the proliferation of digital imagery and computer interfaces between man and machine, it is now viable to consider digitally enhancing the image before presenting it to the user, thus increasing the information throughput. With better contrast, target detection and discrimination can be improved. The paper presents a sequence of filtering operations in the frequency and spatial domains to improve the quality of thermal infrared (IR) images. Basically, two filters, a homomorphic filter followed by an adaptive Gaussian filter, are applied to improve the quality of the thermal IR images. We have systematically evaluated the algorithm on a variety of images and carefully compared it with the techniques presented in the literature. We performed an evaluation of three filtering schemes, homomorphic, 5×5 Gaussian and the proposed method, and found that the proposed method yields optimal PSNR for all the thermal images. The results demonstrate that the proposed algorithm is efficient for the enhancement of thermal IR images. Defence Science Journal, Vol. 64, No. 5, September 2014, pp.451-457, DOI: http://dx.doi.org/10.14429/dsj.64.6873
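    The homomorphic step can be sketched in one dimension: taking the logarithm turns the multiplicative illumination component into an additive low-frequency term, which is attenuated before exponentiating back. A real implementation filters in the 2-D frequency domain; the moving-average low-pass and the gain below are illustrative stand-ins:

```python
import math

def homomorphic_1d(signal, window=5, gain_low=0.5):
    """Homomorphic enhancement of a positive 1-D signal: log, attenuate
    the low-frequency (illumination) part, exponentiate back."""
    logs = [math.log(s) for s in signal]       # signal must be positive
    half = window // 2
    out = []
    for i in range(len(logs)):
        seg = logs[max(0, i - half): i + half + 1]
        low = sum(seg) / len(seg)              # moving-average low-pass
        high = logs[i] - low                   # reflectance-like detail
        out.append(math.exp(gain_low * low + high))
    return out
```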

  10. A SELF-ADAPTIVE TECHNIQUE FOR A KIND OF NONLINEAR CONJUGATE GRADIENT METHODS

    王丽平

    2004-01-01

    Conjugate gradient methods are a class of important methods for unconstrained optimization, especially when the dimension is large. In 2001, Dai and Liao proposed a new conjugate condition, based on which two nonlinear conjugate gradient methods were constructed. Using the trust region idea, this paper gives a self-adaptive technique for the two methods. The numerical results show that this technique works well for the given nonlinear optimization test problems.
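    For concreteness, a conjugate gradient iteration using the Dai-Liao condition beta_k = g_{k+1}^T (y_k - t*s_k) / (d_k^T y_k) can be sketched on a small quadratic with exact line search; the paper's contribution is choosing the parameter t self-adaptively, whereas here t is fixed for brevity:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def cg_dai_liao(A, b, x, iters=30, t=0.1):
    """Minimize 0.5*x^T A x - b^T x (A symmetric positive definite)
    with a Dai-Liao conjugate gradient direction update."""
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]   # gradient A x - b
    d = [-gi for gi in g]
    for _ in range(iters):
        Ad = matvec(A, d)
        denom = dot(d, Ad)
        if denom == 0:
            break                                      # converged
        alpha = -dot(g, d) / denom                     # exact line search
        s = [alpha * di for di in d]                   # step s_k
        x = [xi + si for xi, si in zip(x, s)]
        g_new = [gi - bi for gi, bi in zip(matvec(A, x), b)]
        y = [gn - gi for gn, gi in zip(g_new, g)]      # gradient change y_k
        dy = dot(d, y)
        if dy == 0:
            break
        beta = dot(g_new, [yi - t * si for yi, si in zip(y, s)]) / dy
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```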

  11. An Approach for Automatic Generation of Adaptive Hypermedia in Education with Multilingual Knowledge Discovery Techniques

    Alfonseca, Enrique; Rodriguez, Pilar; Perez, Diana

    2007-01-01

    This work describes a framework that combines techniques from Adaptive Hypermedia and Natural Language processing in order to create, in a fully automated way, on-line information systems from linear texts in electronic format, such as textbooks. The process is divided into two steps: an "off-line" processing step, which analyses the source text,…

  12. Adaptations in physiology and propulsion techniques during the initial phase of learning manual wheelchair propulsion

    de Groot, S; Veeger, H E J; Hollander, A P; van der Woude, L H V

    2003-01-01

    OBJECTIVE: The purpose of this study was to analyze adaptations in gross mechanical efficiency and wheelchair propulsion technique in novice able-bodied subjects during the initial phase of learning hand-rim wheelchair propulsion. DESIGN: Nine able-bodied subjects performed three 4-min practice bloc

  13. Database 'catalogue of techniques applied to materials and products of nuclear engineering'

    The database 'Catalogue of techniques applied to materials and products of nuclear engineering' (IS MERI) was developed to provide informational support for SSC RF RIAR and other enterprises in scientific investigations. This database contains information on the techniques used at RF Minatom enterprises for the investigation of reactor material properties. The main purpose of this system is to assess the current status of the reactor material science experimental base for the further planning of experimental activities and the improvement of methodical support. (author)

  14. Data Mining E-protokol - Applying data mining techniques on student absence

    Shrestha, Amardip; Bro Lilleås, Lauge; Hansen, Asbjørn

    2014-01-01

    The scope of this project is to explore the possibilities of applying data mining techniques to discover new knowledge about student absenteeism in primary school. The research consists in analyzing a large dataset collected through the digital protocol system E-protokol. The data mining techniques used for the analysis involve clustering, classification and association rule mining, which are applied using the machine learning toolset WEKA. The findings include a number of suggestions ...

  15. Sweep as a Generic Pruning Technique Applied to the Non-Overlapping Rectangles Constraint

    Beldiceanu, Nicolas; Carlsson, Mats

    2001-01-01

    We first present a generic pruning technique which aggregates several constraints sharing some variables. The method is derived from an idea called "sweep", which is extensively used in computational geometry. A first benefit of this technique comes from the fact that it can be applied to several families of global constraints. A second main advantage is that it does not lead to any memory consumption problem, since it only requires temporary memory that can be reclaimed after each invocat...
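    The sweep idea can be illustrated with a small sketch (an assumed, simplified setting, not the authors' constraint-filtering algorithm): a vertical line sweeps across x-events while an active set records the rectangles currently intersecting it, so only rectangles that overlap in x are ever tested for y-overlap.

```python
# Sweep-line overlap check for axis-aligned rectangles (x1, y1, x2, y2).
# Illustrative only: a feasibility test, not the pruning algorithm itself.

def any_overlap(rects):
    events = []  # (x, is_start, rectangle index)
    for i, (x1, y1, x2, y2) in enumerate(rects):
        events.append((x1, 1, i))
        events.append((x2, 0, i))
    events.sort(key=lambda e: (e[0], e[1]))  # process closings before openings
    active = set()
    for x, is_start, i in events:
        if is_start:
            _, y1, _, y2 = rects[i]
            for j in active:  # only x-overlapping rectangles are compared
                _, b1, _, b2 = rects[j]
                if y1 < b2 and b1 < y2:  # y-intervals intersect
                    return True
            active.add(i)
        else:
            active.discard(i)
    return False

print(any_overlap([(0, 0, 2, 2), (1, 1, 3, 3)]))  # True: interiors overlap
print(any_overlap([(0, 0, 2, 2), (2, 0, 4, 2)]))  # False: touching edges only
```

Sorting closings before openings at equal x is what makes rectangles that merely share an edge count as non-overlapping.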

  16. Development of Promising Insulating Oil and Applied Techniques of EHD, ER·MR

    Hanaoka, Ryoichi

    The development of environment-friendly insulating liquids has attracted attention for the design of oil-filled power apparatus, such as transformers, from the viewpoint of environmental protection. Dielectric liquids can also be applied widely in various fields involving electromagnetic phenomena. This article introduces recent trends in promising new vegetable-based oils as electrical insulation, and EHD pumping, ER fluids and MR fluids as applied techniques of dielectric liquids.

  17. A Novel Adaptive Elite-Based Particle Swarm Optimization Applied to VAR Optimization in Electric Power Systems

    2014-01-01

    Particle swarm optimization (PSO) has been successfully applied to solve many practical engineering problems. However, more efficient strategies are needed to coordinate global and local searches in the solution space when the studied problem is extremely nonlinear and highly dimensional. This work proposes a novel adaptive elite-based PSO approach. The adaptive elite strategies involve the following two tasks: (1) appending the mean search to the original approach and (2) pruning/cloning par...
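    As a point of reference, the baseline that such variants extend is the standard PSO loop, sketched below on the sphere function (a generic textbook implementation with assumed coefficients, not the adaptive elite variant of the paper):

```python
# Minimal standard PSO minimizing a function f; w, c1, c2 are common
# textbook values, and the search box [-5, 5] is an assumed example.
import random

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rnd = random.Random(seed)
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]            # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, val = pso(lambda p: sum(x * x for x in p))
print(val < 1e-2)  # converges close to the optimum at the origin
```

The elite strategies described in the abstract would be layered on top of this loop, e.g. by adding a mean-based search step after each iteration.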

  18. Goal-based angular adaptivity applied to a wavelet-based discretisation of the neutral particle transport equation

    A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. -- Highlights: •Wavelet angular discretisation used to solve transport equation. •Adaptive method developed for the wavelet discretisation. •Anisotropic angular resolution demonstrated through the adaptive method. •Adaptive method provides improvements in computational efficiency

  19. Goal-based angular adaptivity applied to a wavelet-based discretisation of the neutral particle transport equation

    Goffin, Mark A., E-mail: mark.a.goffin@gmail.com [Applied Modelling and Computation Group, Department of Earth Science and Engineering, Imperial College London, London, SW7 2AZ (United Kingdom); Buchan, Andrew G.; Dargaville, Steven; Pain, Christopher C. [Applied Modelling and Computation Group, Department of Earth Science and Engineering, Imperial College London, London, SW7 2AZ (United Kingdom); Smith, Paul N. [ANSWERS Software Service, AMEC, Kimmeridge House, Dorset Green Technology Park, Winfrith Newburgh, Dorchester, Dorset, DT2 8ZB (United Kingdom); Smedley-Stevenson, Richard P. [AWE, Aldermaston, Reading, RG7 4PR (United Kingdom)

    2015-01-15

    A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. -- Highlights: •Wavelet angular discretisation used to solve transport equation. •Adaptive method developed for the wavelet discretisation. •Anisotropic angular resolution demonstrated through the adaptive method. •Adaptive method provides improvements in computational efficiency.

  20. Dual Adaptive Filtering by Optimal Projection Applied to Filter Muscle Artifacts on EEG and Comparative Study

    Samuel Boudet

    2014-01-01

    Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the dual adaptive filtering by optimal projection (DAFOP) method to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists of applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components, which are considered to be cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists of using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults, subject to pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts, with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can easily be used in clinical EEG recordings.

  1. Assessment of Multi-Joint Coordination and Adaptation in Standing Balance: A Novel Device and System Identification Technique.

    Engelhart, Denise; Schouten, Alfred C; Aarts, Ronald G K M; van der Kooij, Herman

    2015-11-01

    The ankles and hips play an important role in maintaining standing balance, and the coordination between joints adapts with task and conditions, such as the disturbance magnitude and type, and changes with age. Assessment of multi-joint coordination requires the application of multiple continuous and independent disturbances and closed-loop system identification techniques (CLSIT). This paper presents a novel device, the double inverted pendulum perturbator (DIPP), which can apply disturbing forces at the hip level and between the shoulder blades. In addition to the disturbances, the device can provide force fields to study adaptation of multi-joint coordination. The performance of the DIPP and a novel CLSIT was assessed by identifying a system with known mechanical properties and by model simulations. A double inverted pendulum was successfully identified, while force fields were able to keep the pendulum upright. The estimated dynamics were similar to the theoretically derived dynamics. The DIPP has a sufficient bandwidth of 7 Hz to identify multi-joint coordination dynamics. An experiment with human subjects, in which a stabilizing force field (1500 N/m) was rendered at the hip, showed that subjects adapt by lowering their control actions around the ankles. The stiffness from upper and lower segment motion to ankle torque dropped by 30% and 48%, respectively. Our methods make it possible to study (pathological) changes in multi-joint coordination as well as the adaptive capacity to maintain standing balance. PMID:25423654

  2. Strategies and techniques of communication and public relations applied to non-profit sector

    Ioana – Julieta Josan

    2010-05-01

    The aim of this paper is to summarize the strategies and techniques of communication and public relations applied to the non-profit sector. The approach of the paper is to identify the most appropriate strategies and techniques that the non-profit sector can use to accomplish its objectives, to highlight specific differences between the strategies and techniques of the profit and non-profit sectors, and to identify potential communication and public relations actions to increase visibility among the target audience, create brand awareness and turn the target audience's perception of the non-profit sector into positive brand sentiment.

  3. Applying Modern Techniques and Carrying Out English Extracurriculars: On the Model United Nations Activity

    XuXiaoyu; WangJian

    2004-01-01

    This paper is an introduction to the extracurricular activity of the Model United Nations at Northwestern Polytechnical University (NPU), focusing on the application of modern techniques in the activity and the pedagogical theories applied in it. An interview and questionnaire study reveals the influence of the Model United Nations.

  4. An Adaptive Technique for a Redundant-Sensor Navigation System. Ph.D. Thesis

    Chien, T. T.

    1972-01-01

    An on-line adaptive technique is developed to provide a self-contained redundant-sensor navigation system with the capability to utilize its full potential in reliability and performance. The gyro navigation system is modeled as a Gauss-Markov process, with degradation modes defined as changes in characteristics specified by parameters associated with the model. The adaptive system is formulated as a multistage stochastic process: (1) a detection system, (2) an identification system and (3) a compensation system. It is shown that the sufficient statistic for the partially observable process in the detection and identification systems is the posterior measure of the state of degradation, conditioned on the measurement history.

  5. Multi-Level Adaptive Techniques (MLAT) for singular-perturbation problems

    Brandt, A.

    1978-01-01

    The multilevel (multigrid) adaptive technique, a general strategy of solving continuous problems by cycling between coarser and finer levels of discretization, is described. It provides very fast general solvers, together with adaptive, nearly optimal discretization schemes. In the process, boundary layers are automatically either resolved or skipped, depending on a control function which expresses the computational goal. The global error decreases exponentially as a function of the overall computational work, at a uniform rate independent of the magnitude of the singular-perturbation terms. The key is high-order uniformly stable difference equations and uniformly smoothing relaxation schemes.
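    The coarse/fine cycling can be conveyed by a minimal two-grid sketch for the 1-D Poisson equation -u'' = f (an illustration of the multigrid principle only; the grid sizes, damping factor and transfer operators are standard textbook choices, not taken from the paper):

```python
# Two-grid cycle for -u'' = f on (0,1), Dirichlet BCs: damped-Jacobi
# smoothing on the fine grid, full-weighting restriction of the residual,
# exact coarse solve (Thomas algorithm), linear-interpolation correction.

def thomas(a, b, c, d):
    # Solve a tridiagonal system: a sub-, b main, c super-diagonal, d rhs.
    n = len(d)
    bp, cp, dp = b[:], c[:], d[:]
    for i in range(1, n):
        m = a[i] / bp[i - 1]
        bp[i] -= m * cp[i - 1]
        dp[i] -= m * dp[i - 1]
    x = [0.0] * n
    x[-1] = dp[-1] / bp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (dp[i] - cp[i] * x[i + 1]) / bp[i]
    return x

def residual(u, f, h):
    n = len(u)
    return [f[i] - (2 * u[i]
                    - (u[i - 1] if i > 0 else 0.0)
                    - (u[i + 1] if i < n - 1 else 0.0)) / h**2
            for i in range(n)]

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    for _ in range(sweeps):
        r = residual(u, f, h)
        u = [u[i] + w * (h * h / 2) * r[i] for i in range(len(u))]
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h)                       # pre-smooth high frequencies
    r = residual(u, f, h)
    nc = (len(u) - 1) // 2
    rc = [0.25 * (r[2 * j] + 2 * r[2 * j + 1] + r[2 * j + 2])
          for j in range(nc)]                 # full-weighting restriction
    H = 2 * h
    ec = thomas([-1 / H**2] * nc, [2 / H**2] * nc, [-1 / H**2] * nc, rc)
    e = [0.0] * len(u)                        # prolongate the correction
    for j in range(nc):
        e[2 * j + 1] += ec[j]
        e[2 * j] += 0.5 * ec[j]
        e[2 * j + 2] += 0.5 * ec[j]
    u = [u[i] + e[i] for i in range(len(u))]
    return jacobi(u, f, h)                    # post-smooth

n, h = 15, 1.0 / 16.0
f = [1.0] * n
u = [0.0] * n
r0 = max(abs(x) for x in residual(u, f, h))
u = two_grid(u, f, h)
r1 = max(abs(x) for x in residual(u, f, h))
print(r1 < 0.2 * r0)  # one cycle cuts the residual sharply
```

Smoothing alone stalls on the smooth error components; the coarse-grid correction removes exactly those, which is why the combination converges at a rate independent of the grid size.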

  6. Difficulties applying recent blind source separation techniques to EEG and MEG

    Knuth, Kevin H

    2015-01-01

    High temporal resolution measurements of human brain activity can be performed by recording the electric potentials on the scalp surface (electroencephalography, EEG), or by recording the magnetic fields near the surface of the head (magnetoencephalography, MEG). The analysis of the data is problematic due to the fact that multiple neural generators may be simultaneously active and the potentials and magnetic fields from these sources are superimposed on the detectors. It is highly desirable to un-mix the data into signals representing the behaviors of the original individual generators. This general problem is called blind source separation and several recent techniques utilizing maximum entropy, minimum mutual information, and maximum likelihood estimation have been applied. These techniques have had much success in separating signals such as natural sounds or speech, but appear to be ineffective when applied to EEG or MEG signals. Many of these techniques implicitly assume that the source distributions hav...

  7. ADAPTATION OF CRACK GROWTH DETECTION TECHNIQUES TO US MATERIAL TEST REACTORS

    A. Joseph Palmer; Sebastien P. Teysseyre; Kurt L. Davis; Gordon Kohse; Yakov Ostrovsky; David M. Carpenter; Joy L. Rempe

    2015-04-01

    A key component in evaluating the ability of Light Water Reactors to operate beyond 60 years is characterizing the degradation of materials exposed to radiation and various water chemistries. Of particular concern is the response of reactor materials to Irradiation Assisted Stress Corrosion Cracking (IASCC). Some test reactors outside the United States, such as the Halden Boiling Water Reactor (HBWR), have developed techniques to measure crack growth propagation during irradiation. The basic approach is to use a custom-designed compact loading mechanism to stress the specimen during irradiation, while the crack in the specimen is monitored in-situ using the Direct Current Potential Drop (DCPD) method. In 2012 the US Department of Energy commissioned the Idaho National Laboratory and the MIT Nuclear Reactor Laboratory (MIT NRL) to take the basic concepts developed at the HBWR and adapt them to a test rig capable of conducting in-pile IASCC tests in US Material Test Reactors. The first two and a half years of the project consisted of designing and testing the loader mechanism, testing individual components of the in-pile rig and electronic support equipment, and autoclave testing of the rig design prior to insertion in the MIT Reactor. The load was applied to the specimen by means of a scissor-like mechanism, actuated by a miniature metal bellows driven by pneumatic pressure and sized to fit within the small in-core irradiation volume. In addition to the loader design, technical challenges included developing robust connections to the specimen for the applied current and voltage measurements, finding appropriate ceramic insulating materials that can endure the LWR environment, dealing with the high electromagnetic noise environment of a reactor core at full power, and accommodating material property changes in the specimen, due primarily to fast neutron damage, which change the specimen resistance without additional crack growth. The project culminated with an in

  8. How to Apply Student-centered Teaching Techniques in a Large Class

    李焱

    2008-01-01

    It is very common to have a class of 50 or more students in Chinese schools, and teaching a foreign language effectively to a large class is really hard work. In order to change the teacher-centered teaching model into a student-centered one, teachers should keep students' needs, interests, and learning styles in mind, apply several kinds of teaching techniques, organize different classroom activities, and encourage, praise and appreciate both students' success and the learning process all the time. If teachers place more responsibility in the hands of students and serve as "presenter or facilitator of knowledge" instead of "source of all knowledge", they can greatly motivate students to learn the language in a very active, cooperative and effective way. After all, people learn by doing, not only by watching and listening.

  9. Measurement of the magnitude of force applied by students when learning a mobilisation technique

    E. Smit

    2003-02-01

    Passive accessory intervertebral movements (PAIVMs) are frequently used by physiotherapists in the assessment and management of patients. Studies investigating the reliability of passive mobilisation techniques have shown conflicting results. Therefore, standardisation of PAIVMs is essential for research and teaching purposes, which could result in better clinical management. In order to standardise graded passive mobilisation techniques, a reliable, easy-to-use, objective measurement tool must be used. The aim of this study was to determine whether it is necessary to quantify the magnitude of force applied when teaching a grade I central posteroanterior (PA) mobilisation technique (according to Maitland) on the cervical spine. An objective measurement tool (FlexiForceTM) was used to determine the consistency of force applied by third- and fourth-year physiotherapy students while performing this technique. Twenty third- and 20 fourth-year physiotherapy students (n=40) were randomly selected. Each subject performed a grade I central PA on sensors placed on C6 for 25 seconds. The average maximum grade I force applied by the third-year students was significantly higher than the force applied by the fourth-year students (p=0.034). There was a significantly larger variation in applied force among third years (p=0.00043). The results indicate that the current teaching method is insufficient to ensure inter-therapist reliability amongst students, emphasising the need for an objective measurement tool when teaching students. The measurement tool used in this study is economical, easily applied and an efficient method of measuring the magnitude of force. Further research is needed to demonstrate the reliability and validity of the tool to assist teaching and research in a clinical setting.

  10. Prediction of radical scavenging activities of anthocyanins applying adaptive neuro-fuzzy inference system (ANFIS) with quantum chemical descriptors.

    Jhin, Changho; Hwang, Keum Taek

    2014-01-01

    Radical scavenging activity of anthocyanins is well known, but only a few studies have been conducted by quantum chemical approach. The adaptive neuro-fuzzy inference system (ANFIS) is an effective technique for solving problems with uncertainty. The purpose of this study was to construct and evaluate quantitative structure-activity relationship (QSAR) models for predicting radical scavenging activities of anthocyanins with good prediction efficiency. ANFIS-applied QSAR models were developed by using quantum chemical descriptors of anthocyanins calculated by semi-empirical PM6 and PM7 methods. Electron affinity (A) and electronegativity (χ) of flavylium cation, and ionization potential (I) of quinoidal base were significantly correlated with radical scavenging activities of anthocyanins. These descriptors were used as independent variables for QSAR models. ANFIS models with two triangular-shaped input fuzzy functions for each independent variable were constructed and optimized by 100 learning epochs. The constructed models using descriptors calculated by both PM6 and PM7 had good prediction efficiency with Q-square of 0.82 and 0.86, respectively. PMID:25153627

  11. Development of ultrasound Doppler velocimetry technique applying cross-correlation processing

    An ultrasound Doppler velocimetry (UDV) technique applying the Doppler effect has been developed for measuring velocity distributions of sodium flow. As the Doppler shift frequency is proportional to the velocity of microparticles carried by the flowing liquid, it is possible to evaluate velocity distributions of the flowing liquid from the Doppler shift frequency. In this report, a technique applying cross-correlation processing is proposed to derive the Doppler shift frequency from the echoes of ultrasonic pulses. Verification studies of the proposed technique are conducted based on simulated echoes and actual echoes in water tests. The main results are as follows: (1) As a result of verification studies conducted on the simulated echoes, the relative error of the proposed technique is about 1 percent. (2) The proposed technique is an effective measure for the reduction of noise signals. (3) The velocity distributions of water flowing in a pipe are evaluated in the experiments. The velocity distributions evaluated by the proposed technique are almost equivalent to those of turbulent flow evaluated by the 1/7th power law. (author)
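    The cross-correlation step can be sketched as follows (simulated echoes and an assumed sampling rate, not the report's implementation): the lag that maximizes the cross-correlation of two successive echoes gives the transit-time shift from which the particle velocity would be derived.

```python
# Brute-force cross-correlation to estimate the sample lag between two
# simulated ultrasonic echoes. All pulse parameters are illustrative.
import math

def xcorr_shift(a, b):
    # Return the lag (in samples) at which b best matches a.
    n = len(a)
    best_lag, best_val = 0, -float("inf")
    for lag in range(-n + 1, n):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += a[i] * b[j]
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

fs = 1000.0  # sampling rate in Hz (assumed)
def pulse(t0):
    # Gaussian-windowed 50 Hz tone burst centred at time t0.
    return [math.exp(-((i / fs - t0) ** 2) / 1e-4)
            * math.sin(2 * math.pi * 50 * (i / fs - t0))
            for i in range(200)]

echo1 = pulse(0.05)
echo2 = pulse(0.065)      # the scatterer moved: echo arrives 15 ms later
lag = xcorr_shift(echo1, echo2)
print(lag / fs)           # 0.015 s time shift between the echoes
```

In practice an FFT-based correlation would replace the O(n^2) loop, and the recovered time shift per pulse interval maps directly to the Doppler frequency.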

  12. Coastal Adaptation to Climate Change. Can the IPCC Technical Guidelines be applied?

    This paper evaluates the IPCC Technical Guidelines for Assessing Climate Change Impacts and Adaptations with respect to the guidance offered for coastal-adaptation assessment. It appears that the IPCC Technical Guidelines focus strongly on implementation. Both conceptual and empirical information are used in this paper to show that coastal adaptation embraces more than selecting one of the 'technical' options to respond to sea-level rise (retreat, accommodate or protect). Coastal adaptation is a more complex and iterative process with a series of policy cycles. To be effective, an expanded adaptation framework involving four steps is suggested, including (1) information collection and awareness raising; (2) planning and design; (3) implementation; and (4) monitoring and evaluation. The incomplete coverage of these four steps in existing coastal-adaptation assessments constrains the development of adaptation strategies that are supported by the relevant actors and integrated into existing management. Researchers and policy-makers are recommended to work together to establish a framework for adaptation that is integrated within current coastal management processes and practices and takes a broader view of the subject. 46 refs

  13. Adaptive Fuzzy Output-Feedback Method Applied to Fin Control for Time-Delay Ship Roll Stabilization

    Rui Bai

    2014-01-01

    The ship roll stabilization by fin control system is considered in this paper. Assuming that the angular velocity in roll cannot be measured, an adaptive fuzzy output-feedback control is investigated. The fuzzy logic system is used to approximate the uncertain term of the controlled system, and a fuzzy state observer is designed to estimate the unmeasured states. By utilizing the fuzzy state observer and combining the adaptive backstepping technique with adaptive fuzzy control design, an observer-based adaptive fuzzy output-feedback control approach is developed. It is proved that the proposed control approach can guarantee that all the signals in the closed-loop system are semiglobally uniformly ultimately bounded (SGUUB), and the control strategy is effective in decreasing the roll motion. Simulation results are included to illustrate the effectiveness of the proposed approach.

  14. Optimization technique applied to interpretation of experimental data and research of constitutive laws

    The feasibility of an identification technique applied to one-dimensional numerical analysis of the split-Hopkinson pressure bar experiment is proven. A general 1-D elastic-plastic-viscoplastic computer program was written to give an adequate solution for the elastic-plastic-viscoplastic response of a pressure bar subjected to a general Heaviside step loading function in time, applied over one end of the bar. Special emphasis is placed on the response of the specimen during the first microseconds, where no equilibrium conditions can be stated. During this transient phase, discontinuity conditions related to wave propagation are encountered and must be carefully taken into account. Once an adequate numerical model had been derived, the Pontryagin identification technique was applied in such a way that the unknowns are physical parameters. The solutions depend mainly on the selection of a class of proper eigen objective functionals (cost functions) which may be combined so as to obtain a convenient numerical objective function. A number of significant questions arising in the choice of parameter adjustment algorithms are discussed. In particular, this technique leads to a two-point boundary value problem which has been solved using an iterative gradient-like technique usually referred to as a double operator gradient method. This method combines the classical Fletcher-Powell technique and a partial quadratic technique with automatic parameter step size selection, and is much more efficient than the usual ones. Numerical experimentation with simulated data was performed to test the accuracy and stability of the identification algorithm and to determine the most adequate type and quantity of data for estimation purposes

  15. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    L Potgieter

    2012-12-01

    A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of, and interaction between, normal and sterile E. saccharina moths in a temporally variable but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified, and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models have been formulated in the past describing the sterile insect technique, few of these models describe the technique for Lepidopteran species with more than one life stage and where F1-sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
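    The principle behind such difference-equation models can be conveyed by a toy sketch (hypothetical Beverton-Holt growth with assumed parameter values, not the E. saccharina model of the paper): sterile releases S dilute the fraction of fertile matings, and a sufficiently large S drives the population towards collapse.

```python
# Toy difference-equation model of the sterile insect technique.
# Growth is Beverton-Holt; the factor n/(n+S) models the fertile-mating
# fraction when S sterile males are released each generation.
def simulate(n0, R=5.0, M=2500.0, S=0.0, gens=40):
    n = n0
    for _ in range(gens):
        fertile = n / (n + S) if n + S > 0 else 0.0
        n = R * n * fertile / (1.0 + n / M)   # Beverton-Holt growth
    return n

print(simulate(1000) > 5000)            # no releases: pest settles near M*(R-1)
print(simulate(1000, S=20000.0) < 1.0)  # heavy releases: population collapses
```

The dilution term creates an Allee-like effect: below a release-dependent threshold the per-generation growth rate falls under one, so the infestation shrinks generation by generation.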

  16. Renormalization techniques applied to the study of density of states in disordered systems

    A general scheme for real space renormalization of formal scattering theory is presented and applied to the calculation of density of states (DOS) in some finite width systems. This technique is extended in a self-consistent way, to the treatment of disordered and partially ordered chains. Numerical results of moments and DOS are presented in comparison with previous calculations. In addition, a self-consistent theory for the magnetic order problem in a Hubbard chain is derived and a parametric transition is observed. Properties of localization of the electronic states in disordered chains are studied through various decimation averaging techniques and using numerical simulations. (author)

  17. Phase-shifting technique applied to circular harmonic-based joint transform correlator

    2000-01-01

    The phase-shifting technique is applied to the circular harmonic expansion-based joint transform correlator. Computer simulation has shown that the light efficiency and the discrimination capability are greatly enhanced, and the full rotation invariance is preserved after the phase-shifting technique has been used. A rotation-invariant optical pattern recognition with high discrimination capability and high light efficiency is obtained. The influence of the additive noise on the performance of the correlator is also investigated. However, the anti-noise capability of this kind of correlator still needs improving.

  18. Best Available Technique (BAT) assessment applied to ACR-1000 waste and heavy water management systems

    The ACR-1000 design is the next evolution of the proven CANDU reactor design. One of the key objectives for this project was to systematically apply the As Low As Reasonably Achievable (ALARA) principle to the reactor design. The ACR design team selected the Best Available Technique (BAT) assessment for this purpose to document decisions made during the design of each ACR-1000 waste and heavy water management systems. This paper describes the steps in the BAT assessment that has been applied to the ACR-1000 design. (author)

  19. Micro combined heat and power home supply: Prospective and adaptive management achieved by computational intelligence techniques

    Matics, Jens; Krost, Gerhard [University of Duisburg-Essen, Bismarckstrasse 81, D-47057 Duisburg (Germany)

    2008-11-15

    Micro combined heat and power (CHP) systems for single residential buildings are seen as advantageous in combining decentralized power supply with rather high overall efficiency. The latter presupposes a flexible and adaptive plant management which has to mediate between energy cost minimization and user comfort aspects. This is achieved by use of computational intelligence (CI) techniques; the structure and performance of the management system are shown. (author)

  20. Observer-Based Control Techniques for the LBT Adaptive Optics under Telescope Vibrations

    Agapito, Guido; Quirós-Pacheco, Fernando; Tesi, Pietro; Riccardi, Armando; Esposito, Simone

    2011-01-01

    This paper addresses the application of observer-based control techniques for the adaptive optics system of the LBT telescope. In such a context, attention is focused on the use of Kalman and H∞ filters to estimate the temporal evolution of phase perturbations due to the atmospheric turbulence and the telescope vibrations acting on tip/tilt modes. We shall present preliminary laboratory experiments carried out at the Osservatorio Astrofisico di Arcetri using the Kalman filter.
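    The flavour of such observer-based estimation can be conveyed by a minimal scalar Kalman filter (a generic textbook sketch with assumed noise variances, far simpler than the tip/tilt vibration filters discussed in the paper):

```python
# Scalar Kalman filter tracking a constant offset through noisy
# measurements, assuming random-walk process dynamics. Illustrative only.
import random

def kalman_track(zs, q=1e-4, r=0.04):
    x, p = 0.0, 1.0               # state estimate and its variance
    out = []
    for z in zs:
        p += q                    # predict: add random-walk process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # correct with measurement z
        p *= (1 - k)
        out.append(x)
    return out

rnd = random.Random(0)
truth = 0.3                       # the "tilt" value to be estimated
zs = [truth + rnd.gauss(0, 0.2) for _ in range(200)]
est = kalman_track(zs)
err_raw = sum(abs(z - truth) for z in zs) / len(zs)
err_kf = sum(abs(e - truth) for e in est) / len(est)
print(err_kf < err_raw)           # the filtered estimate beats raw measurements
```

In the adaptive-optics setting the state would instead follow an identified vibration model, and the filter output would feed the deformable-mirror command.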

  1. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    Zhiwei Ye; Mingwei Wang; Zhengbing Hu; Wei Liu

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique, using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO), to enhance low-contrast images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three fa...

  2. A Novel Approach of Harris Corner Detection of Noisy Images using Adaptive Wavelet Thresholding Technique

    Dey, Nilanjan; Nandi, Pradipti; Barman, Nilanjana

    2012-01-01

    In this paper we propose a method of corner detection for obtaining features which are required to track and recognize objects within a noisy image. Corner detection in noisy images is a challenging task in image processing, as natural images often get corrupted by noise during acquisition and transmission. Since corner detection on such noisy images does not provide the desired results, de-noising is required, and an adaptive wavelet thresholding approach is applied for this purpose.
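    A single-level Haar soft-thresholding pass conveys the underlying de-noising idea (illustrative only; the paper's adaptive threshold selection is not reproduced, and the fixed threshold below is an assumed value):

```python
# One-level Haar wavelet de-noising by soft-thresholding the detail
# coefficients; input length must be even. Parameters are illustrative.
import random

def haar_denoise(x, thresh):
    avg = [(x[2*i] + x[2*i+1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    det = [(x[2*i] - x[2*i+1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    # soft threshold: shrink each detail coefficient towards zero
    soft = [max(abs(d) - thresh, 0.0) * (1 if d > 0 else -1) for d in det]
    y = []
    for a, d in zip(avg, soft):   # inverse Haar transform
        y += [(a + d) / 2 ** 0.5, (a - d) / 2 ** 0.5]
    return y

rnd = random.Random(42)
clean = [1.0 if 20 <= i < 40 else 0.0 for i in range(64)]   # step signal
noisy = [c + rnd.gauss(0, 0.1) for c in clean]
den = haar_denoise(noisy, thresh=0.3)
mse_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean)) / 64
mse_den = sum((a - b) ** 2 for a, b in zip(den, clean)) / 64
print(mse_den < mse_noisy)        # thresholding reduces the error
```

An adaptive scheme would pick the threshold per subband from the estimated noise level (e.g. from the median absolute detail coefficient) instead of fixing it by hand.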

  3. Adaptive Finite Elements for Systems of PDEs: Software Concepts, Multi-level Techniques and Parallelization

    Vey, Simon

    2008-01-01

In the recent past, the field of scientific computing has become more and more important for scientific as well as industrial research, playing a role comparable to those of experiment and theory. This success of computational methods in scientific and engineering research is, next to the enormous improvement of computer hardware, to a large extent due to contributions from applied mathematicians, who have developed algorithms that make real-life applications feasible. Examples are adaptive ...

  4. Sustainable Modular Adaptive Redundancy Technique Emphasizing Partial Reconfiguration for Reduced Power Consumption

    R. Al-Haddad

    2011-01-01

Full Text Available As reconfigurable devices' capacities and the complexity of applications that use them increase, the need for self-reliance of deployed systems becomes increasingly prominent. Organic computing paradigms have been proposed for fault-tolerant systems because they promote behaviors that allow complex digital systems to adapt and survive in demanding environments. In this paper, we develop a sustainable modular adaptive redundancy technique (SMART) composed of a two-layered organic system. The hardware layer is implemented on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) to provide self-repair using a novel approach called the reconfigurable adaptive redundancy system (RARS). The software layer supervises the organic activities on the FPGA and extends the self-healing capabilities through application-independent, intrinsic, and evolutionary repair techniques that leverage the benefits of dynamic partial reconfiguration (PR). SMART was evaluated using a Sobel edge-detection application and was shown to tolerate stressful sequences of injected transient and permanent faults while reducing dynamic power consumption by 30% compared to conventional triple modular redundancy (TMR) techniques, with nominal impact on the fault-tolerance capabilities. Moreover, PR is employed to keep the system online while under repair and also to reduce repair time. Experiments have shown a 27.48% decrease in repair time when PR is employed compared to the full bitstream configuration case.

  5. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blending of cuckoo search and particle swarm optimization (CS-PSO) to adaptively enhance low-contrast images. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
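The enhancement transform itself can be sketched as follows: normalized intensities are mapped through the regularized incomplete Beta function and its shape parameters (a, b) are chosen to maximize an entropy fitness. A plain grid search stands in here for the paper's CS-PSO optimizer, and all parameter ranges are illustrative.

```python
import numpy as np

def beta_transform(u, a, b, steps=1024):
    # Regularized incomplete Beta function I_u(a, b), computed numerically
    # by trapezoidal integration of the Beta density.
    t = np.linspace(1e-6, 1 - 1e-6, steps)
    pdf = t ** (a - 1) * (1 - t) ** (b - 1)
    cdf = np.concatenate(([0.0], np.cumsum((pdf[1:] + pdf[:-1]) * np.diff(t) / 2)))
    cdf /= cdf[-1]
    return np.interp(u, t, cdf)

def entropy(img, bins=64):
    # Shannon entropy of the gray-level histogram (one possible fitness term).
    p, _ = np.histogram(img, bins=bins, range=(0, 1))
    p = p[p > 0] / img.size
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(2)
low_contrast = 0.4 + 0.2 * rng.random((32, 32))   # intensities squeezed into [0.4, 0.6]

# Grid search over (a, b) as a stand-in for CS-PSO.
best = max(((entropy(beta_transform(low_contrast, a, b)), a, b)
            for a in (0.5, 1, 2, 4) for b in (0.5, 1, 2, 4)))
h_best, a_best, b_best = best
print(h_best > entropy(low_contrast))   # True: contrast (entropy) improved
```

With a = b = 1 the transform reduces to the identity, so the search can never do worse than the input; steeper (a, b) pairs stretch the mid-tones, which is what raises the entropy of this low-contrast example.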

  6. Applying Advocacy Skills in Tumultuous Times: Adaptive Capacity of Insuring America's Children Grantees

    Jung Y. Kim; Victoria Peebles; Christopher A. Trenholm

    2012-01-01

    In 2007, the David and Lucile Packard Foundation established Insuring America's Children to help secure health care for all children. A new brief and executive summary document how state-based grantees responded and adapted to unprecedented changes in children's coverage over the past several years. As state economic and political contexts shifted, grantees adapted strategic partnerships, assuming new and expanded leadership roles in state-based coalitions and forging partnerships with nontra...

  7. Co-evolutionary and Reinforcement Learning Techniques Applied to Computer Go players

    Zela Moraya, Wester Edison

    2013-01-01

The objective of this thesis is to model processes from nature, such as evolution and co-evolution, and to propose techniques that ensure these learning processes actually occur and are useful for solving complex problems such as the game of Go. Go is an ancient and very complex game with simple rules that remains a challenge for Artificial Intelligence. This dissertation covers some approaches that have been applied to this problem, proposing to solve it using competiti...

  8. The digital geometric phase technique applied to the deformation evaluation of MEMS devices

Quantitative evaluation of the structure deformation of microfabricated electromechanical systems is of importance for the design and functional control of microsystems. In this investigation, a novel digital geometric phase technique was developed to meet the deformation evaluation requirement of microelectromechanical systems (MEMS). The technique is performed on the basis of regular artificial lattices, instead of a natural atom lattice. The regular artificial lattices, with a pitch ranging from micrometer to nanometer, are directly fabricated on the measured surface of MEMS devices by using a focused ion beam (FIB). Phase information can be obtained from the Bragg-filtered images after fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT) of the scanning electron microscope (SEM) images. Then the in-plane displacement field and the local strain field related to the phase information can be evaluated. The obtained results show that the technique can be well applied to deformation measurement with nanometer sensitivity and stiction force estimation of a MEMS device.
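The FFT, Bragg filter, IFFT, phase chain described above can be sketched in 1-D: the phase of the filtered lattice signal recovers the imposed in-plane displacement. The lattice pitch and displacement below are illustrative values, not from the MEMS experiments.

```python
import numpy as np

n, pitch = 512, 8.0                      # samples, lattice pitch in pixels (assumed)
x = np.arange(n)
u0 = 1.3                                 # uniform in-plane displacement, pixels (assumed)
ref = np.cos(2 * np.pi * x / pitch)      # reference lattice image (1-D cut)
cur = np.cos(2 * np.pi * (x - u0) / pitch)  # deformed lattice image

def bragg_phase(sig):
    # FFT, keep only the +first-order Bragg peak, IFFT, take the phase.
    spec = np.fft.fft(sig)
    k0 = int(round(n / pitch))           # carrier-frequency bin
    mask = np.zeros(n)
    mask[k0 - 3:k0 + 4] = 1.0
    return np.angle(np.fft.ifft(spec * mask))

# Wrapped phase difference between deformed and reference lattices.
dphi = np.angle(np.exp(1j * (bragg_phase(cur) - bragg_phase(ref))))
u_est = -dphi * pitch / (2 * np.pi)      # displacement from the phase difference
print(round(float(np.median(u_est)), 2))   # 1.3
```

In the actual technique the same operation is done in 2-D on SEM images, and the local strain follows from spatial gradients of the recovered phase.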

  9. GPU peer-to-peer techniques applied to a cluster interconnect

    Ammendola, Roberto; Biagioni, Andrea; Bisson, Mauro; Fatica, Massimiliano; Frezza, Ottorino; Cicero, Francesca Lo; Lonardo, Alessandro; Mastrostefano, Enrico; Paolucci, Pier Stanislao; Rossetti, Davide; Simula, Francesco; Tosoratto, Laura; Vicini, Piero

    2013-01-01

Modern GPUs support special protocols to exchange data directly across the PCI Express bus. While these protocols could be used to reduce GPU data transmission times, basically by avoiding staging to host memory, they require specific hardware features which are not available on current generation network adapters. In this paper we describe the architectural modifications required to implement peer-to-peer access to NVIDIA Fermi- and Kepler-class GPUs on an FPGA-based cluster interconnect. In addition, we discuss the current software implementation, which integrates this feature by minimally extending the RDMA programming model, as well as some issues raised while employing it in a higher-level API like MPI. Finally, the current limits of the technique are studied by analyzing the performance improvements on low-level benchmarks and on two GPU-accelerated applications, showing when and how they seem to benefit from the GPU peer-to-peer method.

  10. The block adaptive multigrid method applied to the solution of the Euler equations

    Pantelelis, Nikos

    1993-01-01

In the present study, a scheme capable of solving complex nonlinear systems of equations quickly and robustly is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (an 18-fold acceleration of the solution) using one fourth of the volumes of a global grid with the same solution accuracy for two test cases.

  11. Comparison between different techniques applied to quartz CPO determination in granitoid mylonites

    Fazio, Eugenio; Punturo, Rosalda; Cirrincione, Rosolino; Kern, Hartmut; Wenk, Hans-Rudolph; Pezzino, Antonino; Goswami, Shalini; Mamtani, Manish

    2016-04-01

Since the second half of the last century, several techniques have been adopted to resolve the crystallographic preferred orientation (CPO) of major minerals constituting crustal and mantle rocks. To this aim, many efforts have been made to increase the accuracy of such analytical devices as well as to progressively reduce the time needed to perform microstructural analysis. It is worth noting that many of these microstructural studies deal with quartz CPO because of the wide occurrence of this mineral phase in crustal rocks as well as its quite simple chemical composition. In the present work, four different techniques were applied to define CPOs of dynamically recrystallized quartz domains from naturally deformed rocks collected from a ductile crustal-scale shear zone, in order to compare their advantages and limitations. The selected Alpine shear zone is located in the Aspromonte Massif (Calabrian Peloritani Orogen, southern Italy) and comprises granitoid lithotypes. The adopted methods span from the "classical" universal stage (US) to the image analysis technique (CIP), electron back-scattered diffraction (EBSD), and time-of-flight neutron diffraction (TOF). When compared, bulk texture pole figures obtained by means of these different techniques show a good correlation. Advances in analytical techniques used for microstructural investigations are outlined by discussing the results of quartz CPO presented in this study.

  12. Wire-mesh and ultrasound techniques applied for the characterization of gas-liquid slug flow

    Ofuchi, Cesar Y.; Sieczkowski, Wytila Chagas; Neves Junior, Flavio; Arruda, Lucia V.R.; Morales, Rigoberto E.M.; Amaral, Carlos E.F.; Silva, Marco J. da [Federal University of Technology of Parana, Curitiba, PR (Brazil)], e-mails: ofuchi@utfpr.edu.br, wytila@utfpr.edu.br, neves@utfpr.edu.br, lvrarruda@utfpr.edu.br, rmorales@utfpr.edu.br, camaral@utfpr.edu.br, mdasilva@utfpr.edu.br

    2010-07-01

    Gas-liquid two-phase flows are found in a broad range of industrial applications, such as chemical, petrochemical and nuclear industries and quite often determine the efficiency and safety of process and plants. Several experimental techniques have been proposed and applied to measure and quantify two-phase flows so far. In this experimental study the wire-mesh sensor and an ultrasound technique are used and comparatively evaluated to study two-phase slug flows in horizontal pipes. The wire-mesh is an imaging technique and thus appropriated for scientific studies while ultrasound-based technique is robust and non-intrusive and hence well suited for industrial applications. Based on the measured raw data it is possible to extract some specific slug flow parameters of interest such as mean void fraction and characteristic frequency. The experiments were performed in the Thermal Sciences Laboratory (LACIT) at UTFPR, Brazil, in which an experimental two-phase flow loop is available. The experimental flow loop comprises a horizontal acrylic pipe of 26 mm diameter and 9 m length. Water and air were used to produce the two phase flow under controlled conditions. The results show good agreement between the techniques. (author)
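The two slug-flow parameters named above can be extracted from a raw void-fraction time series as sketched below: the mean void fraction is a simple average, and the characteristic (slug passage) frequency is the dominant FFT peak. The synthetic signal and sampling rate are illustrative, not experimental data from the flow loop.

```python
import numpy as np

fs = 100.0                              # sampling rate, Hz (assumed)
t = np.arange(0, 60, 1 / fs)
f_slug = 0.8                            # true slug passage frequency, Hz (assumed)
rng = np.random.default_rng(3)

# Synthetic void-fraction signal: liquid slugs alternating with gas pockets.
void = (0.45 + 0.3 * np.sign(np.sin(2 * np.pi * f_slug * t))
        + 0.05 * rng.standard_normal(t.size))
void = np.clip(void, 0.0, 1.0)

mean_void = void.mean()                 # mean void fraction

# Characteristic frequency = dominant peak of the zero-mean spectrum.
spec = np.abs(np.fft.rfft(void - mean_void))
freqs = np.fft.rfftfreq(void.size, 1 / fs)
f_char = freqs[np.argmax(spec)]

print(round(f_char, 2))   # 0.8
```

With real wire-mesh or ultrasound data the same two numbers are what the techniques are compared on in the paper.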

  13. Pulsed laser deposition: the road to hybrid nanocomposites coatings and novel pulsed laser adaptive technique.

    Serbezov, Valery

    2013-01-01

The applications of Pulsed Laser Deposition (PLD) for producing nanoparticles, nanostructures and nanocomposite coatings based on recently developed laser ablation techniques and their convergence are reviewed. The problems of in situ synthesis of hybrid inorganic-organic nanocomposite coatings by these techniques are discussed. A novel modification of PLD called the Pulsed Laser Adaptive Deposition (PLAD) technique is presented. The in situ synthesized inorganic/organic nanocomposite coatings from Magnesium (Mg) alloy/Rhodamine B and Mg alloy/Desoximetasone by PLAD are described. The trends, applications and future development of the discussed patented methods based on laser ablation technologies for producing hybrid nanocomposite coatings are also discussed in this review. PMID:22747717

  14. Applying Content-Based Image Retrieval Techniques to Provide New Services for Tourism Industry

    Zobeir Raisi

    2014-09-01

Full Text Available The aim of this paper is to use the network and internet, and to apply content-based image retrieval techniques, to provide new services for the tourism industry. The assumption is that when a tourist encounters an interesting subject, he or she can take an image of the subject with a handheld device and send it to the server as the query image of a CBIR system. In the server, images similar to the query are retrieved and the results are returned to the handheld device to be shown in a web browser. Then, the tourist can access useful information about the subject by clicking on one of the retrieved images. For this purpose, a tourism database is created. Then several particular content-based image retrieval techniques are selected and applied to the database. Among these techniques, the 'Edge Histogram Descriptor' (EHD) and 'Color Layout Descriptor' (CLD) algorithms have better retrieval performance than the others. By combining and modifying these two methods, a new CBIR algorithm is proposed for this application. Simulation results show a high retrieval performance for the proposed algorithm.
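The retrieval step can be sketched as ranking database images by a weighted combination of distances over the two descriptors. The descriptor vectors below are random stand-ins for real EHD (80-bin) and CLD (a few DCT coefficients) features, and the equal 0.5/0.5 weighting is an illustrative choice, not the paper's tuned combination.

```python
import numpy as np

rng = np.random.default_rng(4)
n_db = 50
ehd_db = rng.random((n_db, 80))    # stand-in for MPEG-7 EHD (80-bin histogram)
cld_db = rng.random((n_db, 12))    # stand-in for CLD (DCT coefficients)

# Query = database image 7 plus mild distortion (simulating a tourist's photo).
q_idx = 7
ehd_q = ehd_db[q_idx] + 0.01 * rng.standard_normal(80)
cld_q = cld_db[q_idx] + 0.01 * rng.standard_normal(12)

def rank(ehd_q, cld_q, w_ehd=0.5, w_cld=0.5):
    # Normalized L1 distances per descriptor, combined with fixed weights.
    d_ehd = np.abs(ehd_db - ehd_q).sum(axis=1) / 80
    d_cld = np.abs(cld_db - cld_q).sum(axis=1) / 12
    return np.argsort(w_ehd * d_ehd + w_cld * d_cld)

print(rank(ehd_q, cld_q)[0])   # 7: the distorted query retrieves its source image
```

The server-side system would return the top-ranked images to the handheld device; clicking one then links to the stored information about that subject.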

  15. Action research for climate change adaptation : Developing and applying knowledge for governance

    Buuren, van A.; Eshuis, J.; Vliet, van M.

    2015-01-01

    Governments all over the world are struggling with the question of how to adapt to climate change. They need information not only about the issue and its possible consequences, but also about feasible governance strategies and instruments to combat it. At the same time, scientists from different soc

  16. Automatic Domain Adaptation of Word Sense Disambiguation Based on Sublanguage Semantic Schemata Applied to Clinical Narrative

    Patterson, Olga

    2012-01-01

    Domain adaptation of natural language processing systems is challenging because it requires human expertise. While manual effort is effective in creating a high quality knowledge base, it is expensive and time consuming. Clinical text adds another layer of complexity to the task due to privacy and confidentiality restrictions that hinder the…

  17. Adaptive bone-remodeling theory applied to prosthetic-design analysis

    R. Huiskes (Rik); H.H. Weinans (Harrie); H.J. Grootenboer; M. Dalstra; B. Fudala; T.J. Slooff

    1987-01-01

The subject of this article is the development and application of computer-simulation methods to predict stress-related adaptive bone remodeling, in accordance with 'Wolff's Law'. These models are based on the Finite Element Method (FEM) in combination with numerical formulations of adap

  18. ADAPTING E-COURSES USING DATA MINING TECHNIQUES - PDCA APPROACH AND QUALITY SPIRAL

    Marija Blagojevic

    2013-09-01

Full Text Available This paper presents an approach to adapting e-courses based on an original PDCA (Plan, Do, Check, Act) platform and quality spiral. An algorithm for the adaptation of e-courses was proposed and implemented in the Moodle Learning Management System at the Faculty of Technical Sciences, Cacak. The approach is primarily based on improving LMS (Learning Management Systems) or e-learning systems by modifying the electronic structure of the courses through prediction of the behaviour patterns of the users. The prediction of user behaviour patterns was done using data mining techniques. Future research will focus on modelling the continuous improvement of the original system based on the evaluation results carried out at the end of each PDCA cycle. Additionally, future work will aim at evaluating the effects of the system based on the achievements and positive feedback of the users.

  19. A spectral identification technique for adaptive attitude control and pointing of the Space Telescope

    Teuber, D. L.

    1976-01-01

The Space Telescope is a 2.4 m class aperture optical telescope having near-diffraction-limited performance. It will be placed into earth orbit by 1980 via the Space Shuttle. The problem considered is how to achieve negligible degradation of the astronomy imaging capability (to 0.005 arc second) due to smearing by pointing motions during observations. Initially, pointing instability sources were identified, and a linear stability analysis was used to assess the magnitude of elastic body modes and to design the control system compensation regions necessary for subsequent adaptive control. A spectral identification technique for this adaptive attitude control and pointing has been investigated that will alleviate requirements for comprehensive dynamic ground testing. Typical all-digital simulation results describing motions of the telescope line of sight are presented.

  20. An adaptive p-refinement strategy applied to nodal expansion method in 3D Cartesian geometry

Highlights: • An adaptive p-refinement approach is developed and implemented successfully in ACNEM. • The proposed strategy enhances the accuracy with regard to the uniform zeroth order solution. • Improvement of results is gained by less computation time relative to the uniform high order solution. - Abstract: The aim of this work is to develop a coarse mesh treatment strategy using an adaptive polynomial (p) refinement approach for the average current nodal expansion method in order to solve the neutron diffusion equation. For the adaptive solution process, an a posteriori error estimation scheme, i.e. the flux gradient, has been utilized for finding the probable numerical errors. High net leakage in a node represents a flux gradient between neighboring nodes and may indicate the source of errors for the coarse mesh calculation. Therefore, the relative Cartesian directional net leakage of nodes is considered as an assessment criterion for mesh refinement in a sub-domain. In our proposed approach, the zeroth order nodal expansion solution is used along coarse meshes as large as fuel assemblies to treat neutron populations. Coarse nodes with high directional net leakage may be chosen for implementing higher order polynomial expansion in the corresponding direction, i.e. the X and/or Y and/or Z Cartesian directions. Using this strategy, the computational cost and time are reduced relative to the uniform high order polynomial solution. In order to demonstrate the efficiency of this approach, a computer program, APNEC, Adaptive P-refinement Nodal Expansion Code, has been developed for solving the neutron diffusion equation using various orders of the average current nodal expansion method in 3D rectangular geometry. Some well-known benchmarks are investigated to compare the uniform and adaptive solutions. Results demonstrate the superiority of our proposed strategy in enhancing the accuracy of solution without using uniform high order solution throughout the domain and
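The refinement criterion described above can be sketched as follows: per Cartesian direction, flag the coarse nodes whose relative directional net leakage exceeds a threshold and raise the expansion order only there. The leakage values and the 0.8 threshold are illustrative, not from the APNEC benchmarks.

```python
import numpy as np

rng = np.random.default_rng(5)
n_nodes = 27
# Net leakage of each coarse node in the X, Y, Z directions (arbitrary units).
leak = rng.random((n_nodes, 3))
order = np.zeros((n_nodes, 3), dtype=int)        # start from zeroth-order expansion

def refine(leak, order, threshold=0.8):
    # Relative leakage: each direction normalized by its maximum over all nodes.
    rel = leak / leak.max(axis=0)
    order = order.copy()
    order[rel > threshold] += 1                  # raise the order only where flagged
    return order

new_order = refine(leak, order)
# Only a fraction of (node, direction) pairs is refined, so the cost stays
# below that of a uniformly high-order solution.
print(0 < (new_order > 0).sum() < new_order.size)  # True
```

In the actual code this selection would be repeated each adaptive pass, with the higher-order nodal expansion re-solved on the flagged nodes.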

  1. Adaptive MIMO Fuzzy Compensate Fuzzy Sliding Mode Algorithm: Applied to Second Order Nonlinear System

    Farzin Piltan, N. Sulaiman, Payman Ferdosali, Mehdi Rashidi, Zahra Tajpeikar

    2011-12-01

Full Text Available This research is focused on proposed adaptive fuzzy sliding mode algorithms with the adaptation laws derived in the Lyapunov sense. The stability of the closed-loop system is proved mathematically based on the Lyapunov method. The adaptive MIMO fuzzy compensate fuzzy sliding mode method designs a MIMO fuzzy system to compensate for the model uncertainties of the system, and chattering is also solved by the linear saturation method. Since there is no tuning method to adjust the premise part of the fuzzy rules, we present a scheme to online tune the consequence part of the fuzzy rules. Classical sliding mode control is robust to model uncertainties and external disturbances. A sliding mode method with a switching control law guarantees the stability of the certain and/or uncertain system, but the addition of the switching control law introduces chattering into the system. One way to reduce or eliminate chattering is to insert a boundary layer method inside a boundary layer around the sliding surface. The classical sliding mode control method has difficulty in handling unstructured model uncertainties. One can overcome this problem by combining a sliding mode controller and artificial intelligence (e.g. fuzzy logic). To approximate a time-varying nonlinear dynamic system, a fuzzy system requires a large fuzzy rule base. This large number of fuzzy rules will cause a high computation load. The addition of an adaptive law to a fuzzy sliding mode controller to online tune the parameters of the fuzzy rules in use will ensure a moderate computational load. The adaptive laws in this algorithm are designed based on the Lyapunov stability theorem. Asymptotic stability of the closed-loop system is also proved in the sense of Lyapunov.
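The chattering remedy described above can be sketched on a plain second-order plant: a sliding-mode controller whose discontinuous sign() switching term is replaced by a linear saturation inside a boundary layer. The double-integrator plant, gains, and disturbance are illustrative assumptions, not the paper's MIMO fuzzy system.

```python
import numpy as np

def sat(s, phi):
    # Boundary-layer saturation replacing sign(s) to suppress chattering.
    return np.clip(s / phi, -1.0, 1.0)

lam, k, phi = 2.0, 5.0, 0.1       # sliding slope, switching gain, layer width (assumed)
dt, n = 1e-3, 10000
x, v = 1.0, 0.0                   # initial position error and velocity

for i in range(n):
    s = v + lam * x               # sliding surface s = e_dot + lam * e
    u = -lam * v - k * sat(s, phi)   # equivalent control + smoothed switching term
    d = 0.5 * np.sin(0.01 * i)    # bounded matched disturbance, |d| < k
    # Double-integrator plant: x_dot = v, v_dot = u + d (explicit Euler step).
    v += (u + d) * dt
    x += v * dt

print(abs(x) < 0.05 and abs(v) < 0.2)   # True: state driven near the origin
```

Because the switching gain k exceeds the disturbance bound, s is driven into the boundary layer in finite time; inside the layer the control is linear in s, so the chattering of a pure sign() law does not appear.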

  2. Adaptive Neuro-Fuzzy Inference System Applied QSAR with Quantum Chemical Descriptors for Predicting Radical Scavenging Activities of Carotenoids

    Changho Jhin; Keum Taek Hwang

    2015-01-01

    One of the physiological characteristics of carotenoids is their radical scavenging activity. In this study, the relationship between radical scavenging activities and quantum chemical descriptors of carotenoids was determined. Adaptive neuro-fuzzy inference system (ANFIS) applied quantitative structure-activity relationship models (QSAR) were also developed for predicting and comparing radical scavenging activities of carotenoids. Semi-empirical PM6 and PM7 quantum chemical calculations were...

  3. The adaptation of GDL motion recognition system to sport and rehabilitation techniques analysis.

    Hachaj, Tomasz; Ogiela, Marek R

    2016-06-01

The main novelty of this paper is presenting the adaptation of the Gesture Description Language (GDL) methodology to sport and rehabilitation data analysis and classification. In this paper we showed that the Lua language can be successfully used for adapting the GDL classifier to those tasks. The newly applied scripting language allows easy extension and integration of the classifier with other software technologies and applications. The obtained execution speed allows using the methodology in real-time motion capture data processing, where the capturing frequency ranges from 100 Hz to even 500 Hz depending on the number of features or classes to be calculated and recognized. Due to this fact the proposed methodology can be used with high-end motion capture systems. We anticipate that this novel, efficient and effective method will greatly help both sports trainers and physiotherapists in their practice. The proposed approach can be directly applied to motion capture data kinematics analysis (evaluation of motion without regard to the forces that cause that motion). The ability to apply pattern recognition methods to GDL descriptions can be utilized in virtual reality environments and used for sport training or rehabilitation treatment. PMID:27106581

  4. Applied methods and techniques for mechatronic systems modelling, identification and control

    Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya

    2014-01-01

    Applied Methods and Techniques for Mechatronic Systems brings together the relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. The clear structure of each paper, background - motivation - quantitative development (equations) - case studies/illustration/tutorial (curve, table, etc.) is also helpful. It is mainly aimed at graduate students, professors and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...

  5. Software engineering techniques applied to agricultural systems an object-oriented and UML approach

    Papajorgji, Petraq J

    2014-01-01

Software Engineering Techniques Applied to Agricultural Systems presents cutting-edge software engineering techniques for designing and implementing better agricultural software systems based on the object-oriented paradigm and the Unified Modeling Language (UML). The focus is on the presentation of rigorous step-by-step approaches for modeling flexible agricultural and environmental systems, starting with a conceptual diagram representing elements of the system and their relationships. Furthermore, diagrams such as sequential and collaboration diagrams are used to explain the dynamic and static aspects of the software system. This second edition includes: a new chapter on Object Constraint Language (OCL), a new section dedicated to the Model-View-Controller (MVC) design pattern, new chapters presenting details of two MDA-based tools – the Virtual Enterprise and Olivia Nova, and a new chapter with exercises on conceptual modeling. It may be highly useful to undergraduate and graduate students as t...

  6. Guidelines for depth data collection in rivers when applying interpolation techniques (kriging) for river restoration

    M. Rivas-Casado

    2007-05-01

Full Text Available River restoration appraisal requires the implementation of monitoring programmes that assess the river site before and after the restoration project. However, little work has yet been done to design effective and efficient sampling strategies. Three main variables need to be considered when designing monitoring programmes: space, time and scale. The aim of this paper is to describe the methodology applied to analyse the variation of depth in space, scale and time so that more comprehensive monitoring programmes can be developed. Geostatistical techniques were applied to study the spatial dimension (sampling strategy and density), spectral analysis was used to study the scale at which depth shows cyclic patterns, whilst descriptive statistics were used to assess the temporal variation. A brief set of guidelines has been summarised in the conclusion.

  7. The two-step electrochemical etching technique applied for polycarbonate track etched detectors

The two-step electrochemical etching technique was optimized by varying the electrical field strength and applied to the polycarbonate track detector, Makrofol DE, for neutron dosimetry and radon monitoring using an electric field strength of 26.7 kV·cm-1. In comparison with the previously applied combination of chemical and electrochemical etching, the neutron response was increased, above a threshold energy of about 1.5 MeV, by a factor of 3 to a value of 52 tracks·cm-2·mSv-1. The background track density and its standard deviation of (6±2) cm-2 allow the detection of about 0.1 mSv. The alpha energy range was extended from an alpha window of about 1.5 MeV to an alpha energy range of 0.5 to 4 MeV. (author)
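The quoted detection limit of about 0.1 mSv can be checked with a one-line calculation, assuming the detectable dose is taken as twice the background standard deviation divided by the neutron response (the response is treated here as a track density per unit dose, tracks·cm-2·mSv-1; the exact criterion is not stated in the abstract).

```python
# Illustrative detection-limit check, assuming a 2-sigma criterion.
response = 52.0        # neutron response, tracks per cm^2 per mSv
sigma_bg = 2.0         # standard deviation of the background track density, cm^-2

detectable_dose = 2 * sigma_bg / response
print(round(detectable_dose, 2))   # 0.08 mSv, i.e. about 0.1 mSv
```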

  8. Investigation of finite difference recession computation techniques applied to a nonlinear recession problem

This report presents comparisons of the results of five implicit and explicit finite difference recession computation techniques with results from a more accurate "benchmark" solution applied to a simple one-dimensional nonlinear ablation problem. In the comparison problem a semi-infinite solid is subjected to a constant heat flux at its surface and the rate of recession is controlled by the solid material's latent heat of fusion. All thermal properties are assumed constant. The five finite difference methods include three front-node-dropping schemes, a back-node-dropping scheme, and a method in which the ablation problem is embedded in an inverse heat conduction problem and no nodes are dropped. Constancy of thermal properties and the semi-infinite and one-dimensional nature of the problem at hand are not necessary assumptions in applying the methods studied to more general problems. The best of the methods studied will be incorporated into APL's Standard Heat Transfer Program.

  9. Contact Nd:YAG Laser Technique Applied To Head And Neck Reconstructive Surgery

    Nobori, Takuo; Miyazaki, Yasuhiro; Moriyama, Ichiro; Sannikorn, Phakdee; Ohyama, Masaru

    1989-09-01

The contact Nd:YAG laser system with a ceramic tip was applied to head and neck reconstructive surgery. Plastic surgery was performed in 78 patients with head and neck diseases during the past 11 years. Since 1984, reconstructive surgery was performed on 60 of these patients, and in 45 cases (75%) the contact Nd:YAG laser was used. Using this laser technique, blood loss during the operation was half that of the conventional procedure.

  10. Grid-based Moment Tensor Inversion Technique Applied to Earthquakes Offshore of Northeast Taiwan

    Cheng, H.; Lee, S.; Ma, K.

    2010-12-01

We use a grid-based moment tensor inversion technique and broadband continuous recordings for real-time monitoring of earthquakes offshore northeast Taiwan. The moment tensor inversion technique and a grid search scheme are applied to obtain the source parameters, including the hypocenter, moment magnitude, and focal mechanism. In Taiwan, routine moment tensor solutions are reported by the CWB (Central Weather Bureau) and BATS (Broadband Array in Taiwan for Seismology), both of which require some lag time for information on event time and location before performing CMT (Centroid Moment Tensor) analysis. By using the grid-based moment tensor inversion technique, the event location and focal mechanism can be obtained simultaneously within about two minutes of the occurrence of the earthquake. This inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. The northeast offshore of Taiwan has been taken as our first test area, which covers the region from 121.5E to 123E and 23.5N to 25N, down to a depth of 136 km. A 3D grid system is set up in this study area with an average grid size of 10 x 10 x 10 km3. We compare our results with past earthquakes from 2008 to 2010 that had been analyzed by BATS CMT. We also compare the event times detected by GridMT with the CWB earthquake reports. The results indicate that the grid-based moment tensor inversion system is efficient and feasible for real-time monitoring of local seismic activity. Our long-term goal is to use the GridMT technique with fully 3-D Green's functions for the whole of Taiwan in the future.
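The grid-search idea can be sketched with a simplified location-only example: over a 3-D grid of trial hypocenters, pick the point whose predicted arrival times best fit the observations in a least-squares sense. A constant velocity and a known origin time replace the paper's 1-D Green's function database and waveform misfit; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
stations = rng.uniform(0, 100, size=(8, 3))      # station coordinates, km (assumed)
stations[:, 2] = 0.0                             # stations at the surface
v = 6.0                                          # constant P velocity, km/s (assumed)
true_src = np.array([42.0, 58.0, 12.0])          # true hypocenter, km

# Observed travel times with small picking noise; origin time assumed known.
obs = np.linalg.norm(stations - true_src, axis=1) / v
obs += 0.02 * rng.standard_normal(obs.size)

# 10 km grid over the search volume, echoing the abstract's 10x10x10 km3 cells.
xs = ys = np.arange(0.0, 101.0, 10.0)
zs = np.arange(0.0, 131.0, 10.0)
best, best_misfit = None, np.inf
for x in xs:
    for y in ys:
        for z in zs:
            pred = np.linalg.norm(stations - np.array([x, y, z]), axis=1) / v
            m = np.sum((obs - pred) ** 2)        # least-squares travel-time misfit
            if m < best_misfit:
                best, best_misfit = np.array([x, y, z]), m

print(np.linalg.norm(best - true_src) < 10.0)    # True: within one grid cell
```

In GridMT the misfit at each grid point is instead computed between observed and precomputed synthetic waveforms, which is what also yields the moment tensor at the best-fitting point.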

  11. The Service Laboratory - A GTZ-BgVV project: Health protection through adapted veterinary diagnostic techniques

The customary diagnostic methods of today have been developed in industrialized countries. High costs for personnel resulted in a trend towards automation and prefabricated test kits. Consequently, these techniques are not sufficiently adapted to local conditions in developing countries, where, as a rule, skilled and ancillary staff is available whereas foreign currency reserves for purchasing laboratory equipment and material from abroad are rather limited. Furthermore, the training of personnel from developing countries has usually been oriented towards the non-transferable standards and methods of industrialized countries. This leads to a long-term dependence of the diagnostic services on external funding. A diagnostic technology adapted to the specific local conditions of developing countries is needed to overcome this situation. The project activities concentrate on serological diagnostic work. Here, basic knowledge of the common diagnostic techniques and their set-up for specific diseases, methods for the production of related reagents (antigens, antibodies, conjugates, complement, etc.) and cleaning procedures for the reuse of 'one way' plastic material is spread by training programmes, specific publications and information leaflets. For two of the more complex test procedures, the most frequently quoted prescribed test for international trade, CFT, and the increasingly important ELISA (OIE, Manual of Standards for Diagnostic Techniques, Paris, 1992), we have calculated the cost reduction potential of adaptation through self-production of reagents and reuse of plastic materials. Material costs per microtitre test plate for the diagnosis of brucellosis can be reduced from US $3.79 to 0.82 for CFT and from US $3.88 to 1.13 for ELISA. In comparison, commercial ELISA kits cost about US $80 to 90 per plate (e.g. Bommeli, IDEXX, Boehringer)

  12. Applying computer adaptive testing to optimize online assessment of suicidal behavior: a simulation study.

    Beurs, D.P. de; Vries, A.L.M. de; Groot, M.H. de; Keijser, J. de; Kerkhof, A.J.F.M.

    2014-01-01

    Background The Internet is used increasingly for both suicide research and prevention. To optimize online assessment of suicidal patients, there is a need for short, good-quality tools to assess elevated risk of future suicidal behavior. Computer adaptive testing (CAT) can be used to reduce response burden and improve accuracy, and make the available pencil-and-paper tools more appropriate for online administration. Objective The aim was to test whether an item response–based computer adaptiv...
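The abstract breaks off mid-sentence, but the CAT loop it describes is standard: pick the unasked item that is most informative at the current ability estimate, record the response, update the estimate, repeat. The sketch below assumes a generic two-parameter logistic (2PL) item response model with invented item parameters, not the authors' suicidal-behavior instrument.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2PL item bank: discrimination a, difficulty b.
a = rng.uniform(0.8, 2.0, 50)
b = rng.uniform(-2.0, 2.0, 50)

def p_correct(theta, a, b):
    """2PL probability of endorsing an item at latent trait level theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def information(theta, a, b):
    """Fisher information of an item at theta: a^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1 - p)

theta_true, theta_hat = 1.0, 0.0
grid = np.linspace(-4, 4, 161)
loglik = np.zeros_like(grid)
asked = []

for _ in range(15):
    # Adaptive selection: unused item with maximal information at theta_hat.
    info = information(theta_hat, a, b)
    info[asked] = -np.inf
    item = int(np.argmax(info))
    asked.append(item)
    # Simulate the examinee's response and update the likelihood over the grid.
    resp = rng.random() < p_correct(theta_true, a[item], b[item])
    p = p_correct(grid, a[item], b[item])
    loglik += np.log(p if resp else 1 - p)
    theta_hat = grid[np.argmax(loglik)]

print(len(asked), round(float(theta_hat), 2))
```

The response-burden reduction the abstract mentions comes from this selection rule: 10-15 well-chosen items typically pin down theta about as well as a full fixed-length questionnaire.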

  13. The Subarray MVDR Beamformer: A Space-Time Adaptive Processor Applied to Active Sonar

    Bezanson, Leverett Guidroz

    The research for this thesis was mainly performed at the NATO Undersea Research Centre, now named the Centre for Maritime Research and Experimentation (CMRE). The purpose of the research was to improve the detection of underwater targets in the littoral ocean when using active sonar. Currently these detections are made by towed line arrays using a delay-and-sum beamformer for bearing measurements and noise suppression. This method of beamforming can suffer from the reverberation that is commonly present in the littoral environment. A proposed solution is to use an adaptive beamformer, which can attenuate reverberation and increase the bearing resolution. Adaptive beamforming algorithms have existed for a long time but are typically not used in the active case due to the limited amount of observable data available for adaptation. This deficiency is caused by the conflicting requirements of high Doppler resolution for target detection and small time windows for building up full-rank covariance estimates. The algorithms are also sensitive to the bearing estimate errors that commonly occur in active sonar systems. Recently it has been proposed to overcome these limitations through the use of reduced beamspace adaptive beamforming. The Subarray MVDR beamformer is analyzed, both against simulated data and against experimental data collected by CMRE during the GLINT/NGAS11 experiment in 2011. Simulation results indicate that the Subarray MVDR beamformer rejects interfering signals that are not effectively attenuated by conventional beamforming. The application of the Subarray MVDR beamformer to the experimental data shows that the Doppler spread of the reverberation ridge is reduced and the bearing resolution improved. The signal-to-noise ratio, calculated at the target location, also shows improvement. These calculated and observed performance metrics indicate an improvement of detection in reverberation noise.
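The MVDR principle behind the thesis can be sketched on a hypothetical full-aperture uniform line array (the subarray/beamspace reduction and the sonar processing chain are not modeled): the weights w = R⁻¹a / (aᴴR⁻¹a) minimize output power subject to unit gain in the look direction, which is what places the deep null on the interferer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical uniform line array: 16 elements, half-wavelength spacing.
n = 16
def steering(angle_deg):
    k = np.pi * np.sin(np.radians(angle_deg))   # phase step at lambda/2 spacing
    return np.exp(1j * k * np.arange(n))

# Snapshots: weak target at 10 deg buried under a strong interferer at 30 deg.
snaps = 500
target = 0.1 * steering(10)[:, None] * rng.standard_normal(snaps)
interf = 10.0 * steering(30)[:, None] * rng.standard_normal(snaps)
noise = 0.1 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps)))
x = target + interf + noise

# MVDR weights w = R^-1 a / (a^H R^-1 a), with diagonal loading for stability.
R = x @ x.conj().T / snaps + 1e-3 * np.eye(n)
a = steering(10)
Ri_a = np.linalg.solve(R, a)
w = Ri_a / (a.conj() @ Ri_a)

def power(w, angle):
    return np.abs(w.conj() @ steering(angle))**2

# Unit gain is preserved on the look direction; the interferer is nulled.
print(round(float(power(w, 10)), 3), bool(power(w, 30) < 1e-3))
```

The full-rank covariance estimate here uses 500 snapshots; the thesis's point is precisely that active sonar rarely affords that many, which motivates adapting in a reduced subarray beamspace.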

  14. Applying Agile Requirements Engineering Approach for Re-engineering & Changes in existing Brownfield Adaptive Systems

    Masood, Abdullah; Ali, M. Asim

    2014-01-01

    Requirements Engineering (RE) is a key activity in the development of software systems and is concerned with the identification of the goals of stakeholders and their elaboration into precise statements of desired services and behavior. This research describes an Agile Requirements Engineering approach for re-engineering and changes in existing Brownfield adaptive systems. The approach includes a few modifications that can be used as part of a SCRUM development process for re-engineering and changes. The ...

  15. Adaptively Reevaluated Bayesian Localization (ARBL): A novel technique for radiological source localization

    Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.; McCall, Jonathon D.; Prinke, Amanda M.; Webster, Jennifer B.; Seifert, Carolyn E.

    2015-06-01

    We present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. This technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement from earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
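The core Bayesian grid comparison can be sketched as follows, assuming an idealized isotropic detector and a bare inverse-square count-rate model; this is far simpler than the paper's arbitrarily complex detector models, terrain, and FOV-limited likelihood-ratio reevaluation, but the measured-vs-predicted Poisson likelihood structure is the same.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical straight flyover at 50 m altitude; one 1-s sample per position.
t = np.linspace(-200, 200, 81)           # detector x-position per sample (m)
alt, bkg, strength = 50.0, 20.0, 5e5     # altitude, background cps, source term

def expected_counts(src_x):
    """Predicted count rate: background plus an inverse-square source term."""
    r2 = (t - src_x)**2 + alt**2
    return bkg + strength / r2

# Simulated measurement from a source at x = 40 m.
measured = rng.poisson(expected_counts(40.0))

# Bayesian grid search: Poisson log-likelihood for each pre-calculated
# candidate source position (the constant log k! term is dropped).
candidates = np.arange(-150, 151, 5)
loglik = np.array([np.sum(measured * np.log(mu) - mu)
                   for mu in map(expected_counts, candidates)])
best = candidates[np.argmax(loglik)]
print(int(best))
```

A single pass over the path already localizes the source, which is the "single flyover" behavior the abstract highlights; the paper's speedup comes from restricting this evaluation to a limited FOV rather than the full field.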

  16. Adaptively Reevaluated Bayesian Localization (ARBL). A Novel Technique for Radiological Source Localization

    Miller, Erin A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Robinson, Sean M. [Pacific Northwest National Lab. (PNNL), Seattle, WA (United States); Anderson, Kevin K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McCall, Jonathon D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Prinke, Amanda M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Webster, Jennifer B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Seifert, Carolyn E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-01-19

    Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement from earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.

  18. Applying contact to individual silicon nanowires using a dielectrophoresis (DEP)-based technique

    One major challenge for the technological use of nanostructures is the control of their electrical and optoelectronic properties. For that purpose, extensive electrical characterization, and therefore a fast and reliable way of contacting these structures, is needed. Here, we report on a new dielectrophoresis (DEP)-based technique which makes it possible to apply reliable contacts to individual nanostructures, such as semiconducting nanowires (NWs), easily and without the need for lithography. The DEP contacting technique presented in this article can be performed without high-tech equipment and monitored in situ with an optical microscope. In the presented experiments, individual SiNWs are trapped and subsequently welded between two photolithographically pre-patterned electrodes by applying varying AC voltages to the electrodes. To prove the quality of these contacts, I–V curves, photoresponse and photoconductivity of a single SiNW were measured. Furthermore, the dependence of the measured photoconductivity on the wavelength of the illuminating light was compared with calculations predicting the absorption spectra of an individual SiNW.

  19. Applying contact to individual silicon nanowires using a dielectrophoresis (DEP)-based technique

    Leiterer, Christian, E-mail: christian.leiterer@gmail.com [Institute of Photonic Technology (Germany); Broenstrup, Gerald [Max-Planck-Institute for the Science of Light (Germany); Jahr, Norbert; Urban, Matthias; Arnold, Cornelia; Christiansen, Silke; Fritzsche, Wolfgang [Institute of Photonic Technology (Germany)

    2013-05-15

    One major challenge for the technological use of nanostructures is the control of their electrical and optoelectronic properties. For that purpose, extensive electrical characterization, and therefore a fast and reliable way of contacting these structures, is needed. Here, we report on a new dielectrophoresis (DEP)-based technique which makes it possible to apply reliable contacts to individual nanostructures, such as semiconducting nanowires (NWs), easily and without the need for lithography. The DEP contacting technique presented in this article can be performed without high-tech equipment and monitored in situ with an optical microscope. In the presented experiments, individual SiNWs are trapped and subsequently welded between two photolithographically pre-patterned electrodes by applying varying AC voltages to the electrodes. To prove the quality of these contacts, I-V curves, photoresponse and photoconductivity of a single SiNW were measured. Furthermore, the dependence of the measured photoconductivity on the wavelength of the illuminating light was compared with calculations predicting the absorption spectra of an individual SiNW.

  20. Adaptive Finite Element Modeling Techniques for the Poisson-Boltzmann Equation

    Holst, Michael; Yu, Zeyun; Zhou, Yongcheng; Zhu, Yunrong

    2010-01-01

    We develop an efficient and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the regularization technique of Chen, Holst, and Xu; this technique made possible the first a priori pointwise estimates and the first complete solution and approximation theory for the Poisson-Boltzmann equation. It also made possible the first provably convergent discretization of the PBE, and allowed for the development of a provably convergent AFEM for the PBE. However, in practice the regularization turns out to be numerically ill-conditioned. In this article, we examine a second regularization, and establish a number of basic results to ensure that the new approach produces the same mathematical advantages of the original regularization, without the ill-conditioning property. We then design an AFEM scheme based on the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This res...
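The solve-estimate-mark-refine loop of an AFEM can be illustrated on a far simpler problem than the regularized PBE. This sketch solves -u'' = f in 1-D with P1 elements and a crude element-residual indicator h²|f| (for P1 elements the interior residual reduces to f; edge-jump terms are omitted), so it shows the loop structure only, not the paper's convergent scheme.

```python
import numpy as np

# A sharply peaked right-hand side drives refinement towards x = 0.5.
f = lambda x: 1.0 / (0.01 + (x - 0.5)**2)

def solve(nodes):
    """P1 FEM for -u'' = f on (0,1) with u(0) = u(1) = 0, nonuniform mesh."""
    h = np.diff(nodes)
    n = len(nodes) - 2                    # interior unknowns
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = 1 / h[i] + 1 / h[i + 1]
        if i + 1 < n:
            A[i, i + 1] = A[i + 1, i] = -1 / h[i + 1]
        b[i] = f(nodes[i + 1]) * (h[i] + h[i + 1]) / 2   # lumped load
    u = np.zeros(len(nodes))
    u[1:-1] = np.linalg.solve(A, b)
    return u

nodes = np.linspace(0, 1, 5)
for _ in range(6):
    u = solve(nodes)                      # SOLVE (shown for completeness)
    h = np.diff(nodes)
    mids = (nodes[:-1] + nodes[1:]) / 2
    eta = h**2 * np.abs(f(mids))          # ESTIMATE: crude residual indicator
    refine = eta > 0.5 * eta.max()        # MARK the worst elements
    nodes = np.sort(np.concatenate([nodes, mids[refine]]))   # REFINE by bisection

# Refinement clusters where f is large: the smallest elements sit near x = 0.5.
h = np.diff(nodes)
print(len(nodes), round(float(nodes[np.argmin(h)]), 2))
```

The contraction result the abstract proves is exactly a guarantee that iterating this kind of loop reduces the error by a fixed factor per cycle.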

  1. Blind Adaptive Subcarrier Combining Technique for MC-CDMA Receiver in Mobile Rayleigh Channel

    Shakya, Indu; Stipidis, Elias

    2011-01-01

    A new subcarrier combining technique is proposed for the MC-CDMA receiver in a mobile Rayleigh fading channel. It exploits the structure formed by repeating the spreading sequences of users on different subcarriers to simultaneously suppress multiple access interference (MAI) and provide implicit channel tracking without any knowledge of the channel amplitudes or training sequences. This is achieved by adaptively weighting each subcarrier in each symbol period, employing a simple gradient descent algorithm to meet the constant modulus (CM) criterion with judicious selection of the step-size. Improved BER and user capacity performance are shown, at a complexity of order O(N) similar to that of conventional maximum ratio combining and equal gain combining techniques, even under high channel Doppler rates.
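The constant-modulus gradient-descent update can be sketched as follows for a hypothetical single-user MC-CDMA link (multi-user MAI and Doppler variation are not modeled, and the parameters are invented): each symbol, the combiner weight vector is nudged down the gradient of the CM cost (|y|² - 1)², which needs no training sequence or channel estimate.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical MC-CDMA link: N subcarriers, one user, per-subcarrier flat
# Rayleigh fading, unit-modulus QPSK symbols.
N, n_sym = 16, 4000
code = rng.choice([-1.0, 1.0], N)                       # spreading sequence
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n_sym)))

w = code.astype(complex) / N      # initial combiner: matched to the code only
mu = 0.01                         # step-size (the "judicious selection")
mod_err = []
for s in sym:
    x = h * code * s + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    y = w.conj() @ x              # combiner output for this symbol period
    e = np.abs(y)**2 - 1.0        # constant-modulus error
    w = w - mu * e * x * np.conj(y)   # stochastic gradient step on (|y|^2-1)^2
    mod_err.append(abs(np.abs(y) - 1.0))

# After convergence the combiner output sits near the unit circle.
print(round(float(np.mean(mod_err[-500:])), 3))
```

Each update costs O(N) multiplications, matching the complexity claim; the weights implicitly absorb the per-subcarrier channel, which is the "implicit channel tracking" in the abstract.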

  2. Interferometric Techniques Applied to the Gemona (Friuli, Italy) Area as a Tool for Structural Analysis.

    Sternai, P.; Calcagni, L.; Crippa, B.

    2009-04-01

    Interferometric Techniques Applied to the Gemona (Friuli) Area as a Tool for Structural Analysis. We suggest a possible exploitation of radar interferometry for estimating many features of the brittle deformation occurring at the very surface of the Earth, such as, for example, the length of the dislocation front, the total amount of the dislocation, and the dislocation rate over the time interval considered. Interferometric techniques allow obtaining highly reliable vertical velocity values of the order of 1 mm/yr, with a maximum resolution of 80 m2. The values obtained always refer to the temporal interval considered, which depends on the availability of SAR images. We demonstrate that it is possible to see the evolution and behaviour of the main tectonic lineaments of the considered area even over short periods of time (a few years). We describe the results of a procedure to calculate terrain motion velocity on highly correlated pixels of an area near Gemona (Friuli, northern Italy), and we then present some considerations, based on three successful examples of the analysis, on how to exploit these results in a structural-geological description of the area. The versatility of the technique, the large dimensions of the area that can be analyzed (10,000 km2), and the high precision and reliability of the results obtained make radar interferometry a powerful tool not only for monitoring the dislocation occurring at the surface, but also for obtaining important information on the structural evolution of mountain belts that is otherwise very difficult to recognize.

  3. Photothermal Techniques Applied to the Thermal Characterization of l-Cysteine Nanofluids

    Alvarado, E. Maldonado; Ramón-Gallegos, E.; Jiménez Pérez, J. L.; Cruz-Orea, A.; Hernández Rosas, J.

    2013-05-01

    Thermal-diffusivity (D) and thermal-effusivity (e) measurements were carried out in l-cysteine nanofluids (l-cysteine in combination with Au nanoparticles and protoporphyrin IX (PpIX)) by using thermal lens spectrometry (TLS) and photopyroelectric (PPE) techniques. The TLS technique was used in the mode-mismatched two-beam experimental configuration to obtain the thermal diffusivity of the samples. The sample thermal effusivity (e), on the other hand, was obtained by using the PPE technique, in which the temperature variation of a sample exposed to modulated radiation is measured with a pyroelectric sensor. From the obtained thermal-diffusivity and thermal-effusivity values, the thermal conductivity and specific heat capacity of the sample were calculated. The obtained thermal parameters were compared with those of water. The results of this study could be applied to the detection of tumors by using the l-cysteine in combination with Au nanoparticles and PpIX nanofluid, called the conjugate in this study.
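The step from the two measured photothermal quantities to the derived parameters uses the standard identities k = e·√D (thermal conductivity) and ρc = e/√D (volumetric heat capacity). The values below are illustrative, water-like numbers, not the paper's nanofluid data.

```python
import math

# Illustrative inputs (order of magnitude of water):
# thermal diffusivity D in m^2/s, thermal effusivity e in W*s^(1/2)/(m^2*K).
D = 1.43e-7
e = 1580.0

# Derived parameters: conductivity and volumetric heat capacity.
k = e * math.sqrt(D)          # W/(m*K)
rho_c = e / math.sqrt(D)      # J/(m^3*K); divide by density for specific heat

print(round(k, 3), f"{rho_c:.2e}")
```

With these inputs the results land near 0.6 W/(m·K) and 4.2e6 J/(m³·K), the textbook values for water, which is the comparison the abstract describes.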

  4. Micropillar compression technique applied to micron-scale mudstone elasto-plastic deformation.

    Michael, Joseph Richard; Chidsey, Thomas (Utah Geological Survey, Salt Lake City, UT); Heath, Jason E.; Dewers, Thomas A.; Boyce, Brad Lee; Buchheit, Thomas Edward

    2010-12-01

    Mudstone mechanical testing is often limited by poor core recovery and by sample size, preservation and preparation issues, which can lead to sampling bias, damage, and time-dependent effects. A micropillar compression technique, originally developed by Uchic et al. (2004), is here applied to elasto-plastic deformation of small volumes of mudstone, in the range of cubic microns. This study examines the behavior of the Gothic shale, the basal unit of the Ismay zone of the Pennsylvanian Paradox Formation and a potential shale gas play in southeastern Utah, USA. Micropillars 5 microns in diameter and 10 microns in length are precision-manufactured using an ion-milling method. Characterization of samples is carried out using dual focused ion/scanning electron beam imaging of nano-scale pores and of the distribution of matrix clay, quartz, and pore-filling organics; laser scanning confocal microscopy (LSCM) 3D imaging of natural fractures; and gas permeability, among other techniques. Compression testing of micropillars under load control is performed using two different nanoindenter techniques. Deformation of cores 0.5 cm in diameter by 1 cm in length is carried out and visualized with a microscope loading stage and laser scanning confocal microscopy. Axisymmetric multistage compression testing and multi-stress-path testing are carried out using 2.54 cm plugs. Discussion of the results addresses the size of representative elementary volumes applicable to continuum-scale mudstone deformation, anisotropy, and size-scale plasticity effects. Other issues include fabrication-induced damage, alignment, and the influence of the substrate.

  5. Innovative image processing techniques applied to the thermographic inspection of PFC with SATIR facility

    The components used in fusion devices, especially high heat flux Plasma Facing Components (PFC), have to withstand heat fluxes in the range of 10-20 MW/m2. They therefore require high reliability, which can only be guaranteed by accurate Non Destructive Examinations (NDE). The SATIR test bed operating at Commissariat a l'Energie Atomique (CEA) Cadarache performs NDE using transient infrared thermography sequences that compare the thermal response of a tested element to that of a reference element assumed to be defect-free. The control parameter is called DTrefmax. In this paper, we present two innovative image processing techniques for the SATIR signal that allow the qualification of a component without any reference element. The first method is based on spatial image autocorrelation and the second on the resolution of an Inverse Heat Conduction Problem (IHCP) using a BEM (Boundary Element Method) technique. After a validation step performed on numerical data, these two methods were applied to SATIR experimental data. The results show that these two techniques allow accurate defect detection without using a reference tile. They can be used in addition to DTrefmax for the qualification of plasma facing components.

  6. Applied research on air pollution using nuclear-related analytical techniques

    A co-ordinated research programme (CRP) on applied research on air pollution using nuclear-related techniques is a global CRP which will run from 1992-1996, and will build upon the experience gained by the Agency from the laboratory support that it has been providing for several years to BAPMoN - the Background Air Pollution Monitoring Network programme organized under the auspices of the World Meteorological Organization. The purpose of this CRP is to promote the use of nuclear analytical techniques in air pollution studies, e.g. NAA, XRF, and PIXE for the analysis of toxic and other trace elements in suspended particulate matter (including air filter samples), rainwater and fog-water samples, and in biological indicators of air pollution (e.g. lichens and mosses). The main purposes of the core programme are i) to support the use of nuclear and nuclear-related analytical techniques for practically-oriented research and monitoring studies on air pollution, ii) to identify major sources of air pollution affecting each of the participating countries with particular reference to toxic heavy metals, and iii) to obtain comparative data on pollution levels in areas of high pollution (e.g. a city centre or a populated area downwind of a large pollution source) and low pollution (e.g. rural areas). This document reports the discussions held during the first Research Co-ordination Meeting (RCM) for the CRP, which took place at IAEA Headquarters in Vienna. Refs, figs and tabs

  7. Adaptive technique for matching the spectral response in skin lesions' images

    Pavlova, P.; Borisova, E.; Pavlova, E.; Avramov, L.

    2015-03-01

    The suggested technique is a subsequent stage of data extraction from diffuse reflectance spectra and images of diseased tissue, with the final aim of skin cancer diagnostics. Our previous work allowed us to extract patterns for some types of skin cancer as a ratio between spectra obtained from healthy and diseased tissue in the 380-780 nm region. The authenticity of the patterns depends on the tested point within the area of the lesion, and the resulting diagnosis can also only be assigned with some probability. In this work, two adaptations are implemented to localize the pixels of the lesion image where the reflectance spectrum corresponds to a pattern. The first adapts the standard to the individual patient, and the second translates the white-point basis of the spectrum to the relative white point of the image. Since the reflectance spectra and the image pixels refer to different white points, a correction of the compared colours is needed; the latter is done using a standard method for chromatic adaptation. The technique follows the steps below: calculation of the colorimetric XYZ parameters for the initial white point, fixed by the reflectance spectrum of healthy tissue; calculation of the XYZ parameters for the distant white point on the basis of an image of non-diseased tissue; transformation of the XYZ parameters of the test spectrum by the obtained matrix; and determination of the RGB values of the XYZ parameters of the test spectrum according to sRGB. Finally, the pixels of the lesion image corresponding to a colour from the test spectrum and a particular diagnostic pattern are marked with a specific colour.
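The chromatic-adaptation step can be sketched with the Bradford transform; the abstract does not name which standard method was used, so this is one plausible choice, and both white points below are hypothetical (a warm, illuminant-A-like spectrum white versus a D65 image white).

```python
import numpy as np

# Bradford chromatic adaptation: map XYZ measured under a source white
# point to a destination white point, then convert to linear sRGB.
M_bradford = np.array([[ 0.8951,  0.2664, -0.1614],
                       [-0.7502,  1.7135,  0.0367],
                       [ 0.0389, -0.0685,  1.0296]])
M_xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                          [-0.9689,  1.8758,  0.0415],
                          [ 0.0557, -0.2040,  1.0570]])

def adapt(xyz, white_src, white_dst):
    """Scale cone-like responses by the destination/source white-point ratio."""
    rho_src = M_bradford @ white_src
    rho_dst = M_bradford @ white_dst
    M = np.linalg.inv(M_bradford) @ np.diag(rho_dst / rho_src) @ M_bradford
    return M @ xyz

# Hypothetical white points: spectrum-derived (warm) vs. image-derived (D65).
white_spectrum = np.array([1.0985, 1.0000, 0.3558])   # ~ illuminant A
white_image = np.array([0.9505, 1.0000, 1.0890])      # D65

xyz_test = np.array([0.40, 0.35, 0.15])               # a test-spectrum colour
xyz_adapted = adapt(xyz_test, white_spectrum, white_image)
rgb_linear = M_xyz_to_srgb @ xyz_adapted              # gamma encoding omitted
print(np.round(rgb_linear, 3))
```

By construction the source white maps exactly onto the destination white, which is the property that lets spectrum-derived colours be compared against image pixels on a common basis.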

  8. Adaptive clutter rejection filters for airborne Doppler weather radar applied to the detection of low altitude windshear

    Keel, Byron M.

    1989-01-01

    An optimum adaptive clutter rejection filter for use with airborne Doppler weather radar is presented. The radar system is being designed to operate at low altitudes for the detection of windshear in an airport terminal area, where ground clutter returns may mask the weather return. The coefficients of the adaptive clutter rejection filter are obtained using a complex form of a square-root normalized recursive least squares lattice estimation algorithm, which models the clutter return data as an autoregressive process. The normalized lattice structure implementation of the adaptive modeling process for determining the filter coefficients assures that the resulting coefficients will yield a stable filter and offers the possibility of fixed-point implementation. A 10th-order FIR clutter rejection filter indexed by geographical location is designed through autoregressive modeling of simulated clutter data. Filtered data, containing simulated dry microburst and clutter returns, are analyzed using pulse-pair estimation techniques. To measure the ability of the clutter rejection filters to remove the clutter, results are compared to pulse-pair estimates of windspeed within a simulated dry microburst without clutter. In the filter evaluation process, post-filtered pulse-pair width estimates and power levels are also used to measure the effectiveness of the filters. The results support the use of an adaptive clutter rejection filter for reducing the clutter-induced bias in pulse-pair estimates of windspeed.
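The pulse-pair estimator used to evaluate the filters can be sketched as follows: the mean radial velocity comes from the phase of the lag-1 autocorrelation of the I/Q time series, v = (λ / 4πT)·arg R(T). The radar parameters and the clutter-free simulated return below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical airborne radar: 3.2 cm wavelength, 1 ms pulse repetition time.
lam, prt = 0.032, 1e-3
v_true = 6.0                               # radial windspeed, m/s

# Simulated clutter-free I/Q return: Doppler-shifted tone plus receiver noise.
n = 128
phase = 4 * np.pi * v_true * prt / lam     # pulse-to-pulse phase step
z = np.exp(1j * phase * np.arange(n)) + 0.1 * (
    rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Pulse-pair estimate: lag-1 autocorrelation phase gives the mean velocity.
r1 = np.mean(np.conj(z[:-1]) * z[1:])
v_hat = lam / (4 * np.pi * prt) * np.angle(r1)
print(round(float(v_hat), 2))
```

Residual ground clutter concentrates power near zero Doppler, dragging arg R(T) towards zero; that is the clutter-induced bias the adaptive filter is meant to remove before this estimator is applied.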

  9. Array model interpolation and subband iterative adaptive filters applied to beamforming-based acoustic echo cancellation.

    Bai, Mingsian R; Chi, Li-Wen; Liang, Li-Huang; Lo, Yi-Yang

    2016-02-01

    In this paper, an evolutionary exposition is given of strategies for enhancing acoustic echo cancellers (AECs). A fixed beamformer (FBF) is utilized to focus on the near-end speaker while suppressing the echo from the far end. In reality, the array steering vector can differ considerably from the ideal free-field plane-wave model. Therefore, an experimental procedure is developed to interpolate a practical array model from the measured frequency responses. Subband (SB) filtering with polyphase implementation is exploited to accelerate the cancellation process. A generalized sidelobe canceller (GSC), composed of an FBF and an adaptive blocking module, is combined with the AEC to maximize cancellation performance. Another enhancement is an internal iteration (IIT) procedure that enables efficient convergence of the adaptive SB filters within a sample time. Objective tests in terms of echo return loss enhancement (ERLE), perceptual evaluation of speech quality (PESQ), word recognition rate for automatic speech recognition (ASR), and subjective listening tests are conducted to validate the proposed AEC approaches. The results show that the GSC-SB-AEC-IIT approach attains the highest ERLE without speech quality degradation, even in double-talk scenarios. PMID:26936567

  10. Possibilities of joint application of adaptive optics technique and nonlinear optical phase conjugation to compensate for turbulent distortions

    Lukin, V. P.; Kanev, F. Yu; Kulagin, O. V.

    2016-05-01

    The efficiency of integrating the nonlinear optical technique based on forming a reverse wavefront and the conventional adaptive optics into a unified complex (for example, for adaptive focusing of quasi-cw laser radiation) is demonstrated. Nonlinear optical phase conjugation may provide more exact information about the phase fluctuations in the corrected wavefront in comparison with the adaptive optics methods. At the same time, the conventional methods of adaptive optics provide an efficient control of a laser beam projected onto a target for a rather long time.

  11. Data smoothing techniques applied to proton microprobe scans of teleost hard parts

    We use a proton microprobe to examine the distribution of elements in otoliths and scales of teleost (bony) fish. The elements of principal interest are calcium and strontium in otoliths and calcium and fluorine in scales. Changes in the distribution of these elements across hard structures may allow inferences about the life histories of fish. Otoliths and scales of interest are up to a centimeter in linear dimension, and revealing the structures of interest requires up to 200 sampling points in each dimension. The time needed to accumulate high X-ray counts at each sampling point can be large, particularly for strontium. To reduce microprobe usage we use data smoothing techniques to reveal changing patterns with modest X-ray count accumulations at individual data points. In this paper we review the performance, for revealing pattern at modest levels of X-ray count accumulation, of a selection of digital filters (moving-average smoothers), running median filters, robust locally weighted regression filters and adaptive spline filters. (author)
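Two of the reviewed filter families can be contrasted in a few lines. The scan values below are invented; the single spike stands in for the kind of outlier a noisy low-count transect can produce, and illustrates why the robust (median) filters are reviewed alongside the linear ones.

```python
import statistics

# Invented "line scan": a smooth trend with one outlier spike, mimicking a
# noisy low-count proton microprobe transect across an otolith.
raw = [10, 11, 12, 11, 13, 95, 14, 15, 14, 16, 17, 16]

def moving_average(data, width=3):
    """Linear smoother: mean over a centered window (truncated at the edges)."""
    half = width // 2
    return [statistics.mean(data[max(0, i - half):i + half + 1])
            for i in range(len(data))]

def running_median(data, width=3):
    """Robust smoother: median over the same centered window."""
    half = width // 2
    return [statistics.median(data[max(0, i - half):i + half + 1])
            for i in range(len(data))]

# The median filter rejects the spike; the moving average smears it.
print(round(moving_average(raw)[5], 1), running_median(raw)[5])
```

At the spike position the 3-point mean is pulled far above the local trend while the 3-point median stays on it, which is the trade-off that matters when individual points carry few X-ray counts.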

  12. ''Cloud in Cell'' technique applied to the roll up of vortex sheets

    The problem of the roll-up of a two-dimensional vortex sheet generated by a wing in an ideal fluid is phrased in terms of the streamfunction and the vortex sheet strength. A numerical method is used to calculate the time evolution of the vortex sheet by adapting the ''Cloud in Cell'' technique introduced for many-particle simulations in plasma physics (see J. P. Christiansen, J. Computational Physics 13 (1973)). Two cases are considered for the initial distribution of circulation, one corresponding to an elliptically loaded wing and the other simulating a wing with a flap deployed. Results indicate that small-scale behaviour plays an important part in the roll-up. Typically, small-scale perturbations result in small structures which evolve into ever larger structures by vortex amalgamation. Conclusions are drawn from a number of tests exploring the validity of the method. Briefly, small-scale perturbations are introduced artificially by the grid; but once the process of vortex amalgamation is well underway, the emerging large-scale behaviour is relatively insensitive to the precise details of the initial perturbations. Since clearly defined structures result from the application of this method, it promises to aid considerably in understanding the behaviour of vortex wakes.
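The deposition step at the heart of the Cloud in Cell technique assigns each point vortex's circulation to the four surrounding grid nodes with bilinear (area-weighting) fractions; the streamfunction is then solved on that grid. A minimal sketch of the deposition alone, with one vortex and an invented grid:

```python
import numpy as np

def deposit(x, y, gamma, nx=8, ny=8, h=1.0):
    """Cloud-in-Cell deposition: spread each circulation gamma onto the
    four grid nodes around it with bilinear weights that sum to 1."""
    grid = np.zeros((nx, ny))
    for xi, yi, g in zip(x, y, gamma):
        i, j = int(xi // h), int(yi // h)        # lower-left node of the cell
        fx, fy = xi / h - i, yi / h - j          # fractional position in cell
        grid[i, j]         += g * (1 - fx) * (1 - fy)
        grid[i + 1, j]     += g * fx * (1 - fy)
        grid[i, j + 1]     += g * (1 - fx) * fy
        grid[i + 1, j + 1] += g * fx * fy
    return grid

# One vortex of unit circulation placed a quarter-cell inside cell (3, 4).
grid = deposit([3.25], [4.25], [1.0])
print(grid[3, 4], grid[4, 4], grid[3, 5], grid[4, 5])
```

Because the weights sum to one, total circulation is conserved exactly; the finite cell size is also the source of the artificial small-scale perturbations the abstract discusses.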

  13. A Study of Advanced Modern Control Techniques Applied to a Twin Rotor MIMO System

    Phillips, Andrew E.

    The twin rotor MIMO system (TRMS) is a helicopter-like system that is restricted to two degrees of freedom, pitch and yaw. It is a complicated nonlinear, coupled, MIMO system used for the verification of control methods and observers. Many methods have been successfully applied to the system, ranging from simple proportional-integral-derivative (PID) controllers to machine learning algorithms, nonlinear control methods and other less explored methods like deadbeat control and various optimal methodologies. This thesis details the design procedure for two different control methods. The first is a suboptimal tracking controller using a linear quadratic regulator (LQR) with integral action. The second is the design of several adaptive sliding mode controllers to provide robust tracking control of the TRMS. Once the design is complete, the controllers are tested in simulation and their performance is compared experimentally against a PID controller. The performance of the controllers is also compared against other controllers in the literature. The ability of the sliding mode controllers (SMC) to suppress chattering is also explored.
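The LQR core of the first controller can be sketched on a toy double integrator standing in for one linearized TRMS axis (this is not the TRMS model, and the integral-action augmentation is omitted): the steady-state gain comes from iterating the discrete-time Riccati recursion backwards.

```python
import numpy as np

# Toy discrete-time double integrator (position, velocity), dt = 0.05 s.
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])      # state weights: penalize position error most
R = np.array([[0.1]])         # control effort weight

# Backward Riccati recursion to the steady-state LQR gain K.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K

# Closed-loop simulation from an initial offset: the state decays to zero.
x = np.array([[1.0], [0.0]])
for _ in range(400):
    x = (A - B @ K) @ x
print(float(abs(x[0, 0])) < 1e-3)
```

Tracking with integral action would augment the state with the integral of the output error before running the same recursion; the Q/R weights above are invented, and tuning them is exactly the suboptimal-design trade-off the thesis explores.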

  14. Strategy for applying scaling technique to water retention curves of forest soils

    Hayashi, Y.; Kosugi, K.; Mizuyama, T.

    2009-12-01

    Describing the infiltration of water in soils on a forested hillslope requires information on the spatial variability of the water retention curve (WRC). By using a scaling technique, Hayashi et al. (2009) found that porosity mostly characterizes the spatial variability of the WRCs on a forested hillslope. This scaling technique was based on a model which assumes a lognormal pore size distribution and contains three parameters: the median of log-transformed pore radius, ψm, the variance of log-transformed pore radius, σ, and the effective porosity, θe. Thus, in the scaling method proposed by Hayashi et al. (2009), θe is a scaling factor, which should be determined for each individual soil, while ψm and σ are reference parameters common to the whole data set. They examined this scaling method using θe calculated as the difference between the observed saturated water content and the water content observed at ψ = -1000 cm for each sample, with ψm and σ derived from the whole data set of WRCs on the slope. It was then shown that this scaling method could explain almost 90 % of the spatial variability in WRCs on the forested hillslope. However, this method requires the whole data set of WRCs for deriving the reference parameters (ψm and σ). To apply the scaling technique more practically, in this study we tested a scaling method using reference parameters derived from the WRCs at a small part of the slope. In order to examine the proposed scaling method, the WRCs for 246 undisturbed forest soil samples, collected at 15 points distributed from downslope to upslope segments, were observed. In the proposed scaling method, we optimized the common ψm and σ to the WRCs for six soil samples, collected at one point on the middle slope, and applied these parameters as reference parameters for the whole data set. The scaling method proposed by this study exhibited an increase of only 6 % in the residual sum of squares as compared with that of the method
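
    The lognormal pore-size model behind this scaling can be sketched as a Kosugi-type retention function, written here for suction head h = |ψ|, with θe acting as the per-sample scaling factor while ψm and σ are shared. The functional form and parameter values below are assumptions for illustration, not the study's fitted values:

```python
import math

def lognormal_theta(h, hm, sigma, theta_r, theta_e):
    """Lognormal (Kosugi-type) water retention curve: effective saturation
    Se = 0.5 * erfc(ln(h / hm) / (sigma * sqrt(2))) for suction head h > 0,
    scaled by the effective porosity theta_e above a residual theta_r."""
    se = 0.5 * math.erfc(math.log(h / hm) / (sigma * math.sqrt(2.0)))
    return theta_r + theta_e * se
```

    At h = hm the effective saturation is exactly 0.5, and water content decreases monotonically with increasing suction, as a retention curve must.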

  15. ADAPTIVE HARMONIC CANCELLATION APPLIED IN ELECTRO-HYDRAULIC SERVO SYSTEM WITH ANN

    Yao Jianjun; Wu Zhenshun; Han Junwei; Yue Donghai

    2004-01-01

    A method for harmonic cancellation based on an artificial neural network (ANN) is proposed. The task is accomplished by generating a reference signal with the frequency that should be eliminated from the output. The reference input is weighted by the ANN in such a way that it closely matches the harmonic. The weighted reference signal is added to the fundamental signal such that the output harmonic is cancelled, leaving the desired signal alone. The weights of the ANN are adjusted by the output harmonic, which is isolated by a bandpass filter. The above concept is used as the basis for the development of an adaptive harmonic cancellation (AHC) algorithm. Simulation results performed with a hydraulic system demonstrate the efficiency and validity of the proposed AHC control scheme.
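
    The weighted-reference idea can be illustrated with a classical two-weight LMS canceller, in which sine and cosine references at the harmonic frequency stand in for the record's ANN and the weights adapt on the residual. A sketch under those assumptions, not the paper's algorithm:

```python
import math

def cancel_harmonic(signal, f, fs, mu=0.01):
    """Two-weight LMS canceller: weight sin/cos references at frequency f
    so their sum matches the harmonic in `signal`, then subtract it."""
    w_s = w_c = 0.0
    out = []
    for n, x in enumerate(signal):
        rs = math.sin(2 * math.pi * f * n / fs)
        rc = math.cos(2 * math.pi * f * n / fs)
        e = x - (w_s * rs + w_c * rc)   # residual after subtracting estimate
        w_s += 2 * mu * e * rs          # LMS weight updates driven by residual
        w_c += 2 * mu * e * rc
        out.append(e)
    return out

# pure 50 Hz harmonic sampled at 1 kHz: the canceller should drive it to ~0
fs, f = 1000.0, 50.0
sig = [0.8 * math.sin(2 * math.pi * f * n / fs + 0.3) for n in range(4000)]
resid = cancel_harmonic(sig, f, fs)
```

    Because any amplitude and phase at frequency f is an exact linear combination of the two references, the residual converges towards zero; in the paper the residual is instead isolated by a bandpass filter before driving the adaptation.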

  16. Optimizing Power and Buffer Congestion on Wireless Sensor Nodes Using CAP (Coordinated Adaptive Power Management Technique)

    Gauri Joshi

    2011-05-01

    Full Text Available Limited hardware capabilities and a very limited battery power supply are the two main constraints that arise because of the small size and low cost of wireless sensor nodes. Power optimization is highly desired at all levels in order to have a long-lived Wireless Sensor Network (WSN). Prolonging the life span of the network is the prime focus in highly energy-constrained wireless sensor networks. Only a sufficient number of active nodes can ensure proper coverage of the sensing field and connectivity of the network. If a large number of wireless sensor nodes get their batteries depleted over a short time span, then it is not possible to maintain the network. In order to have a long-lived network it is mandatory to have long-lived sensor nodes, and hence power optimization at the node level becomes as important as power optimization at the network level. In this paper the need for a dynamically adaptive sensor node is signified in order to optimize power at individual nodes along with a reduction in data loss due to buffer congestion. We have analyzed a sensor node with fixed service rates (processing rate and transmission rate) and a sensor node with variable service rates for its power consumption and data loss in small-sized buffers under varying traffic (workload) conditions. For a variable processing rate the Dynamic Voltage Frequency Scaling (DVFS) technique is considered, and for a variable transmission rate the Dynamic Modulation Scaling (DMS) technique is considered. Comparing the results of a dynamically adaptive sensor node with those of a fixed service rate sensor node shows improvement in the lifetime of the node as well as a reduction in data loss due to buffer congestion. Further, we have tried to coordinate the service rates of the computation unit and the communication unit on a sensor node, which gives rise to Coordinated Adaptive Power (CAP) management. The main objective of CAP management is to save power during normal periods and reduce data loss due to buffer congestion (overflow).
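
    The fixed-versus-adaptive service-rate comparison can be illustrated with a toy slot-based buffer model that charges cubic (DVFS-style) power in the chosen rate. The capacities, rates and burst pattern are invented for illustration, not the paper's workload:

```python
def simulate(arrivals, capacity, rate_fn):
    """Slot-by-slot buffer: enqueue arrivals, drop overflow, serve
    rate_fn(queue) packets, and charge cubic power in the chosen rate
    (dynamic CMOS power scales roughly with frequency cubed)."""
    q = drops = 0
    energy = 0.0
    for a in arrivals:
        q += a
        if q > capacity:                # buffer congestion: overflow drops
            drops += q - capacity
            q = capacity
        r = rate_fn(q)                  # service rate chosen for this slot
        q -= min(q, r)
        energy += r ** 3                # DVFS-style energy cost of the rate
    return drops, energy

bursty = ([4] * 5 + [0] * 5) * 10       # bursty load, 2 packets/slot average
fixed_drops, fixed_energy = simulate(bursty, 5, lambda q: 2)        # slow, cheap
high_drops, high_energy = simulate(bursty, 5, lambda q: 4)          # fast, costly
adaptive_drops, adaptive_energy = simulate(bursty, 5, lambda q: min(4, q))
```

    Under this burst pattern the low fixed rate drops packets, the high fixed rate burns energy in every slot, and the adaptive rate avoids both, which is the trade-off that coordinated adaptive power management targets.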

  17. Solar coronal magnetic fields derived using seismology techniques applied to omnipresent sunspot waves

    Jess, D B; Ryans, R S I; Christian, D J; Keys, P H; Mathioudakis, M; Mackay, D H; Prasad, S Krishna; Banerjee, D; Grant, S D T; Yau, S; Diamond, C

    2016-01-01

    Sunspots on the surface of the Sun are the observational signatures of intense manifestations of tightly packed magnetic field lines, with near-vertical field strengths exceeding 6,000 G in extreme cases. It is well accepted that both the plasma density and the magnitude of the magnetic field strength decrease rapidly away from the solar surface, making high-cadence coronal measurements through traditional Zeeman and Hanle effects difficult since the observational signatures are fraught with low-amplitude signals that can become swamped with instrumental noise. Magneto-hydrodynamic (MHD) techniques have previously been applied to coronal structures, with single and spatially isolated magnetic field strengths estimated as 9-55 G. A drawback with previous MHD approaches is that they rely on particular wave modes alongside the detectability of harmonic overtones. Here we show, for the first time, how omnipresent magneto-acoustic waves, originating from within the underlying sunspot and propagating radially outwa...

  18. A comparative assessment of texture analysis techniques applied to bone tool use-wear

    Watson, Adam S.; Gleason, Matthew A.

    2016-06-01

    The study of bone tools, a specific class of artifacts often essential to perishable craft production, provides insight into industries otherwise largely invisible archaeologically. Building on recent breakthroughs in the analysis of microwear, this research applies confocal laser scanning microscopy and texture analysis techniques drawn from the field of surface metrology to identify use-wear patterns on experimental and archaeological bone artifacts. Our approach utilizes both conventional parameters and multi-scale geometric characterizations of the areas of worn surfaces to identify statistical similarities as a function of scale. The introduction of this quantitative approach to the study of microtopography holds significant potential for advancement in use-wear studies by reducing inter-observer variability and identifying new parameters useful in the detection of differential wear-patterns.

  19. A systematic review of applying modern software engineering techniques to developing robotic systems

    Claudia Pons

    2012-04-01

    Full Text Available Robots have become collaborators in our daily life. While robotic systems become more and more complex, the need to engineer their software development grows as well. The traditional approaches used in developing these software systems are reaching their limits; currently used methodologies and tools fall short of addressing the needs of such complex software development. Separating robotics knowledge from short-cycled implementation technologies is essential to foster reuse and maintenance. This paper presents a systematic literature review (SLR) of the current use of modern software engineering techniques for developing robotic software systems and their actual automation level. The survey was aimed at summarizing existing evidence concerning the application of such technologies to the field of robotic systems, in order to identify gaps in current research, suggest areas for further investigation, and provide a background for positioning new research activities.

  20. Technique of uranium exploration in tropical rain forests as applied in Sumatra and other tropical areas

    The technique of uranium prospecting in areas covered by tropical rain forest is discussed using a uranium exploration campaign conducted from 1976 to 1978 in Western Sumatra as an example. A regional reconnaissance survey using stream sediment samples combined with radiometric field measurements proved ideal for covering very large areas. A mobile field laboratory was used for the geochemical survey. Helicopter support in difficult terrain was found to be very efficient and economical. A field procedure for detecting low uranium concentrations in stream water samples is described. This method has been successfully applied in Sarawak. To distinguish meaningful uranium anomalies in water from those with no meaning for prospecting, the correlations between U content and conductivity of the water and between U content and Ca and HCO3 content must be considered. This method has been used successfully in a geochemical survey in Thailand. (author)

  1. Applying Multi-Criteria Decision-Making Techniques to Prioritize Agility Drivers

    Ahmad Jafarnejad

    2013-07-01

    Full Text Available To preserve survival and success in today's environment, it seems essential to recognize and classify the factors affecting organizational agility and to specify how important each is to the organization. This paper reviews the concept of agility and its constituent indicators, covering the drivers of organizational agility, which have been ranked in terms of importance and influence by multi-criteria decision-making (MCDM) techniques. Internal complexity, suppliers, competition, customer needs, the market, technology and social factors are the most important factors affecting organizational agility; evaluating and applying these indicators, re-engineering processes, reviewing and predicting customer needs, and better understanding the competitive environment and the supply chain ultimately determine organizational agility and success.
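
    One common MCDM ranking technique suited to such a prioritization is TOPSIS; the abstract does not say which specific methods were applied, so the sketch below, with made-up driver scores, is purely illustrative:

```python
import math

def topsis(matrix, weights):
    """Rank alternatives by relative closeness to the ideal solution
    (TOPSIS), assuming all criteria are benefit criteria."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) for j in range(n)]   # best per criterion
    nadir = [min(v[i][j] for i in range(m)) for j in range(n)]   # worst per criterion
    scores = []
    for row in v:
        d_plus = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, ideal)))
        d_minus = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, nadir)))
        scores.append(d_minus / (d_plus + d_minus))
    return scores

# hypothetical scores for three agility drivers against two criteria
drivers = [[7, 8], [5, 4], [9, 9]]   # the third driver dominates both criteria
scores = topsis(drivers, [0.5, 0.5])
```

    A driver that dominates on every criterion coincides with the ideal solution and scores exactly 1, while one that is worst on every criterion scores 0.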

  2. How Can Synchrotron Radiation Techniques Be Applied for Detecting Microstructures in Amorphous Alloys?

    Gu-Qing Guo

    2015-11-01

    Full Text Available In this work, how synchrotron radiation techniques can be applied for detecting the microstructure of metallic glass (MG) is studied. Unit cells are the basic structural units in crystals, whereas it has been suggested that the co-existence of various clusters may be the universal structural feature in MG. It is therefore a challenge to detect the microstructure of MG, even at the short-range scale, by directly using synchrotron radiation techniques such as X-ray diffraction and X-ray absorption methods. Here, a feasible scheme is developed in which state-of-the-art synchrotron radiation-based experiments are combined with simulations to investigate the microstructure of MG. By studying a typical MG composition (Zr70Pd30), it is found that various clusters do co-exist in its microstructure, and icosahedral-like clusters are the dominant structural units. This is the structural origin of the precipitation of an icosahedral quasicrystalline phase prior to the glass-to-crystal transformation when heating Zr70Pd30 MG.

  3. Creep lifing methodologies applied to a single crystal superalloy by use of small scale test techniques

    In recent years, advances in creep data interpretation have been achieved either by modified Monkman–Grant relationships or through the more contemporary Wilshire equations, which offer the opportunity of predicting long term behaviour extrapolated from short term results. Long term lifing techniques prove extremely useful in creep dominated applications, such as in the power generation industry and in particular nuclear, where large static loads are applied; equally, a reduction in lead time for new alloy implementation within the industry is critical. The latter requirement brings about the utilisation of the small punch (SP) creep test, a widely recognised approach for obtaining useful mechanical property information from limited material volumes, as is typically the case with novel alloy development and for any in-situ mechanical testing that may be required. The ability to correlate SP creep results with uniaxial data is vital when considering the benefits of the technique. As such, an equation has been developed, known as the kSP method, which has been proven to be an effective tool across several material systems. The current work now explores the application of the aforementioned empirical approaches to correlate small punch creep data obtained on a single crystal superalloy over a range of elevated temperatures. Finite element modelling through ABAQUS software based on the uniaxial creep data has also been implemented to characterise the SP deformation and help corroborate the experimental results.

  4. Creep lifing methodologies applied to a single crystal superalloy by use of small scale test techniques

    Jeffs, S.P., E-mail: s.p.jeffs@swansea.ac.uk [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Lancaster, R.J. [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Garcia, T.E. [IUTA (University Institute of Industrial Technology of Asturias), University of Oviedo, Edificio Departamental Oeste 7.1.17, Campus Universitario, 33203 Gijón (Spain)

    2015-06-11

    In recent years, advances in creep data interpretation have been achieved either by modified Monkman–Grant relationships or through the more contemporary Wilshire equations, which offer the opportunity of predicting long term behaviour extrapolated from short term results. Long term lifing techniques prove extremely useful in creep dominated applications, such as in the power generation industry and in particular nuclear, where large static loads are applied; equally, a reduction in lead time for new alloy implementation within the industry is critical. The latter requirement brings about the utilisation of the small punch (SP) creep test, a widely recognised approach for obtaining useful mechanical property information from limited material volumes, as is typically the case with novel alloy development and for any in-situ mechanical testing that may be required. The ability to correlate SP creep results with uniaxial data is vital when considering the benefits of the technique. As such, an equation has been developed, known as the kSP method, which has been proven to be an effective tool across several material systems. The current work now explores the application of the aforementioned empirical approaches to correlate small punch creep data obtained on a single crystal superalloy over a range of elevated temperatures. Finite element modelling through ABAQUS software based on the uniaxial creep data has also been implemented to characterise the SP deformation and help corroborate the experimental results.
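
    The Monkman–Grant relationship mentioned above links rupture time to minimum creep rate as t_r x (minimum creep rate)^m = C, so its constants can be fitted by least squares in log space. A minimal sketch with invented data, not the superalloy results:

```python
import math

def fit_monkman_grant(min_creep_rates, rupture_times):
    """Least-squares fit of log t_r = log C - m * log(rate), returning the
    Monkman-Grant exponent m and constant C."""
    xs = [math.log(r) for r in min_creep_rates]
    ys = [math.log(t) for t in rupture_times]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    m = -slope
    C = math.exp(ybar + m * xbar)
    return m, C

# synthetic data obeying t_r = C / rate^m exactly, with m = 0.9, C = 0.05
rates = [1e-8, 1e-7, 1e-6]
times = [0.05 / r ** 0.9 for r in rates]
m, C = fit_monkman_grant(rates, times)
```

    With data generated exactly from the relation, the fit recovers the chosen constants; real creep data would scatter about the fitted line instead.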

  5. The Effect of Applying Critical Thinking Techniques on Students’ Attitudes towards Literature

    Mansoor Fahim

    2013-01-01

    Full Text Available This study investigated the effect of implicit teaching of critical thinking and its practice on the attitude the participants hold towards the subject matter being taught. To observe the practicality of critical thinking in altering students' attitudes, 25 Iranian EFL college students (16 girls and 9 boys) were selected as the participants of this study, and the application of critical thinking techniques was operationalized during their English Literature course. A 20-item questionnaire was devised in order to measure the participants' attitudes towards literature prior to the beginning of the intervention, and the same questionnaire was used after the completion of the experiment in order to examine probable differences in their attitudes towards the taught subject. Throughout the course, techniques promoted by critical thinking advocates, including identifying arguments, detecting evidence in their support, reasoning for held stands, and forming analyses, were applied for 12 sessions. Statistical calculation of a paired samples t-test after the treatment indicated a significant increase in the participants' positive attitudes towards literature. The findings of this study are believed to be useful in encouraging the inclusion of critical pedagogies in academic systems with the goal of creating interest in students towards the subject matter. Keywords: critical thinking, critical pedagogy, English literature, group discussion

  6. Isotope and tracer techniques applied to groundwater investigations in the municipality of Araguari, MG, Brazil

    During the years 2004-2005 an investigation was carried out by CDTN and UFMG in the western border of the state of Minas Gerais, Brazil, aimed at assessing the water resources related to the Guarani Aquifer System in the region of Araguari. The project was supported by the Fund of Universities (BNPPW/OAS) and other donors and was designed to cover a whole hydrological year. The main water supply for domestic, industrial and agricultural consumption derives from groundwater extraction, and the hydrologic system is under permanent stress. Among other classical tools, isotopic techniques were used to study groundwater origin, recharge processes, transit times and infiltration rates. For this purpose, 51 water samples were analyzed for their stable (deuterium and oxygen-18) and radioactive (tritium) environmental isotopic composition. Tracer techniques applying artificial tritium were used to study infiltration rates in selected areas. The overall results show that local waters fit the GMWL fairly well, with a shift in the deuterium excess due to some previous evaporation; also, the groundwater is locally recharged. The exponential model was used to estimate water age, and the results show that most of the water samples (84%) contain young waters with a renewal time of up to 30 years. Infiltration rates are high due to local conditions (high pluviosity, plain relief and high permeability of surface soil), and results seem to be in good agreement with figures assessed by means of classical water balance methods. (author)

  7. An acceleration technique for the Gauss-Seidel method applied to symmetric linear systems

    Jesús Cajigas

    2014-06-01

    Full Text Available A preconditioning technique to improve the convergence of the Gauss-Seidel method applied to symmetric linear systems while preserving symmetry is proposed. The preconditioner is of the form I + K and can be applied an arbitrary number of times. It is shown that under certain conditions the application of the preconditioner for a finite number of steps reduces the matrix to a diagonal. A series of numerical experiments using matrices from spatial discretizations of partial differential equations demonstrates that both versions of the preconditioner, the point and the block version, exhibit lower iteration counts than their non-symmetric counterparts.
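
    For reference, the baseline Gauss-Seidel iteration on a symmetric system looks as follows; this sketch does not reproduce the paper's I + K preconditioner, only the method it accelerates:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iters=100):
    """Plain Gauss-Seidel sweeps for Ax = b: each unknown is updated in
    place using the latest values of the others."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# symmetric positive definite test system (1-D Laplacian-like)
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
```

    Gauss-Seidel is guaranteed to converge for symmetric positive definite matrices; the paper's contribution is making a convergence-accelerating preconditioner that keeps the system symmetric.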

  8. Personnel contamination protection techniques applied during the TMI-2 [Three Mile Island Unit 2] cleanup

    The severe damage to the Three Mile Island Unit 2 (TMI-2) core and the subsequent discharge of reactor coolant to the reactor and auxiliary buildings resulted in extremely hostile radiological environments in the TMI-2 plant. High fission product surface contamination and radiation levels necessitated the implementation of innovative techniques and methods in performing cleanup operations while assuring effective as low as reasonably achievable (ALARA) practices. The approach utilized by GPU Nuclear throughout the cleanup in applying protective clothing requirements was to consider the overall health risk to the worker including factors such as cardiopulmonary stress, visual and hearing acuity, and heat stress. In applying protective clothing requirements, trade-off considerations had to be made between preventing skin contaminations and possibly overprotecting the worker, thus impacting his ability to perform his intended task at maximum efficiency and in accordance with ALARA principles. The paper discusses the following topics: protective clothing-general use, beta protection, skin contamination, training, personnel access facility, and heat stress

  9. MULTIVARIATE TECHNIQUES APPLIED TO EVALUATION OF LIGNOCELLULOSIC RESIDUES FOR BIOENERGY PRODUCTION

    Thiago de Paula Protásio

    2013-12-01

    Full Text Available http://dx.doi.org/10.5902/1980509812361 The evaluation of lignocellulosic wastes for bioenergy production demands consideration of several characteristics and properties that may be correlated. This fact demands the use of various multivariate analysis techniques that allow the evaluation of relevant energetic factors. This work aimed to apply cluster analysis and principal component analysis for the selection and evaluation of lignocellulosic wastes for bioenergy production. Eight types of residual biomass were used, for which the elemental component (C, H, O, N, S) contents, lignin, total extractives and ash contents, basic density, and higher and lower heating values were determined. Both multivariate techniques applied for the evaluation and selection of lignocellulosic wastes were efficient, and similarities were observed between the biomass groups formed by them. Through the interpretation of the first principal component obtained, it was possible to create a global development index for evaluating the viability of energetic uses of biomass. The interpretation of the second principal component allowed a contrast between nitrogen and sulfur contents and oxygen content.
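
    A principal component analysis of such a residue-by-property matrix can be sketched via the SVD; the biomass values below are randomly generated placeholders, not the paper's measurements:

```python
import numpy as np

def pca(X):
    """Principal components via SVD of the centered data matrix: returns
    sample scores on each PC and the fraction of variance each explains."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt.T                      # sample coordinates on the PCs
    explained = s ** 2 / np.sum(s ** 2)     # variance fraction per PC
    return scores, explained

# 8 hypothetical residues x 4 properties (C %, lignin %, ash %, HHV MJ/kg)
rng = np.random.default_rng(0)
X = rng.normal([48, 25, 3, 18], [3, 4, 1, 1.5], size=(8, 4))
scores, explained = pca(X)
index = scores[:, 0]   # a "global index" built from PC1, as in the paper's idea
```

    In practice the properties would be standardized first (they have very different units); PC1 then serves as the single ranking index and PC2 captures the nitrogen/sulfur-versus-oxygen contrast the authors describe.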

  10. RECENT ADAPTATIONS OF THE LEGAL STANDARDS APPLIED TO TAX LIABILITIES THROUGH GOVERNMENT ORDINANCES

    Ionel BOSTAN

    2015-01-01

    Our endeavour is directed at revealing certain difficulties identified during the actual process of levying the government revenue that have “lasted” in time, as well as the methods used in solving or merely alleviating such difficulties, as imposed and applied by the Executive authority. Among the above mentioned issues, we will specifically refer to the measures taken to discourage tax payers from using arrears as a source to fund their own activities, with important mentions on the specia...

  11. RECENT ADAPTATIONS OF THE LEGAL STANDARDS APPLIED TO TAX LIABILITIES THROUGH GOVERNMENT ORDINANCES

    Ionel BOSTAN

    2015-01-01

    Our endeavour is directed at revealing certain difficulties identified during the actual process of levying the government revenue that have “lasted” in time, as well as the methods used in solving or merely alleviating such difficulties, as imposed and applied by the Executive authority. Among the above mentioned issues, we will specifically refer to the measures taken to discourage tax payers from using arrears as a source to fund their own activities, with important mentions on the special...

  12. Bioclimatic and vegetation mapping of a topographically complex oceanic island applying different interpolation techniques

    Garzón-Machado, Víctor; Otto, Rüdiger; del Arco Aguilar, Marcelino José

    2014-07-01

    Different spatial interpolation techniques have been applied to construct objective bioclimatic maps of La Palma, Canary Islands. Interpolation of climatic data on this topographically complex island with strong elevation and climatic gradients represents a challenge. Furthermore, meteorological stations are not evenly distributed over the island, with few stations at high elevations. We carried out spatial interpolations of the compensated thermicity index (Itc) and the annual ombrothermic index (Io) in order to obtain appropriate bioclimatic maps by using automatic interpolation procedures, and to establish their relation to potential vegetation units for constructing a climatophilous potential natural vegetation map (CPNV). For this purpose, we used five interpolation techniques implemented in a GIS: inverse distance weighting (IDW), ordinary kriging (OK), ordinary cokriging (OCK), multiple linear regression (MLR) and MLR followed by ordinary kriging of the regression residuals. Two topographic variables (elevation and aspect), derived from a high-resolution digital elevation model (DEM), were included in OCK and MLR. The accuracy of the interpolation techniques was examined through error statistics on test data, derived from comparison of the predicted and measured values. Best results for both bioclimatic indices were obtained with the MLR method with interpolation of the residuals, showing the highest R^2 of the regression between observed and predicted values and the lowest root mean square errors. MLR with correction of interpolated residuals is an attractive interpolation method for bioclimatic mapping on this oceanic island, since it permits one not only to fully account for easily available geographic information but also to take into account local variation of climatic data.
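
    Of the five methods compared, inverse distance weighting is the simplest to sketch; the minimal version below ignores the elevation and aspect covariates and uses made-up station values:

```python
import math

def idw(stations, values, x, y, power=2.0):
    """Inverse distance weighting: a weighted mean of station values with
    weights 1/d^power, exact at the station locations themselves."""
    num = den = 0.0
    for (sx, sy), v in zip(stations, values):
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return v                    # exact interpolator at a station
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
itc = [100.0, 140.0, 180.0]             # hypothetical thermicity index values
```

    Because the result is a positively weighted mean, IDW predictions always stay within the range of the observed values, which is one reason it underperforms regression-based methods on islands with strong elevation gradients.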

  13. APPLICATION OF SUBBAND ADAPTIVE THRESHOLDING TECHNIQUE WITH NEIGHBOURHOOD PIXEL FILTERING FOR DENOISING MRI IMAGES

    S. KALAVATHY

    2012-02-01

    Full Text Available The de-noising of images naturally corrupted by noise is a classical problem in the field of signal and image processing. Image denoising has become an essential exercise in medical imaging, especially Magnetic Resonance Imaging (MRI). We propose a new method for MRI restoration, because MR magnitude images suffer from a contrast-reducing, signal-dependent bias. Also, the noise is often assumed to be white; however, a widely used acquisition technique to decrease the acquisition time gives rise to correlated noise. A subband adaptive thresholding technique based on wavelet coefficients, along with a Neighbourhood Pixel Filtering Algorithm (NPFA), for noise suppression in Magnetic Resonance Images (MRI) is presented in this paper. A statistical model is proposed to estimate the noise variance for each coefficient based on its subband, using a Maximum Likelihood (ML) estimator or a Maximum a Posteriori (MAP) estimator. This model also describes a new method for noise suppression that fuses the wavelet denoising technique with an optimized thresholding function. This is achieved by including a multiplying factor (α) to make the threshold value dependent on the decomposition level. By finding the Neighbourhood Pixel Difference (NPD) and adding NPFA along with subband thresholding, the clarity of the image is improved. The filtered value is generated by minimizing the NPD and the Weighted Mean Square Error (WMSE) using the method of least squares. A reduction in noise pixels is well observed on replacing the noisy value of the current pixel with the optimal weight, namely the NPFA filter solution. Due to this, the NPFA filter gains the effect of both a high-pass and a low-pass filter. Hence the proposed technique yields significantly superior image quality by preserving edges and producing a better PSNR value. To confirm its efficiency, it is further compared with a Median filter, a Wiener filter, and the subband thresholding technique along with the NPFA filter.
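
    The core subband-thresholding idea can be sketched with a one-level Haar transform and a soft threshold scaled by a level-dependent factor α; this stand-in omits the NPFA step and the ML/MAP variance estimation described above:

```python
import math, random

def haar_denoise(x, sigma, alpha=1.0):
    """One-level Haar transform, soft-threshold the detail subband with a
    level-dependent multiplier alpha on the universal threshold, and
    reconstruct. Expects an even-length signal."""
    s = math.sqrt(2.0)
    a = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]   # approximation band
    d = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]   # detail subband
    t = alpha * sigma * math.sqrt(2.0 * math.log(len(x)))      # universal threshold
    d = [math.copysign(max(abs(c) - t, 0.0), c) for c in d]    # soft thresholding
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / s, (ai - di) / s]                  # inverse transform
    return out

random.seed(1)
clean = [1.0] * 128 + [3.0] * 128                  # piecewise-constant signal
noisy = [c + random.gauss(0.0, 0.3) for c in clean]
den = haar_denoise(noisy, 0.3)
```

    Because the Haar transform is orthonormal, shrinking noise-only detail coefficients directly reduces the reconstruction error; the paper additionally adapts the threshold per subband and fuses in the NPFA neighbourhood filter.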

  14. Comparison of adaptive statistical iterative and filtered back projection reconstruction techniques in brain CT

    Purpose: To compare image quality and visualization of normal structures and lesions in brain computed tomography (CT) with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP) reconstruction techniques at different X-ray tube current–time products. Materials and methods: In this IRB-approved prospective study, forty patients (nineteen men, twenty-one women; mean age 69.5 ± 11.2 years) received brain scans at different tube current–time products (300 and 200 mAs) on a 64-section multi-detector CT (GE, Discovery CT750 HD). Images were reconstructed with FBP and four levels of ASIR-FBP blending. Two radiologists (please note that our hospital is renowned for its geriatric medicine department, and these two radiologists are more experienced in chronic cerebral vascular disease than in neoplastic disease, so this research did not contain cerebral tumors but as a discussion) assessed all the reconstructed images for visibility of normal structures, lesion conspicuity, image contrast and diagnostic confidence in a blinded and randomized manner. Volume CT dose index (CTDIvol) and dose-length product (DLP) were recorded. All the data were analyzed by using SPSS 13.0 statistical analysis software. Results: There was no statistically significant difference between the image qualities at 200 mAs with the 50% ASIR blending technique and 300 mAs with the FBP technique (p > .05), while a statistically significant difference (p < .05) was found between the image qualities at 200 mAs and 300 mAs with the FBP technique. Conclusion: ASIR provided the same image quality and diagnostic ability in brain imaging with a greater than 30% dose reduction compared with the FBP reconstruction technique

  15. Comparison of adaptive statistical iterative and filtered back projection reconstruction techniques in brain CT

    Ren, Qingguo, E-mail: renqg83@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Dewan, Sheilesh Kumar, E-mail: sheilesh_d1@hotmail.com [Department of Geriatrics, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Li, Ming, E-mail: minli77@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Li, Jianying, E-mail: Jianying.Li@med.ge.com [CT Imaging Research Center, GE Healthcare China, Beijing (China); Mao, Dingbiao, E-mail: maodingbiao74@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Wang, Zhenglei, E-mail: Williswang_doc@yahoo.com.cn [Department of Radiology, Shanghai Electricity Hospital, Shanghai 200050 (China); Hua, Yanqing, E-mail: cjr.huayanqing@vip.163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China)

    2012-10-15

    Purpose: To compare image quality and visualization of normal structures and lesions in brain computed tomography (CT) with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP) reconstruction techniques at different X-ray tube current–time products. Materials and methods: In this IRB-approved prospective study, forty patients (nineteen men, twenty-one women; mean age 69.5 ± 11.2 years) received brain scans at different tube current–time products (300 and 200 mAs) on a 64-section multi-detector CT (GE, Discovery CT750 HD). Images were reconstructed with FBP and four levels of ASIR-FBP blending. Two radiologists (please note that our hospital is renowned for its geriatric medicine department, and these two radiologists are more experienced in chronic cerebral vascular disease than in neoplastic disease, so this research did not contain cerebral tumors but as a discussion) assessed all the reconstructed images for visibility of normal structures, lesion conspicuity, image contrast and diagnostic confidence in a blinded and randomized manner. Volume CT dose index (CTDIvol) and dose-length product (DLP) were recorded. All the data were analyzed by using SPSS 13.0 statistical analysis software. Results: There was no statistically significant difference between the image qualities at 200 mAs with the 50% ASIR blending technique and 300 mAs with the FBP technique (p > .05), while a statistically significant difference (p < .05) was found between the image qualities at 200 mAs and 300 mAs with the FBP technique. Conclusion: ASIR provided the same image quality and diagnostic ability in brain imaging with a greater than 30% dose reduction compared with the FBP reconstruction technique.

  16. Adapt

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored using the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted files.
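The Granule-generation step described above can be sketched in a few lines: a minimal builder that associates one data file with a parent resource description, assigns it a resource identifier, and records its access URL. The element names follow the general SPASE pattern (Granule, ResourceID, ParentID, Source/URL), but the exact structure and all identifiers below are illustrative assumptions, not output of the actual ADAPT routines.

```python
import xml.etree.ElementTree as ET

def make_granule(resource_id, parent_id, url):
    """Build a minimal SPASE-style Granule description for one data file.

    Element names mimic the SPASE pattern but are illustrative only,
    not a validated instance of the SPASE schema.
    """
    spase = ET.Element("Spase")
    granule = ET.SubElement(spase, "Granule")
    ET.SubElement(granule, "ResourceID").text = resource_id   # identifier assigned to the file
    ET.SubElement(granule, "ParentID").text = parent_id       # "parent" high-level resource
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "URL").text = url                   # access URL for the file
    return ET.tostring(spase, encoding="unicode")

# Hypothetical identifiers and URL, for illustration only.
xml_text = make_granule(
    "spase://Example/Granule/ExampleDataset/20150101",
    "spase://Example/NumericalData/ExampleDataset",
    "https://example.gsfc.nasa.gov/data/example_20150101.cdf",
)
```

In the pipeline described above, one such Granule would be emitted per CDF file found in the nightly query of the server's file lists.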

  17. Proceedings of PEIA Forum 2007 : adapting and applying California's greenhouse gas strategies in Canada

    The key challenge in addressing climate change lies in identifying and implementing cost-effective measures to reduce greenhouse gas (GHG) emissions. The purpose of this forum was to stimulate action for reducing GHGs in British Columbia, the western provinces and Canada. The successes realized in California that are adaptable to BC and Canada were highlighted. In September 2006, California demonstrated leadership in taking determined action on climate change with its signing of the California Global Warming Solutions Act. This landmark legislation calls for GHG reductions to 1990 levels by 2020, and to 80 per cent below 1990 levels by 2050. The BC Energy Plan sets an aggressive target to reduce GHG emissions to 33 per cent below current levels by 2020, which will place emissions 10 per cent below 1990 levels; it also calls for net zero GHG emissions from all electric power plants by 2016; acquiring 50 per cent of BC Hydro's new resource needs through conservation by 2020; ensuring electricity self-sufficiency by 2016; and establishing a standing offer for clean electricity projects up to 10 megawatts. In May 2007, the province of British Columbia demonstrated a commitment to follow California's lead in GHG control, and to collaborate on projects such as the Hydrogen Highway. These actions are intended to make a significant contribution to the control of energy use and greenhouse gas emissions in British Columbia and Canada. The conference featured 6 presentations, of which 2 have been catalogued separately for inclusion in this database. tabs., figs

  18. Optical Cluster-Finding with an Adaptive Matched-Filter Technique: Algorithm and Comparison with Simulations

    Dong, Feng; Pierpaoli, Elena; Gunn, James E.; Wechsler, Risa H.

    2007-10-29

    We present a modified adaptive matched filter algorithm designed to identify clusters of galaxies in wide-field imaging surveys such as the Sloan Digital Sky Survey. The cluster-finding technique is fully adaptive to imaging surveys with spectroscopic coverage, multicolor photometric redshifts, no redshift information at all, and any combination of these within one survey. It works with high efficiency in multi-band imaging surveys where photometric redshifts can be estimated with well-understood error distributions. Tests of the algorithm on realistic mock SDSS catalogs suggest that the detected sample is {approx} 85% complete and over 90% pure for clusters with masses above 1.0 x 10{sup 14}h{sup -1} M and redshifts up to z = 0.45. The errors of the cluster redshifts estimated from the maximum-likelihood method are shown to be small (typically less than 0.01) over the whole redshift range with photometric redshift errors typical of those found in the Sloan survey. Inside the spherical radius corresponding to a galaxy overdensity of {Delta} = 200, we find the derived cluster richness {Lambda}{sub 200} to be a roughly linear indicator of the virial mass M{sub 200}, which recovers well the relation between total luminosity and cluster mass in the input simulation.

  19. Digital lock-in techniques for adaptive power-line interference extraction.

    Dobrev, Dobromir; Neycheva, Tatyana; Mudrov, Nikolay

    2008-07-01

    This paper presents a simple digital approach for adaptive power-line (PL) or other periodic interference extraction. By means of two digital square (or sine) wave mixers, the real and imaginary parts of the interference are found, and the interference waveform is synthesized and finally subtracted. The described technique can be implemented in an open-loop architecture where the interference is synthesized as a complex sinusoid or in a closed-loop architecture for automatic phase and gain control. The same approach can be used for removal of the fundamental frequency of the PL interference as well as its higher harmonics. It is suitable for real-time operation with popular low-cost microcontrollers. PMID:18560061
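A minimal numerical sketch of the two-mixer idea follows; the sampling rate, interference amplitude and phase are assumed for illustration, and averaging over whole power-line cycles stands in for the low-pass stage of a real implementation.

```python
import numpy as np

fs, f_pl = 1000.0, 50.0                      # assumed sampling and power-line frequencies
t = np.arange(0, 2.0, 1 / fs)
interference = 0.5 * np.sin(2 * np.pi * f_pl * t + 0.7)   # PL interference, unknown phase
signal = np.sin(2 * np.pi * 3.0 * t)         # stand-in for the wanted signal
x = signal + interference                    # contaminated input

# Two quadrature (sine-wave) mixers: averaging the products over whole
# PL cycles yields the interference's in-phase and quadrature amplitudes.
ref_c = np.cos(2 * np.pi * f_pl * t)
ref_s = np.sin(2 * np.pi * f_pl * t)
a = 2 * np.mean(x * ref_c)                   # quadrature ("imaginary") part
b = 2 * np.mean(x * ref_s)                   # in-phase ("real") part

# Synthesize the interference waveform and finally subtract it.
synthesized = a * ref_c + b * ref_s
cleaned = x - synthesized
```

Repeating the same mixing at 2·f_pl, 3·f_pl, … would remove the higher harmonics mentioned in the abstract; the closed-loop variant replaces the fixed averages with automatic phase and gain control.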

  20. Digraph description of k-interchange technique for optimization over permutations and adaptive algorithm system

    Levin, Mark Sh

    2011-01-01

    The paper gives a general overview of the use of element-exchange techniques for optimization over permutations. A multi-level description of problems is proposed, which is fundamental to understanding the nature and complexity of optimization problems over permutations (e.g., ordering, scheduling, the traveling salesman problem). The description is based on permutation neighborhoods of several kinds (e.g., by improvement of an objective function). The proposed operational digraph and its kinds can be considered as a way to understand convexity and polynomial solvability for combinatorial optimization problems over permutations. Issues in the analysis of problems and the design of hierarchical heuristics are discussed. The discussion leads to a multi-level adaptive algorithm system which analyzes an individual problem and selects/designs a solving strategy (trajectory).
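As an illustration of the element-exchange neighborhood the paper builds on, here is a small sketch of a 2-interchange descent for a symmetric traveling-salesman instance; the instance, the starting permutation, and the first-improvement strategy are arbitrary choices, not the paper's algorithm system.

```python
import itertools
import math

def tour_length(perm, dist):
    """Cyclic tour length of a permutation under a distance matrix."""
    return sum(dist[perm[i]][perm[(i + 1) % len(perm)]] for i in range(len(perm)))

def two_interchange(perm, dist):
    """Element-exchange descent: swap two positions whenever the swap
    improves the objective; stop when no single swap helps (local optimum)."""
    perm = list(perm)
    best = tour_length(perm, dist)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(perm)), 2):
            perm[i], perm[j] = perm[j], perm[i]
            cand = tour_length(perm, dist)
            if cand < best - 1e-12:
                best, improved = cand, True
            else:
                perm[i], perm[j] = perm[j], perm[i]  # undo non-improving swap
    return perm, best

# Four cities on a unit square; the optimal cyclic tour is the perimeter (length 4).
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
perm, best = two_interchange([0, 2, 1, 3], dist)
```

The neighborhoods discussed in the paper generalize this single swap move to k-interchange moves and to neighborhoods defined by other improvement criteria.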

  1. Utilizing a Magnetic Abrasive Finishing Technique (MAF) via Adaptive Neuro-Fuzzy (ANFIS)

    Amer A. Moosa

    2015-07-01

    Full Text Available An experimental study was conducted to measure the surface roughness produced by the magnetic abrasive finishing (MAF) technique on a brass plate, a material that is very difficult to polish by conventional machining processes, where the cost is high and the surface is much more susceptible to damage than other materials. Four operating parameters were studied: the gap between the workpiece and the electromagnetic inductor, the current that generates the flux, the rotational spindle speed, and the abrasive powder size, with a constant linear feed movement between the machine head and the workpiece. An adaptive neuro-fuzzy inference system (ANFIS) was implemented to evaluate a series of experiments, and verification with respect to the change in specimen roughness was achieved: the average error between the surface roughness predicted by the model and the directly measured value was 2.0222%.

  2. A Novel Implementation of RISI Controller Employing Adaptive Clock Gating Technique

    M.Kamaraju

    2011-11-01

    Full Text Available With the scaling of technology and the need for higher performance and more functionality, power dissipation is becoming a major issue for controller design. Interrupt-based programming is widely used for interfacing a processor with peripherals. The proposed architecture implements a mechanism which combines an interrupt controller and a RIS (Reduced Instruction Set) CPU (Central Processing Unit) on a single die. The RISI controller takes only one cycle for both interrupt request generation and acknowledgement. The architecture has a dynamic control unit which consists of a program flow controller, an interrupt controller and an I/O controller. An adaptive clock gating technique is used to reduce power consumption in the dynamic control unit. The controller consumes 174 µW at 1 MHz and is implemented in Verilog HDL using the Xilinx platform.

  3. Modeling gravitational instabilities in self-gravitating protoplanetary disks with adaptive mesh refinement techniques

    Lichtenberg, Tim

    2015-01-01

    The astonishing diversity in the observed planetary population requires theoretical efforts and advances in planet formation theories. Numerical approaches provide a method to tackle the weaknesses of current planet formation models and are an important tool to close gaps in poorly constrained areas. We present a global disk setup to model the first stages of giant planet formation via gravitational instabilities (GI) in 3D with the block-structured adaptive mesh refinement (AMR) hydrodynamics code ENZO. With this setup, we explore the impact of AMR techniques on the fragmentation and clumping due to large-scale instabilities using different AMR configurations. Additionally, we seek to derive general resolution criteria for global simulations of self-gravitating disks of variable extent. We run a grid of simulations with varying AMR settings, including runs with a static grid for comparison, and study the effects of varying the disk radius. Adopting a marginally stable disk profile (Q_init=1), we validate the...

  4. The application and evaluation of adaptive hypermedia techniques in Web-based medical education

    Muan Hong Ng

    2002-12-01

    Full Text Available This article discusses the design issues involved in delivering Web-based learning materials. An existing application in the medical domain - JointZone - is used to illustrate how personalization and an interactive environment can be incorporated into Web-based learning. This work applies a combination of adaptive hypermedia, a situated-learning approach and hypermedia linking concepts to facilitate online learning. A usability study was carried out on the work described, and an evaluation was undertaken to measure the effect of personalization on various learning factors. The evaluation outcome was analysed subjectively and objectively. The results proved to be contradictory but, nevertheless, the work gives new insights into the use of technology to support learning.

  5. Online Fault Identification Based on an Adaptive Observer for Modular Multilevel Converters Applied to Wind Power Generation Systems

    Liu, Hui; Ma, Ke; Loh, Poh Chiang;

    2015-01-01

    ... and post-fault maintenance. Therefore, in this paper, an effective fault diagnosis technique for real-time diagnosis of switching device faults, covering both open-circuit faults and short-circuit faults in MMC sub-modules, is proposed. The faulty phase and the fault type are detected by analyzing the difference among the three output load currents, while the localization of the faulty switches is achieved by comparing the estimation results of the adaptive observer. In contrast to other methods that use additional sensors or devices, the presented technique uses only the measured phase currents, which are already available for MMC control. In addition, its operation, effectiveness and robustness are confirmed by simulation results under different operating and load conditions.

  6. Applying adaptive management in resource use in South African National Parks: A case study approach

    Kelly Scheepers

    2011-05-01

    Full Text Available South African National Parks (SANParks has a history of formal and informal natural resource use that is characterised by polarised views on national conservation interests and benefits to communities. Current efforts aim to determine the sustainability of existing resource use in parks and to formalise these activities through the development of resource use protocols. The resource use policy of SANParks outlines principles for sustainable resource use, including greater involvement of local communities in management of protected areas and an adaptive management approach to determining sustainable use levels. This paper examines three case studies on plant use in national parks with regard to the development of criteria and indicators for monitoring resource use, and the role of thresholds of potential concern in measuring effectiveness of managing for sustainable use levels. Opportunities and challenges for resource use management are identified. Findings show that platforms for discussion and knowledge sharing, including research committees and community associations, are critical to building relationships, trust and a shared vision of sustainable resource use between stakeholders. However, additional capacity building is needed to enable local community structures to manage internal social conflicts and jealousy, and to participate fully in monitoring efforts. Long-term monitoring is essential for developing flexible harvest prescriptions for plant use, but this is a time-consuming and resource-intensive exercise. Flexible management strategies are difficult to implement and sometimes command-and-control measures are necessary to protect rare or endangered species. A holistic approach that considers resource use in national parks as a complement to broader community development initiatives offers a way forward. Conservation implications: There is no blueprint for the development of sustainable resource use systems, and resource use is often...

  7. Evaluation of internal adaptation of Class V resin composite restorations using three techniques of polymerization

    José Carlos Pereira

    2007-02-01

    Full Text Available OBJECTIVE: The purpose of this in vitro study was to evaluate the internal adaptation of Class V composite restorations to the cavity walls using three different polymerization techniques. METHODS: Standard cavities were prepared on the buccal and lingual surfaces of 24 extracted human third molars, with margins located above and below the cementoenamel junction. Restorations were placed in one increment using two restorative systems, 3M Filtek A110/Single Bond (M) and 3M Filtek Z250/Single Bond (H), in the same tooth, assigned randomly to the buccal and lingual surfaces. Resin composites were polymerized using three techniques: Group 1 - Conventional (60 s at 600 mW/cm²); Group 2 - Soft-start (20 s at 200 mW/cm², then 40 s at 600 mW/cm²); Group 3 - Pulse Activation (3 s at 200 mW/cm², a 3-min hiatus, then 57 s at 600 mW/cm²). Buccolingual sections were polished, and impressions were taken and replicated. Specimens were assessed under scanning electron microscopy at up to X1000 magnification. Scores were given for the presence or absence of gaps (0 - no gap; 1 - gap in one wall; 2 - gap in two walls; 3 - gap in three walls). RESULTS: The mean scores (±SD) of the groups were: G1M 3.0 (± 0.0); G2M 2.43 (± 0.8); G3M 1.71 (± 0.9); G1H 2.14 (± 1.2); G2H 2.00 (± 0.8); G3H 1.67 (± 1.1). Data were analyzed using Kruskal-Wallis and Dunnett's tests. No statistically significant difference (p>0.05) was found among groups. Gaps were observed in all groups. CONCLUSIONS: The photocuring technique and the type of resin composite had no significant influence on the internal adaptation of the material to the cavity walls, although a positive trend was observed when the slower polymerization techniques were used.

  8. Social Science at the Center for Adaptive Optics: Synergistic Systems of Program Evaluation, Applied Research, Educational Assessment, and Pedagogy

    Goza, B. K.; Hunter, L.; Shaw, J. M.; Metevier, A. J.; Raschke, L.; Espinoza, E.; Geaney, E. R.; Reyes, G.; Rothman, D. L.

    2010-12-01

    This paper describes the interaction of four elements of social science as they have evolved in concert with the Center for Adaptive Optics Professional Development Program (CfAO PDP). We hope these examples persuade early-career scientists and engineers to include social science activities as they develop grant proposals and carry out their research. To frame our discussion we use a metaphor from astronomy. At the University of California Santa Cruz (UCSC), the CfAO PDP and the Educational Partnership Center (EPC) are two young stars in the process of forming a solar system. Together, they are surrounded by a disk of gas and dust made up of program evaluation, applied research, educational assessment, and pedagogy. An idea from the 2001 PDP intensive workshops program evaluation developed into the Assessing Scientific Inquiry and Leadership Skills (AScILS) applied research project. In iterative cycles, AScILS researchers participated in subsequent PDP intensive workshops, teaching social science while piloting AScILS measurement strategies. Subsequent "orbits" of the PDP program evaluation gathered ideas from the applied research and pedagogy. The denser regions of this disk of social science are in the process of forming new protoplanets as tools for research and teaching are developed. These tools include problem-solving exercises or simulations of adaptive optics explanations and scientific reasoning; rubrics to evaluate the scientific reasoning simulation responses, knowledge regarding inclusive science education, and student explanations of science/engineering inquiry investigations; and a scientific reasoning curriculum. Another applied research project is forming with the design of a study regarding how to assess engineering explanations. To illustrate the mutual shaping of the cross-disciplinary, intergenerational group of educational researchers and their projects, the paper ends with a description of the professional trajectories of some of the

  9. Adaptive one-dimensional dimming technique for liquid crystal displays with low power consumption and high image quality

    Kim, Seung-Ryeol; Lee, Seung-Woo

    2015-07-01

    An adaptive one-dimensional (1-D) dimming technique for liquid crystal displays that compensates for nonuniform backlight distribution is proposed. Dimming techniques that do not consider luminance distribution may cause severe visual artifacts, such as a block artifact. However, an adaptive 1-D dimming technique that considers luminance distribution can reduce power consumption without causing any visual artifacts. Hardware implementation results verified that our method achieved lower power consumption compared to nondimming techniques and removed block artifacts from International Electrotechnical Commission 62087 standard images. The power consumption using the proposed method ranged from 85.5% to 94.7% compared to nondimming techniques. Furthermore, the contrast ratio increased by up to 231% and 165% on average compared to nondimming techniques.
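The block-wise dimming-plus-compensation idea can be sketched as follows. The linear-luminance model, the block count, and the peak-based backlight rule are simplifying assumptions for illustration; the adaptive method in the abstract additionally accounts for the backlight's nonuniform spatial distribution, which this sketch ignores.

```python
import numpy as np

def one_d_dimming(img, n_blocks):
    """1-D (column-block) local dimming sketch: each vertical block's
    backlight duty ratio is set by the block's peak gray level, and the
    pixel data are compensated so the displayed luminance
    (pixel value x backlight) is preserved.  Simplified model: linear
    luminance, no light diffusion between blocks."""
    h, w = img.shape
    cols = np.array_split(np.arange(w), n_blocks)
    backlight = np.empty(n_blocks)
    out = np.empty_like(img, dtype=float)
    for k, c in enumerate(cols):
        bl = max(img[:, c].max(), 1e-6) / 255.0          # dimmed duty ratio in (0, 1]
        backlight[k] = bl
        out[:, c] = np.clip(img[:, c] / bl, 0, 255)      # compensate pixel transmittance
    return out, backlight

# Dark left half, bright right half: the left block's backlight can be dimmed.
img = np.zeros((4, 8))
img[:, :4] = 64
img[:, 4:] = 255
out, bl = one_d_dimming(img, 2)
```

Backlight power in this model is proportional to the mean duty ratio, so the dark block's dimming directly translates into the power savings the abstract reports, while the compensation keeps the displayed image unchanged.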

  10. Sequential Adaptive RBF-Fuzzy Variable Structure Control Applied to Robotics Systems

    Mohammed Salem

    2014-08-01

    Full Text Available In this paper, we present a combination of sequentially trained radial basis function (RBF) networks and fuzzy techniques to enhance variable structure controllers dedicated to robotic systems. To this end, four RBF networks were used to estimate the model-based parameters (the inertia, centrifugal-and-Coriolis, gravity and friction matrices) of a variable structure controller so as to respond to model variations and disturbances; a sequential online training algorithm based on the Growing-and-Pruning (GAP) strategy and a Kalman filter was implemented. To eliminate the chattering effect, the corrective control of the VS controller was computed by a fuzzy controller. Simulations were carried out to control a three-degrees-of-freedom SCARA robot manipulator, and the obtained results show good disturbance rejection and chattering elimination.

  11. RECENT ADAPTATIONS OF THE LEGAL STANDARDS APPLIED TO TAX LIABILITIES THROUGH GOVERNMENT ORDINANCES

    Ionel BOSTAN

    2015-06-01

    Full Text Available Our endeavour is directed at revealing certain difficulties identified during the actual process of levying government revenue that have "lasted" in time, as well as the methods used in solving or merely alleviating such difficulties, as imposed and applied by the Executive authority. Among the above-mentioned issues, we will specifically refer to the measures taken to discourage taxpayers from using arrears as a source to fund their own activities, with important mentions of the special correction (the "undeclared tax penalty") for cases when certain sums payable to the public budget are not declared (either totally or partially). The second part approaches the setting up of the ancillary obligations system, which is specifically directed at protecting the real value of fiscal claims and at sanctioning defaults of payment upon the due date.

  12. Modern structure of methods and techniques of marketing research, applied by the world and Ukrainian research companies

    Bezkrovnaya Yulia

    2015-08-01

    Full Text Available The article presents the results of an empirical justification of the structure of methods and techniques of marketing research into consumer decisions, as applied by the world and Ukrainian research companies.

  13. Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.

    Vikhe, P S; Thool, V R

    2016-04-01

    Detection of masses in mammograms for early diagnosis of breast cancer is a significant assignment in the reduction of the mortality rate. However, in some cases, screening for masses is a difficult task for the radiologist, due to variations in contrast, fuzzy edges and noisy mammograms. Masses and micro-calcifications are the distinctive signs for the diagnosis of breast cancer. This paper presents a method for mass enhancement using a piecewise linear operator in combination with wavelet processing of mammographic images. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images with 90.9 and 91% True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives Per Image (FP/I) from two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique gives improved diagnosis in early breast cancer detection. PMID:26811073
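A toy sketch of the enhancement-then-threshold stages on a synthetic image follows. The piecewise-linear breakpoints, the mean-plus-k·std threshold rule, and the synthetic "mass" are illustrative assumptions, not the parameters of the paper (which also uses wavelet processing and morphological preprocessing omitted here).

```python
import numpy as np

def piecewise_linear_enhance(img, low=0.4, high=0.7):
    """Piecewise-linear gray-level operator (illustrative breakpoints):
    suppress values below `low`, stretch the [low, high] band where
    mass-like intensities are assumed to lie, saturate above `high`."""
    img = img.astype(float) / img.max()
    out = np.zeros_like(img)
    band = (img >= low) & (img <= high)
    out[band] = (img[band] - low) / (high - low)
    out[img > high] = 1.0
    return out

def adaptive_threshold(img, k=1.0):
    """Data-driven threshold: mean + k * std of the enhanced image,
    so the cut-off adapts to each image's own statistics."""
    t = img.mean() + k * img.std()
    return img >= t

# Synthetic 64x64 "mammogram": dim background with a 16x16 brighter "mass".
img = np.full((64, 64), 0.35)
img[24:40, 24:40] = 0.65
mask = adaptive_threshold(piecewise_linear_enhance(img))
```

On real mammograms the threshold statistics would be computed after artifact and pectoral-muscle removal, so that background structures do not bias the mean and standard deviation.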

  14. Adaptive noise cancelling and time-frequency techniques for rail surface defect detection

    Liang, B.; Iwnicki, S.; Ball, A.; Young, A. E.

    2015-03-01

    Adaptive noise cancelling (ANC) is a technique which is very effective at removing additive noise from contaminated signals. It has been widely used in the fields of telecommunication, radar and sonar signal processing. However, it was seldom used for the surveillance and diagnosis of mechanical systems before the late 1990s. As a promising technique, it has gradually been exploited for the purpose of condition monitoring and fault diagnosis. Time-frequency analysis is another useful tool for condition monitoring and fault diagnosis, as it keeps both time and frequency information simultaneously. This paper presents an ANC and time-frequency application for railway wheel-flat and rail surface defect detection. The experimental results from a scaled roller test rig show that this approach can significantly reduce unwanted interference and extract weak signals from strong background noise. The combination of ANC and time-frequency analysis may thus provide a useful tool for condition monitoring and fault diagnosis of railway vehicles.
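The core of an ANC scheme is an adaptive filter fed by a reference sensor that observes the interference but not the wanted signal; the filter learns the path from the reference to the primary sensor, and the residual error converges to the wanted signal. A least-mean-squares (LMS) sketch, with the filter order, step size, and synthetic signals all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, taps, mu = 5000, 8, 0.005                           # assumed filter order and step size
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))       # wanted low-frequency component
noise_src = rng.standard_normal(n)                     # interference at its source
ref = noise_src                                        # reference sensor: noise only
# Primary sensor: wanted signal plus the interference after an unknown FIR path.
primary = signal + np.convolve(noise_src, [0.6, -0.3, 0.1])[:n]

w = np.zeros(taps)                                     # adaptive FIR weights
cleaned = np.zeros(n)
for i in range(taps - 1, n):
    x = ref[i - taps + 1:i + 1][::-1]                  # latest reference samples, newest first
    e = primary[i] - w @ x                             # error = primary minus noise estimate
    w += 2 * mu * e * x                                # LMS weight update
    cleaned[i] = e                                     # error converges to the wanted signal
```

For the rail application in the abstract, `cleaned` would then be passed to a time-frequency transform so that the transient signatures of wheel flats and surface defects can be localized in both time and frequency.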

  15. The Study of Mining Activities and their Influences in the Almaden Region Applying Remote Sensing Techniques

    This scientific-technical report is part of ongoing research work carried out by Celia Rico Fraile in order to obtain the Diploma of Advanced Studies as part of her PhD studies. This work has been developed in collaboration with the Faculty of Science at the Universidad Autonoma de Madrid and the Department of Environment at CIEMAT. The main objective of this work was the characterization and classification of land use in Almaden (Ciudad Real) during cinnabar mineral exploitation and after mining activities ceased in 2002, developing a methodology focused on the integration of remote sensing techniques applied to multispectral and hyperspectral satellite data. By means of preprocessing and processing of the satellite images, as well as data obtained from field campaigns, a spectral library was compiled in order to characterize representative land surfaces within the study area. Monitoring results show that the distribution of areas affected by mining activities has diminished rapidly in recent years. (Author) 130 refs

  16. Unsteady vortex lattice techniques applied to wake formation and performance of the statically thrusting propeller

    Hall, G. F.

    1975-01-01

    The application of vortex lattice techniques to the problem of describing the aerodynamics and performance of statically thrusting propellers is considered. A numerical lifting-surface theory is used to predict the aerodynamic forces and power. The chordwise and spanwise loading is modelled by bound vortices fixed to a twisted flat-plate surface. In order to eliminate any a priori assumptions regarding the wake shape, it is assumed that the propeller starts from rest. The wake is generated in time and allowed to deform under its own self-induced velocity field as the motion of the propeller progresses. The bound circulation distribution is then determined in time by applying the flow-tangency boundary condition at selected control points on the blades. The aerodynamics of the infinite wing and the finite wing are also considered. The details of wake formation and roll-up are investigated, particularly the localized induction effect. It is concluded that proper wake roll-up and roll-up rates can be established by considering the details of the motion at the instant of start.

  17. Advanced examination techniques applied to the qualification of critical welds for the ITER correction coils

    Sgobba, Stefano; Libeyre, Paul; Marcinek, Dawid Jaroslaw; Piguiet, Aline; Cécillon, Alexandre

    2015-01-01

    The ITER correction coils (CCs) consist of three sets of six coils located in between the toroidal (TF) and poloidal field (PF) magnets. The CCs rely on a Cable-in-Conduit Conductor (CICC), whose supercritical cooling at 4.5 K is provided by helium inlets and outlets. The assembly of the nozzles to the stainless steel conductor conduit includes fillet welds requiring full penetration through the thickness of the nozzle. Static and cyclic stresses have to be sustained by the inlet welds during operation. The entire volume of the helium inlet and outlet welds, which are subject to the most stringent quality levels for imperfections according to the standards in force, is virtually uninspectable with sufficient resolution by conventional or computed radiography or by ultrasonic testing. On the other hand, X-ray computed tomography (CT) was successfully applied to inspect the full weld volume of several dozen helium inlet qualification samples. The extensive use of CT techniques allowed significant progress in the ...

  18. Applying Adaptive Agricultural Management & Industrial Ecology Principles to Produce Lower- Carbon Ethanol from California Energy Beets

    Alexiades, Anthy Maria

    The life cycle assessment of a proposed beet-to-ethanol pathway demonstrates how agricultural management and industrial ecology principles can be applied to reduce greenhouse gas emissions, minimize agrochemical inputs and waste, provide ecosystem services and yield a lower-carbon fuel from a highly land-use efficient, first-generation feedstock cultivated in California. Beets grown in California have unique potential as a biofuel feedstock. A mature agricultural product with well-developed supply chains, beet-sugar production in California has contracted over recent decades, leaving idle production capacity and forcing growers to seek other crops for use in rotation or find a new market for beets. California's Low Carbon Fuel Standard (LCFS) faces risk of steeply-rising compliance costs, as greenhouse gas reduction targets in the transportation sector were established assuming commercial volumes of lower-carbon fuels from second-generation feedstocks -- such as residues, waste, algae and cellulosic crops -- would be available by 2020. The expected shortfall of cellulosic ethanol has created an immediate need to develop lower-carbon fuels from readily available feedstocks using conventional conversion technologies. The life cycle carbon intensity of this ethanol pathway is less than 28 gCO2e/MJ of ethanol: a 72% reduction compared to gasoline and 19% lower than the most efficient corn ethanol pathway (34 gCO2e/MJ, not including indirect land use change) approved under the LCFS. The system relies primarily on waste-to-energy resources; nearly 18 gCO2e/MJ are avoided by using renewable heat and power generated from anaerobic digestion of fermentation stillage and gasification of orchard residues to meet 88% of the facility's steam demand. Co-products displace 2 gCO2e/MJ. Beet cultivation is the largest source of emissions, contributing 15 gCO2e/MJ. The goal of the study is to explore opportunities to minimize the carbon intensity of beet-ethanol and investigate the potential

  19. Appraisal of adaptive neuro-fuzzy computing technique for estimating anti-obesity properties of a medicinal plant.

    Kazemipoor, Mahnaz; Hajifaraji, Majid; Radzi, Che Wan Jasimah Bt Wan Mohamed; Shamshirband, Shahaboddin; Petković, Dalibor; Mat Kiah, Miss Laiha

    2015-01-01

    This research examines the precision of an adaptive neuro-fuzzy computing technique in estimating the anti-obesity property of a potent medicinal plant in a clinical dietary intervention. Even though a number of statistical approaches, such as SPSS analysis, have been proposed for modeling the estimation of anti-obesity properties in terms of reduction in body mass index (BMI), body fat percentage, and body weight loss, these models still have disadvantages, such as being very demanding in terms of calculation time. Since this is a crucial problem, in this paper a process was constructed which simulates the anti-obesity activities of caraway (Carum carvi), a traditional medicine, on obese women using the adaptive neuro-fuzzy inference system (ANFIS) method. The ANFIS results are compared with support vector regression (SVR) results using the root-mean-square error (RMSE) and the coefficient of determination (R(2)). The experimental results show that an improvement in predictive accuracy and capability of generalization can be achieved by the ANFIS approach. The following statistical characteristics are obtained for BMI loss estimation: RMSE=0.032118 and R(2)=0.9964 in ANFIS testing, and RMSE=0.47287 and R(2)=0.361 in SVR testing. For fat loss estimation: RMSE=0.23787 and R(2)=0.8599 in ANFIS testing, and RMSE=0.32822 and R(2)=0.7814 in SVR testing. For weight loss estimation: RMSE=0.00000035601 and R(2)=1 in ANFIS testing, and RMSE=0.17192 and R(2)=0.6607 in SVR testing. It can therefore be applied for practical purposes. PMID:25453384
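The two figures of merit used in the comparison are straightforward to compute; a sketch with purely hypothetical measured-versus-predicted values, not the study's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between observations and predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical measured vs. predicted BMI-loss values, for illustration only.
measured  = [1.2, 0.8, 1.5, 0.9, 1.1]
predicted = [1.1, 0.9, 1.4, 1.0, 1.2]
```

Lower RMSE and R(2) closer to 1 indicate the better model, which is the basis on which the abstract ranks ANFIS above SVR.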

  20. Complementary testing techniques applied to obtain the freeze-thaw resistance of concrete

    Romero, H. L.

    2015-03-01

    Full Text Available Most of the standards that evaluate the resistance of concrete against freeze-thaw cycles (FTC) are based on the loss of weight due to scaling. Such procedures are useful but do not provide information about the microstructural deterioration of the concrete, and the test has to be stopped after several FTCs to weigh the material lost by scaling. This paper proposes the use of mercury intrusion porosimetry and thermogravimetric analysis for assessing the microstructural damage of concrete during FTCs; continuous strain measurement can be performed without stopping the FTCs. The combination of these techniques with the freeze-thaw resistance standards provides better and more precise information about concrete damage. The proposed procedure is applied to an ordinary concrete, a concrete with a silica fume addition and one with an air-entraining agent. The test results showed that the three techniques used are suitable and useful as complements to the standards.

  1. Adaptive Neuro-Fuzzy Inference System Applied QSAR with Quantum Chemical Descriptors for Predicting Radical Scavenging Activities of Carotenoids.

    Jhin, Changho; Hwang, Keum Taek

    2015-01-01

    One of the physiological characteristics of carotenoids is their radical scavenging activity. In this study, the relationship between radical scavenging activities and quantum chemical descriptors of carotenoids was determined. Adaptive neuro-fuzzy inference system (ANFIS) applied quantitative structure-activity relationship models (QSAR) were also developed for predicting and comparing radical scavenging activities of carotenoids. Semi-empirical PM6 and PM7 quantum chemical calculations were done by MOPAC. Ionisation energies of neutral and monovalent cationic carotenoids and the product of chemical potentials of neutral and monovalent cationic carotenoids were significantly correlated with the radical scavenging activities, and consequently these descriptors were used as independent variables for the QSAR study. The ANFIS applied QSAR models were developed with two triangular-shaped input membership functions made for each of the independent variables and optimised by a backpropagation method. High prediction efficiencies were achieved by the ANFIS applied QSAR. The R-square values of the developed QSAR models with the variables calculated by PM6 and PM7 methods were 0.921 and 0.902, respectively. The results of this study demonstrated reliabilities of the selected quantum chemical descriptors and the significance of QSAR models. PMID:26474167
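
The two triangular input membership functions mentioned above are simple to illustrate. A sketch of triangular fuzzification over a normalised descriptor range; the placement of the triangles is an illustrative assumption, not taken from the paper:

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Two overlapping triangular MFs covering a descriptor normalised to [0, 1],
# mirroring the two-membership-function inputs the ANFIS models use.
low  = lambda x: tri_mf(x, -1.0, 0.0, 1.0)  # peaks at 0
high = lambda x: tri_mf(x,  0.0, 1.0, 2.0)  # peaks at 1

x = 0.25
print(low(x), high(x))  # 0.75 0.25  (memberships sum to 1 on the overlap)
```

In ANFIS training, the parameters a, b, c of each membership function are the quantities tuned by backpropagation.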

  3. Adaption of the temporal correlation coefficient calculation for temporal networks (applied to a real-world pig trade network).

    Büttner, Kathrin; Salau, Jennifer; Krieter, Joachim

    2016-01-01

    The average topological overlap of two graphs of two consecutive time steps measures the amount of changes in the edge configuration between the two snapshots. This value has to be zero if the edge configuration changes completely and one if the two consecutive graphs are identical. Current methods depend on the number of nodes in the network or on the maximal number of connected nodes in the consecutive time steps. In the first case, this methodology breaks down if there are nodes with no edges. In the second case, it fails if the maximal number of active nodes is larger than the maximal number of connected nodes. In the following, an adaption of the calculation of the temporal correlation coefficient and of the topological overlap of the graph between two consecutive time steps is presented, which shows the expected behaviour mentioned above. The newly proposed adaption uses the maximal number of active nodes, i.e. the number of nodes with at least one edge, for the calculation of the topological overlap. The three methods were compared with the help of vivid example networks to reveal the differences between the proposed notations. Furthermore, these three calculation methods were applied to a real-world network of animal movements in order to detect influences of the network structure on the outcome of the different methods. PMID:27026862
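
The adapted coefficient described above can be sketched directly from adjacency matrices. A minimal NumPy version that normalises by the maximal number of active nodes (nodes with at least one edge) in the two snapshots, as the abstract proposes; the exact notation of the paper is an assumption here:

```python
import numpy as np

def topological_overlap(A1, A2):
    """Per-node topological overlap between two consecutive snapshots."""
    num = (A1 * A2).sum(axis=1).astype(float)
    den = np.sqrt(A1.sum(axis=1) * A2.sum(axis=1)).astype(float)
    # nodes with no edges in either snapshot contribute zero overlap
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

def temporal_correlation(A1, A2):
    """Graph-level overlap normalised by the maximal number of active nodes
    in the two snapshots (the proposed adaptation), not by the total N."""
    active = max(int((A1.sum(axis=1) > 0).sum()),
                 int((A2.sum(axis=1) > 0).sum()))
    if active == 0:
        return 0.0
    return float(topological_overlap(A1, A2).sum() / active)

# Node 2 is isolated: dividing by N would wrongly cap the value below 1
A = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
B = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]])
print(temporal_correlation(A, A))  # 1.0 (identical snapshots)
print(temporal_correlation(A, B))  # 0.0 (completely changed edges)
```

The isolated node in the example is precisely the case where normalising by the total number of nodes breaks down.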

  4. A Novel Adaptive Elite-Based Particle Swarm Optimization Applied to VAR Optimization in Electric Power Systems

    Ying-Yi Hong

    2014-01-01

    Full Text Available Particle swarm optimization (PSO) has been successfully applied to solve many practical engineering problems. However, more efficient strategies are needed to coordinate global and local searches in the solution space when the studied problem is extremely nonlinear and highly dimensional. This work proposes a novel adaptive elite-based PSO approach. The adaptive elite strategies involve the following two tasks: (1) appending the mean search to the original approach and (2) pruning/cloning particles. The mean search, leading to stable convergence, helps the iterative process coordinate between the global and local searches. The mean of the particles and standard deviation of the distances between pairs of particles are utilized to prune distant particles. The best particle is cloned and it replaces the pruned distant particles in the elite strategy. To evaluate the performance and generality of the proposed method, four benchmark functions were tested by traditional PSO, chaotic PSO, differential evolution, and genetic algorithm. Finally, a realistic loss minimization problem in an electric power system is studied to show the robustness of the proposed method.
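
The two elite strategies named above (a mean-of-swarm search and pruning/cloning of distant particles) can be sketched on top of a standard PSO loop. This is a hedged illustration, not the authors' implementation; the parameter values, the pruning threshold (mean plus two standard deviations of distances to the swarm mean) and the sphere benchmark are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Benchmark objective to minimise."""
    return float(np.sum(x ** 2))

def elite_pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n, dim))          # particle positions
    v = np.zeros((n, dim))                    # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()        # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        # (1) mean search: try the swarm mean as an extra candidate
        mean_pt = x.mean(axis=0)
        if f(mean_pt) < pbest_f.min():
            worst = pbest_f.argmax()
            pbest[worst], pbest_f[worst] = mean_pt, f(mean_pt)
        g = pbest[pbest_f.argmin()].copy()
        # (2) prune particles far from the swarm mean; clone the best there
        d = np.linalg.norm(x - mean_pt, axis=1)
        distant = d > d.mean() + 2.0 * d.std()
        x[distant] = g + 0.01 * rng.standard_normal((int(distant.sum()), dim))
        v[distant] = 0.0
    return g, float(pbest_f.min())

best, best_f = elite_pso(sphere)
print(best_f)  # near zero for the sphere benchmark
```

Cloning near the current best concentrates the search locally, while the mean-search step keeps a globally informed candidate in play each iteration.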

  5. Study for applying microwave power saturation technique on fingernail/EPR dosimetry

    Park, Byeong Ryong; Choi, Hoon; Nam, Hyun Ill; Lee, Byung Ill [Radiation Health Research Institute, Seoul (Korea, Republic of)

    2012-10-15

    There is growing recognition worldwide of the need to develop effective dosimetry methods to assess unexpected exposure to radiation in the event of a large-scale event. One physically based dosimetry method, electron paramagnetic resonance (EPR) spectroscopy, has been applied to perform retrospective radiation dosimetry using extracted samples of tooth enamel and nail (fingernail and toenail), following radiation accidents and exposures resulting from weapon use, testing, and production. Human fingernails are composed largely of keratin, which consists of α-helical peptide chains that are twisted into a left-handed coil and strengthened by disulphide cross-links. Ionizing radiation generates free radicals in the keratin matrix, and these radicals are stable over a relatively long period (days to weeks). Most importantly, the number of radicals is proportional to the magnitude of the dose over a wide dose range (0-30 Gy). Also, dose can be estimated at four different locations on the human body, providing information on the homogeneity of the radiation exposure, and the results from EPR nail dosimetry are immediately available. However, a relatively large background signal (BKS), arising from the mechanically induced signal (MIS) created when the fingernail is cut, normally overlaps with the radiation-induced signal (RIS), making it difficult to estimate the dose from an accidental exposure accurately. As a result, estimation methods based on a dose-response curve have had difficulty ensuring reliability below 5 Gy. In this study, in order to overcome these disadvantages, we measured the responses of the RIS and BKS (MIS) as the microwave power level was varied, and investigated the applicability of the power saturation technique at low doses.

  6. An Adaptive Single-Well Stochastic Resonance Algorithm Applied to Trace Analysis of Clenbuterol in Human Urine

    Shaofei Xie

    2012-02-01

    Full Text Available Based on the theory of stochastic resonance, an adaptive single-well stochastic resonance (ASSR) algorithm coupled with a genetic algorithm was developed to enhance the signal-to-noise ratio of weak chromatographic signals. In conventional stochastic resonance algorithms, two or more parameters need to be optimized, and proper parameter values are obtained by an exhaustive search within a given range. In the developed ASSR, the optimization of the system parameter is simplified and automatically implemented. The ASSR was applied to the trace analysis of clenbuterol in human urine, where it significantly improved the limit of detection and limit of quantification of clenbuterol. The good linearity, precision and accuracy of the proposed method ensure that it can be an effective tool for trace analysis and for improving the detection sensitivity of current detectors.

  7. Plasma-based techniques applied to the determination of metals and metalloids in atmospheric aerosols

    Smichowski, Patricia, E-mail: smichows@cnea.gov.ar [Comision Nacional de Energia Atomica, Gerencia Quimica, Pcia de Buenos Aires (Argentina)

    2011-07-01

    Full text: This lecture presents an overview of the research carried out by our group during the last decade on the determination of metals, metalloids, ions and species in atmospheric aerosols and related matrices using plasma-based techniques. In our first studies we explored the application of a size fractionation procedure and the subsequent determination of minor, major and trace elements in samples of deposited particles collected one day after the eruption of the Copahue volcano, located on the Chile-Argentina border, to assess the content of relevant elements with respect to the environment and the health of the local population. We employed a multi-technique approach (ICP-MS, XRD and NAA) to gain complete information on the characteristics of the sample. In addition to the study of ashes emitted by natural sources, we also studied ashes of anthropogenic origin, such as those arising from coal combustion in thermal power plants. To estimate the behavior and fate of elements in atmospheric particles and ashes, we applied a chemical fractionation procedure in order to establish the distribution of many elements amongst the soluble, bound-to-carbonates, bound-to-oxides, bound-to-organic-matter and environmentally immobile fractions. Studies on the air quality of the mega-city of Buenos Aires were scarce and fragmentary, and our objective was, and still is, to contribute to clarifying key issues related to levels of crustal, toxic and potentially toxic elements in this air basin. Our findings were compared with the average concentrations of metals and metalloids reported for other Latin American cities such as Sao Paulo, Mexico and Santiago de Chile. In this context, a series of studies has been carried out since 2004 considering different sampling strategies to reflect local aspects of air pollution sources. In recent years, our interest has focused on the levels of traffic-related elements in the urban atmosphere. We have contributed with the first data

  8. Parametric Characterization of Porous 3D Bioscaffolds Fabricated by an Adaptive Foam Reticulation Technique

    Winnett, James; Mallick, Kajal K.

    2014-04-01

    Commercially pure titanium (Ti) and its alloys, in particular, titanium-vanadium-aluminium (Ti-6Al-4V), have been used as biomaterials due to their mechanical similarities to bone, good biocompatibility, and inertness in vivo. The introduction of porosity to the scaffolds leads to optimized mechanical properties and enhanced biological activity. The adaptive foam reticulation (AFR) technique has been previously used to generate hydroxyapatite bioscaffolds with enhanced cell behavior due to the generation of macroporous structures with microporous struts that provided routes for cell infiltration as well as attachment sites. Sacrificial polyurethane templates of 45 ppi and 90 ppi were coated in biomaterial-based slurries containing either Ti or Ti-6Al-4V as the biomaterial and camphene as the porogen. The resultant macropore sizes of 100-550 μm corresponded well with the initial template pore sizes while camphene produced micropores of 1-10 μm, with the level of microporosity related to the amount of porogen inclusion.

  9. Evaluation of Turbulence Models in Predicting Hypersonic and Subsonic Base Flows Using Grid Adaptation Techniques

    YOU Yancheng; BUANGA Björn; HANNEMANN Volker; LÜDEKE Heinrich

    2012-01-01

    The flows behind the base of a generic rocket, at both hypersonic and subsonic flow conditions, are numerically studied. The main concerns are the evaluation of turbulence models and the use of grid adaptation techniques. The investigation focuses on two configurations, related to hypersonic and subsonic experiments. The applicability tests of the different turbulence models are conducted at the level of two-equation models, calculating the steady-state solution of the Reynolds-averaged Navier-Stokes (RANS) equations. All models used, the original Wilcox k-ω, the Menter shear-stress transport (SST) and the explicit algebraic Reynolds stress model (EARSM) formulation, predict an asymmetric base flow in both cases, caused by the model support. A comparison with preliminary experimental results indicates a preference for the SST and EARSM results over those from the older k-ω model. Sensitivity studies show no significant influence of the grid topology or the location of the laminar-to-turbulent transition on the base flow field, but a strong influence of even small angles of attack is reported from the related experiments.

  10. Time-series-analysis techniques applied to nuclear-material accounting

    This document is designed to introduce the reader to the applications of Time Series Analysis techniques to Nuclear Material Accountability data. Time series analysis techniques are designed to extract information from a collection of random variables ordered by time by seeking to identify any trends, patterns, or other structure in the series. Since nuclear material accountability data is a time series, one can extract more information using time series analysis techniques than by using other statistical techniques. Specifically, the objective of this document is to examine the applicability of time series analysis techniques to enhance loss detection of special nuclear materials. An introductory section examines the current industry approach which utilizes inventory differences. The error structure of inventory differences is presented. Time series analysis techniques discussed include the Shewhart Control Chart, the Cumulative Summation of Inventory Differences Statistics (CUSUM) and the Kalman Filter and Linear Smoother
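
Of the techniques listed, the CUSUM statistic is the most compact to illustrate. Below is a sketch of a one-sided tabular CUSUM applied to a hypothetical series of inventory differences; the reference value k and decision threshold h are common textbook defaults, not values taken from this document:

```python
import numpy as np

def cusum_alarms(inventory_diffs, k=0.5, h=4.0):
    """One-sided tabular CUSUM on standardised inventory differences.
    k (slack) and h (decision threshold) are in units of the series'
    standard deviation; 0.5 and 4.0 are conventional textbook choices."""
    x = np.asarray(inventory_diffs, dtype=float)
    sd = x.std(ddof=1)
    z = x / sd if sd > 0 else x      # standardise about a zero target
    s, alarms = 0.0, []
    for i, zi in enumerate(z):
        s = max(0.0, s + zi - k)     # accumulate positive deviations only
        if s > h:
            alarms.append(i)         # sustained positive shift signalled
            s = 0.0                  # restart after an alarm
    return alarms

# Hypothetical balance periods: small noise, then a sustained positive shift
diffs = [0.1, -0.2, 0.05, 0.1, -0.1, 1.0, 1.2, 0.9, 1.1, 1.0, 0.95]
print(cusum_alarms(diffs))  # alarm indices fall inside the shifted segment
```

The accumulation is what gives CUSUM its advantage over period-by-period inventory-difference tests: a small but persistent loss that never trips a single-period limit still drives the statistic across h.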

  11. Automatic online adaptive radiation therapy techniques for targets with significant shape change: a feasibility study

    Court, Laurence E; Tishler, Roy B; Petit, Joshua; Cormack, Robert; Chin, Lee [Department of Radiation Oncology, Dana-Farber/Brigham and Women's Hospital Cancer Center, 75 Francis Street, ASBI-L2, Boston, MA 02115 (United States)

    2006-05-21

    This work looks at the feasibility of an online adaptive radiation therapy concept that would detect the daily position and shape of the patient, and would then correct the daily treatment to account for any changes compared with planning position. In particular, it looks at the possibility of developing algorithms to correct for large complicated shape change. For co-planar beams, the dose in an axial plane is approximately associated with the positions of a single multi-leaf collimator (MLC) pair. We start with a primary plan, and automatically generate several secondary plans with gantry angles offset by regular increments. MLC sequences for each plan are calculated keeping monitor units (MUs) and number of segments constant for a given beam (fluences are different). Bulk registration (3D) of planning and daily CT images gives global shifts. Slice-by-slice (2D) registration gives local shifts and rotations about the longitudinal axis for each axial slice. The daily MLC sequence is then created for each axial slice/MLC leaf pair combination, by taking the MLC positions from the pre-calculated plan with the nearest rotation, and shifting using a beam's-eye-view calculation to account for local linear shifts. A planning study was carried out using two head and neck region MR images of a healthy volunteer which were contoured to simulate a base-of-tongue treatment: one with the head straight (used to simulate the planning image) and the other with the head tilted to the left (the daily image). Head and neck treatment was chosen to evaluate this technique because of its challenging nature, with varying internal and external contours, and multiple degrees of freedom. Shape change was significant: on a slice-by-slice basis, local rotations in the daily image varied from 2 to 31 deg, and local shifts ranged from -0.2 to 0.5 cm and -0.4 to 0.0 cm in right-left and posterior-anterior directions, respectively. The adapted treatment gave reasonable target coverage (100
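
The per-slice adaptation step described above (pick the pre-computed plan with the nearest gantry rotation, then shift the MLC leaf pair by the local shift) can be sketched as follows; the data structures and numbers are illustrative assumptions, not the authors' code:

```python
from dataclasses import dataclass

@dataclass
class LeafPair:
    left: float   # leaf tip positions in cm, beam's-eye view
    right: float

def nearest_plan(rotation_deg, plan_offsets_deg):
    """Index of the pre-computed secondary plan whose gantry offset is
    closest to this slice's measured local rotation."""
    return min(range(len(plan_offsets_deg)),
               key=lambda i: abs(plan_offsets_deg[i] - rotation_deg))

def adapt_leaf_pair(pair, lateral_shift_cm):
    """Translate both leaves of a pair by the slice's local lateral shift
    (a simplification of the beam's-eye-view correction)."""
    return LeafPair(pair.left + lateral_shift_cm,
                    pair.right + lateral_shift_cm)

# Secondary plans pre-computed at 10-degree gantry increments; one axial
# slice was found rotated by 17 degrees and shifted laterally by 0.4 cm.
offsets = [0, 10, 20, 30]
idx = nearest_plan(17.0, offsets)                 # plan at 20 degrees
adapted = adapt_leaf_pair(LeafPair(-2.0, 3.0), 0.4)
print(idx, adapted.left, adapted.right)
```

Because each axial slice is handled independently, large complicated shape change reduces to many small per-slice rotation-plus-shift corrections, which is the core idea of the abstract.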

  12. The efficiency of Lutz, Kato-Katz and Baermann-Moraes (adapted techniques association to the diagnosis of intestinal helmints

    Henry Percy Willcox

    1991-12-01

    Full Text Available The association of the Lutz/Kato-Katz and Lutz/Baermann-Moraes (adapted) techniques was used to obtain improved results, ranging from 0.4 to 11 times, in the search for eggs of Ascaris lumbricoides, Schistosoma mansoni, Trichuris trichiura and Taenia sp. and for larvae of Strongyloides stercoralis.

  13. Aquifer Storage and Recovery as a Viable Climate Change Adaptation Technique: Sustainable Development under the Current Regulatory Framework

    A holistic investigation of aquifer storage and recovery (ASR) technique and application in the U.S. is being conducted as a part of the USEPA Water Resources Adaptation Program (WRAP). The research focus is to evaluate the potential of ASR application as a practical climate chan...

  14. PWM Technique of Five-Leg Inverter Applying Two-Arm Modulation

    Oka, Kazuo; Matsuse, Kouki

    This paper presents a simple pulse width modulation (PWM) technique for a five-leg inverter driving two three-phase AC motors independently. Normal PWM techniques for three-phase voltage source inverters cannot be used for the five-leg inverter with independent drive. In this paper, two simple and novel PWM techniques for the five-leg inverter with two three-phase AC motors are introduced. Experimental results are provided to illustrate the validity of the proposed PWM techniques.

  15. ADVANCES OF BASIC MOLECULAR BIOLOGY TECHNIQUES: POTENTIAL TO APPLY IN PLANT VIROID DETECTION IN SRI LANKA

    Yapa M.A.M. Wijerathna

    2012-12-01

    Full Text Available Viroids are the smallest pathogens of plants. They cause serious diseases of economically important plants worldwide. Prevention and detection of the pathogens are the best methods to reduce the economic loss from viroid infection. During the last decade, genetics and molecular biology techniques have gained an increasing presence in plant pathology research. The purpose of this review is to highlight the most up-to-date molecular biology techniques that have been used and studied recently. The most relevant published reports and practical techniques are presented here, with emphasis on the viroid detection techniques best suited for use in Sri Lanka.

  16. Overview of Post-Irradiation Examination Techniques Applied at PSI for Light Water Reactor Fuel Characterization

    Within long-term cooperation agreements, the PSI Laboratory for Material Behaviour (LWV) in the Department for Nuclear Energy and Safety (NES) has characterized pathfinder PWR and BWR fuel (meaning fuel and cladding) of the Swiss nuclear power stations Goesgen (KKG) and Leibstadt (KKL) in many PIE sequences. Based on these pathfinder fuel pin and lead assembly tests, the fuels in these reactors have been continuously improved over the years and today reach a batch burnup nearly twice as high as at the beginning of the 1980s. In addition to providing scientific analytical services with respect to accurate engineering fuel and cladding PIE data, PSI itself, as a research organization, undertakes basic and applied scientific research. In this context we focus on elucidating the cladding corrosion and hydriding mechanisms, the cladding mechanical aging processes and the fission gas diffusion process in the fuel. The PIE tools available at PSI consist of non-destructive methods (visual examination, gamma scanning, profilometry and EC defect and oxide-thickness measurements), puncturing and fission gas analysis, and destructive investigations of cut samples (metallography/ceramography, hydrogen hot gas extraction, electron probe micro-analysis (EPMA), secondary ion mass spectroscopy (SIMS), scanning and transmission electron microscopy (SEM and TEM), lately also laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS), x-ray absorption spectroscopy, and mechanical testing). While commercial high-burnup programs often request standard engineering data such as cladding oxide thickness values, hydrogen contents or fuel pin fission gas release values, PSI has improved such analytical techniques. Additionally, we have performed independent research in order to improve the fundamental understanding of the irradiation behaviour, e.g. by elucidating the corrosion process with detailed characterization of the cladding metal-oxide interface by TEM, by local

  17. Intelligent Adaptation and Personalization Techniques in Computer-Supported Collaborative Learning

    Demetriadis, Stavros; Xhafa, Fatos

    2012-01-01

    Adaptation and personalization have been extensively studied in the CSCL research community, aiming to design intelligent systems that adaptively support eLearning processes and collaboration. Yet, with the fast development of Internet technologies, especially the emergence of new data technologies and mobile technologies, new opportunities and perspectives have opened for advanced adaptive and personalized systems. Adaptation and personalization are posing new research and development challenges to today's CSCL systems. In particular, adaptation should be approached in a multi-dimensional way (cognitive, technological, context-aware and personal). Moreover, it should address the particularities of both individual learners and group collaboration. As a consequence, the aim of this book is twofold. On the one hand, it discusses the latest advances and findings in the area of intelligent adaptive and personalized learning systems. On the other hand, it analyzes the new implementation perspectives for intelligen...

  18. PIE Results and New Techniques Applied for 55GWd/t High Burnup Fuel of PWR

    Post-irradiation examinations (PIE) of 55 GWd/t high-burnup fuel that had been irradiated at a domestic PWR plant were conducted at the fuel hot laboratory of the Nuclear Development Corporation (NDC). In this PIE, new techniques such as the clamping method for the axial tensile test and a pellet density measurement method for high-burnup fuels were used in addition to existing techniques to confirm the integrity of the 55 GWd/t high-burnup fuel. The superiority of improved corrosion-resistant claddings over the currently used Zircaloy-4 claddings in terms of corrosion resistance was also confirmed. This paper describes the PIE results and the advanced PIE techniques. (author)

  19. A multiblock grid generation technique applied to a jet engine configuration

    Stewart, Mark E. M.

    1992-01-01

    Techniques are presented for quickly finding a multiblock grid for a 2D geometrically complex domain from geometrical boundary data. An automated technique for determining a block decomposition of the domain is explained. Techniques for representing this domain decomposition and transforming it are also presented. Further, a linear optimization method may be used to solve the equations which determine grid dimensions within the block decomposition. These algorithms automate many stages in the domain decomposition and grid formation process and limit the need for human intervention and inputs. They are demonstrated for the meridional or throughflow geometry of a bladed jet engine configuration.

  20. Quantitative thoracic CT techniques in adults: can they be applied in the pediatric population?

    Yoon, Soon Ho [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Goo, Jin Mo [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University College of Medicine, Cancer Research Institute, Jongno-gu, Seoul (Korea, Republic of); Goo, Hyun Woo [University of Ulsan College of Medicine, Department of Radiology and Research Institute of Radiology, Asan Medical Center, Seoul (Korea, Republic of)

    2013-03-15

    With the rapid evolution of the multidetector row CT technique, quantitative CT has started to be used in clinical studies for revealing a heterogeneous entity of airflow limitation in chronic obstructive pulmonary disease that is caused by a combination of lung parenchymal destruction and remodeling of the small airways in adults. There is growing evidence of a good correlation between quantitative CT findings and pathological findings, pulmonary function test results and other clinical parameters. This article provides an overview of current quantitative thoracic CT techniques used in adults, and how to translate these CT techniques to the pediatric population. (orig.)

  1. Calorimetric techniques applied to the thermodynamic study of interactions between proteins and polysaccharides

    Monique Barreto Santos

    2016-08-01

    Full Text Available ABSTRACT: The interactions between biological macromolecules are important for biotechnology, but further understanding is needed to maximize their utility. Calorimetric techniques provide information about these interactions through the thermal energy that is produced or consumed during the interaction. Notable techniques include differential scanning calorimetry, which generates a thermodynamic profile from temperature scanning, and isothermal titration calorimetry, which provides the thermodynamic parameters directly related to the interaction. This review describes how calorimetric techniques can be used to study interactions between proteins and polysaccharides, providing valuable insight into the thermodynamics of their interaction.

  2. Nde of Advanced Automotive Composite Materials that Apply Ultrasound Infrared Thermography Technique

    Choi, Seung-Hyun; Park, Soo-Keun; Kim, Jae-Yeol

    The infrared thermographic nondestructive inspection technique is a quality-inspection and stability-assessment method used to diagnose physical characteristics and defects by detecting the infrared radiation emitted from an object without destroying it. Recently, nondestructive inspection and assessment using the ultrasound-infrared thermography technique have been widely adopted in diverse areas. The ultrasound-infrared thermography technique exploits the phenomenon that ultrasound waves incident on an object with cracks or defects on a mating surface generate local heat at the surface. The car industry increasingly uses composite materials for their light weight, strength, and environmental resistance. In this study, a car piston, one of the composite-material car parts, was examined nondestructively using the ultrasound-infrared thermography technique. The study also examined the effects of frequency and power in order to optimize the nondestructive inspection.

  3. Applying of Reliability Techniques and Expert Systems in Management of Radioactive Accidents

    Accidents involving radioactive exposure vary widely in nature and scale, which makes them complex situations for radiation protection agencies or any responsible authority to handle. The situation becomes worse with the introduction of advanced, highly complex technology that provides the operator with huge amounts of information about the system being operated. This paper discusses the application of reliability techniques to radioactive risk management. The event tree technique from the nuclear field is described, as well as two techniques from non-nuclear fields, Hazard and Operability studies and Quality Function Deployment. The objective is to show the importance and applicability of these techniques in radiation risk management. Finally, expert systems in the field of accident management are explored and classified according to their applications.

  4. State of the Art Review for Applying Computational Intelligence and Machine Learning Techniques to Portfolio Optimisation

    Hurwitz, Evan

    2009-01-01

    Computational techniques have shown much promise in the field of finance, owing to their ability to extract sense out of dauntingly complex systems. This paper reviews the most promising of these techniques, from traditional computational intelligence methods to their machine learning siblings, with a particular view to their application in optimising the management of a portfolio of financial instruments. The current state of the art is assessed, and prospective further work is recommended.

  5. ADVANCES OF BASIC MOLECULAR BIOLOGY TECHNIQUES: POTENTIAL TO APPLY IN PLANT VIROID DETECTION IN SRI LANKA

    Yapa M.A.M. Wijerathna

    2012-01-01

    Viroids are the smallest known plant pathogens. They cause serious diseases of economically important plants worldwide. Prevention and detection of these pathogens are the best methods of reducing the economic losses from viroid infection. During the last decade, genetics and molecular biology techniques have gained an increasing presence in plant pathology research. The purpose of this review is to highlight the most up-to-date molecular biology techniques that have been used and studied recently. Most relevan...

  6. Measurement of the magnitude of force applied by students when learning a mobilisation technique

    Smit, E.; Conradie, M.; Wessels, J.; Witbooi, I.; Otto, R.

    2003-01-01

    Passive accessory intervertebral movements (PAIVMs) are frequently used by physiotherapists in the assessment and management of patients. Studies investigating the reliability of passive mobilisation techniques have shown conflicting results. Standardisation of PAIVMs is therefore essential for research and teaching purposes, and could result in better clinical management. In order to standardise graded passive mobilisation techniques, a reliable, easy-to-use, objective measurement tool...

  7. Flipped parameter technique applied on source localization in energy constraint sensor arrays

    Pavlović Vlastimir D.; Veličković Zoran S.

    2009-01-01

    In this paper a novel flipped parameter technique (FPT) for time delay estimation (TDE) in the source localization problem is described. We propose a passive source localization technique based on the development of an energy-efficient algorithm that can reduce intersensor and interarray communication. We propose a flipped parameter (FP), which can be defined for any sensor in distributed sensor subarrays during the observation period. Unlike classical TDE methods that evaluate the cross-correlation funct...

  8. An Innovations-Based Noise Cancelling Technique on Inverse Kepstrum Whitening Filter and Adaptive FIR Filter in Beamforming Structure

    Jinsoo Jeong

    2011-06-01

    This paper presents an acoustic noise cancelling technique using an inverse kepstrum system as an innovations-based whitening application for an adaptive finite impulse response (FIR) filter in a beamforming structure. The inverse kepstrum method uses an innovations-whitened form of one acoustic path transfer function between a reference microphone sensor and a noise source, so that the rear-end reference signal becomes a whitened sequence for a cascaded adaptive FIR filter in the beamforming structure. By using an inverse kepstrum filter as a whitening filter together with a delay filter, the cascaded adaptive FIR filter estimates only the numerator of the polynomial part of the ratio of the overall combined transfer functions. The test results show that the adaptive FIR filter is more effective in a beamforming structure than in an adaptive noise cancelling (ANC) structure in terms of distortion of the desired signal and reduction of noise with nonminimum-phase components. In addition, the inverse kepstrum method reaches almost the same convergence level in the estimate of noise statistics with fewer adaptive FIR filter weights than the kepstrum method, and hence offers better computational simplicity in processing. Furthermore, the rear-end inverse kepstrum method in the beamforming structure shows less distortion of the desired signal than either the front-end kepstrum method or the front-end inverse kepstrum method in the beamforming structure.
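
The whitening-plus-adaptive-FIR idea above rests on the standard LMS update. As a minimal sketch, here is plain LMS noise cancelling with a white reference signal (not the kepstrum-whitened beamforming variant of the paper; the function name, filter length, and step size are illustrative):

```python
import numpy as np

def lms_noise_canceller(primary, reference, n_taps=16, mu=0.01):
    """Adaptive FIR noise canceller: estimates the noise component of
    `primary` from the correlated `reference` and subtracts it."""
    w = np.zeros(n_taps)
    error = np.zeros(len(primary))
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # current and past samples
        y = w @ x                                  # noise estimate
        e = primary[n] - y                         # error = desired-signal estimate
        w += 2 * mu * e * x                        # LMS weight update
        error[n] = e
    return error, w

# Toy usage: a sinusoid buried in filtered noise whose raw form also
# reaches a reference sensor.
rng = np.random.default_rng(0)
t = np.arange(4000)
noise = rng.standard_normal(4000)
primary = np.sin(2 * np.pi * 0.01 * t) + np.convolve(noise, [0.8, 0.3], mode="same")
cleaned, _ = lms_noise_canceller(primary, noise)
```

After convergence the error output approaches the sinusoid, i.e. the desired signal with the correlated noise removed.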

  9. Non-invasive near-field measurement setup based on modulated scatterer technique applied to microwave tomography

    Memarzadeh-Tehran, Hamidreza

    The main focus of this thesis is to address the design and development of a near-field (NF) imaging setup based on the modulated scatterer technique (MST). MST is a well-known approach used in applications where accurate and perturbation-free measurement results are necessary. Of the possible implementations available for making an MST probe, including electrical, optical and mechanical, the optically modulated scatterer OMS was considered in order to provide nearly perturbation-free measurement due to the invisibility of optical fiber to the radio-frequency electromagnetic fields. The OMS probe consists of a commercial, off-the-shelf (COTS) photodiode chip (nonlinear device), a short-dipole antenna acting as a scatterer and a matching network (passive circuit). The latter improves the scattering properties and also increases the sensitivity of the OMS probe within the frequency range in which the matching network is optimized. The radiation characteristics of the probe, including cross-polarization response and omnidirectional sensitivity, were both theoretically and experimentally investigated. Finally, the performance and reliability of the probe was studied by comparing measured near-field distributions on a known field distribution with simulations. Increased imaging speed was obtained using an array of OMS probes, which reduces mechanical movements. Mutual-coupling, switching time and shadowing effect, which all may affect the performance of the array, were investigated. Then, the results obtained by the array were validated in a NF imager by measuring the E-field distribution of an antenna under test (AUT) and comparing it with a simulation. Calibration and data averaging were applied to raw data to compensate the probes for uncertainties in fabrication and interaction between array/AUT and array/receiving antenna. Dynamic range and linearity of the developed NF imager was improved by adding a carrier canceller circuit to the front-end of the receiver. 

  10. Test techniques: A survey paper on cryogenic tunnels, adaptive wall test sections, and magnetic suspension and balance systems

    Kilgore, Robert A.; Dress, David A.; Wolf, Stephen W. D.; Britcher, Colin P.

    1989-01-01

    The ability to get good experimental data in wind tunnels is often compromised by things seemingly beyond our control. Inadequate Reynolds number, wall interference, and support interference are three of the major problems in wind tunnel testing. Techniques for solving these problems are available. Cryogenic wind tunnels solve the problem of low Reynolds number. Adaptive wall test sections can go a long way toward eliminating wall interference. A magnetic suspension and balance system (MSBS) completely eliminates support interference. Cryogenic tunnels, adaptive wall test sections, and MSBS are surveyed. A brief historical overview is given and the present state of development and application in each area is described.

  11. Performance values for non destructive assay (NDA) techniques applied to safeguards: the 2002 evaluation by the ESARDA NDA Working Group

    The first evaluation of NDA performance values undertaken by the ESARDA Working Group for Standards and Non Destructive Assay Techniques (WGNDA) was published in 1993. Almost 10 years later the Working Group decided to review those values, to report about improvements and to issue new performance values for techniques which were not applied in the early nineties, or were at that time only emerging. Non-Destructive Assay techniques have become more and more important in recent years, and they are used to a large extent in nuclear material accountancy and control both by operators and control authorities. As a consequence, the performance evaluation for NDA techniques is of particular relevance to safeguards authorities in optimising Safeguards operations and reducing costs. Performance values are important also for NMAC regulators, to define detection levels, limits for anomalies, goal quantities and to negotiate basic audit rules. This paper presents the latest evaluation of ESARDA Performance Values (EPVs) for the most common NDA techniques currently used for the assay of nuclear materials for Safeguards purposes. The main topics covered by the document are: techniques for plutonium bearing materials: PuO2 and MOX; techniques for U-bearing materials; techniques for U and Pu in liquid form; techniques for spent fuel assay. This issue of the performance values is the result of specific international round robin exercises, field measurements and ad hoc experiments, evaluated and discussed in the ESARDA NDA Working Group. (author)

  12. Applying data mining techniques to medical time series: an empirical case study in electroencephalography and stabilometry.

    Anguera, A; Barreiro, J M; Lara, J A; Lizcano, D

    2016-01-01

    One of the major challenges in the medical domain today is how to exploit the huge amount of data that this field generates. To do this, approaches are required that are capable of discovering knowledge that is useful for decision making in the medical field. Time series are data types that are common in the medical domain and require specialized analysis techniques and tools, especially if the information of interest to specialists is concentrated within particular time series regions, known as events. This research followed the steps specified by the so-called knowledge discovery in databases (KDD) process to discover knowledge from medical time series derived from stabilometric (396 series) and electroencephalographic (200) patient electronic health records (EHR). The view offered in the paper is based on the experience gathered as part of the VIIP project. Knowledge discovery in medical time series has a number of difficulties and implications that are highlighted by illustrating the application of several techniques that cover the entire KDD process through two case studies. This paper illustrates the application of different knowledge discovery techniques for the purposes of classification within the above domains. The accuracy of this application for the two classes considered in each case is 99.86% and 98.11% for epilepsy diagnosis in the electroencephalography (EEG) domain and 99.4% and 99.1% for early-age sports talent classification in the stabilometry domain. The KDD techniques achieve better results than other traditional neural network-based classification techniques. PMID:27293535
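
The KDD steps described above (extract features from raw series, then classify) can be illustrated with a deliberately simple sketch; the features, the nearest-centroid classifier, and the toy data below are invented for illustration and are not the VIIP project's methods:

```python
import numpy as np

def extract_features(series):
    """Crude per-series features standing in for event-based KDD features."""
    return np.array([series.mean(), series.std(), np.abs(np.diff(series)).mean()])

def nearest_centroid_fit(X, y):
    """Compute one centroid per class in feature space."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(classes, centroids, X):
    """Assign each sample to the class of its nearest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Toy data: class 0 = low-variance series, class 1 = high-variance series.
rng = np.random.default_rng(1)
series = [rng.standard_normal(200) * (0.2 if c == 0 else 1.5) for c in (0, 1) * 50]
labels = np.array([0, 1] * 50)
X = np.array([extract_features(s) for s in series])
classes, centroids = nearest_centroid_fit(X[:60], labels[:60])
pred = nearest_centroid_predict(classes, centroids, X[60:])
accuracy = (pred == labels[60:]).mean()
```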

  13. Statistical learning techniques applied to epidemiology: a simulated case-control comparison study with logistic regression

    Land Walker H

    2011-01-01

    Abstract Background When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL) techniques with kernels are capable of addressing nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information-embedding characteristics and separation boundaries produced by a specific SL technique with logistic regression (LR) modeling, representing a parametric approach. The SL technique comprised a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. Results The SL approach is capable of generating odds ratios for main effects and risk-factor interactions that better capture nonlinear relationships between exposure variables and outcome than LR. Conclusions The integration of SL methods in epidemiology may improve both the understanding and interpretation of complex exposure/disease relationships.
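
The LR baseline that the SL method is compared against reduces to a short computation: fit the model, then exponentiate a coefficient to obtain its odds ratio. A minimal sketch with simulated exposure data (plain gradient-ascent fitting; the function names and simulation parameters are illustrative, not the paper's design):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-ascent logistic regression; returns (intercept, coefs)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)   # average log-likelihood gradient
    return w[0], w[1:]

# Simulated data: exposure doubles the odds of disease per unit
# (true log-odds ratio = ln 2).
rng = np.random.default_rng(0)
exposure = rng.standard_normal((5000, 1))
logit = -0.5 + np.log(2.0) * exposure[:, 0]
y = (rng.random(5000) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
intercept, coefs = fit_logistic(exposure, y)
odds_ratio = np.exp(coefs[0])   # should land near the true value of 2.0
```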

  14. Flipped parameter technique applied on source localization in energy constraint sensor arrays

    Pavlović Vlastimir D.

    2009-01-01

    In this paper a novel flipped parameter technique (FPT) for time delay estimation (TDE) in the source localization problem is described. We propose a passive source localization technique based on the development of an energy-efficient algorithm that can reduce intersensor and interarray communication. We propose a flipped parameter (FP), which can be defined for any sensor in distributed sensor subarrays during the observation period. Unlike classical TDE methods that evaluate a cross-correlation function, FPT requires an evaluation based on a single sensor signal. The computed cross-correlation between a signal and its analytic 'flipped' pair (the flipped correlation) is a smooth function whose peak (the time delay) can be accurately detected. Flipped parameters are sufficient to determine all differential delays of the signals related to the same source. The flipped parameter technique can be used successfully in two-step methods of passive source localization with significantly less energy than the classic cross-correlation. The FPT method is especially significant for energy-constrained distributed sensor subarrays. Using synthetic seismic signals, we illustrate the source localization error of the classical and the proposed methods in the presence of noise. Using real signals, we demonstrate the performance improvement of the proposed technique over the classic methods in noisy environments. The proposed technique gives accurate results for both coherent and non-coherent signals.
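
For contrast, the classic cross-correlation TDE baseline that FPT is compared against can be sketched in a few lines. This is the conventional two-sensor method, not the flipped-parameter algorithm itself, and the synthetic signals are illustrative:

```python
import numpy as np

def estimate_delay(x, y):
    """Classic TDE: the delay d (in samples) such that y[n] ~ x[n - d],
    found as the lag of the cross-correlation peak."""
    xc = np.correlate(y - y.mean(), x - x.mean(), mode="full")
    return int(xc.argmax()) - (len(x) - 1)

# Synthetic source signal observed at two sensors with a 7-sample offset.
rng = np.random.default_rng(0)
s = rng.standard_normal(1000)
delay = 7
x = np.concatenate([s, np.zeros(delay)])   # near sensor
y = np.concatenate([np.zeros(delay), s])   # far sensor: delayed copy
est = estimate_delay(x, y)
```

In a two-step localization method, such differential delays from several sensor pairs are then fed to a geometric solver; FPT's point is to obtain the delays without exchanging full signals between sensors.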

  15. Resolution enhancement for ultrasonic echographic technique in non destructive testing with an adaptive deconvolution method

    The ultrasonic echographic technique has specific advantages which make it essential in many Non Destructive Testing (NDT) investigations. However, the high acoustic power necessary to propagate through highly attenuating media can only be transmitted by resonant transducers, which induces severe limitations of the resolution on the received echograms. This resolution may be improved with deconvolution methods. But one-dimensional deconvolution methods run into problems in non destructive testing when the investigated medium is highly anisotropic and inhomogeneous (i.e. austenitic steel). Numerous deconvolution techniques are well documented in the NDT literature. But they often come from other application fields (biomedical engineering, geophysics) and we show they do not apply well to specific NDT problems: frequency-dependent attenuation and the non-minimum phase of the emitted wavelet. We therefore introduce a new time-domain approach which takes the wavelet features into account. Our method treats the deconvolution problem as an estimation problem and is performed in two steps: (i) A phase correction step which takes into account the phase of the wavelet and estimates a phase-corrected echogram. The phase of the wavelet is only due to the transducer and is assumed time-invariant during the propagation. (ii) A band equalization step which restores the spectral content of the ideal reflectivity. The two steps of the method are performed using fast Kalman filters, which allow a significant reduction of the computational effort. Synthetic and actual results are given to prove that this is a good approach for resolution improvement in attenuating media
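
The band-equalization idea can be illustrated with a simpler frequency-domain stand-in. The sketch below uses Wiener deconvolution rather than the paper's fast Kalman filters, and assumes a known, noiseless synthetic wavelet (all parameters are illustrative):

```python
import numpy as np

def wiener_deconvolve(echogram, wavelet, noise_power=1e-2):
    """Frequency-domain Wiener deconvolution: a simple stand-in for the
    band-equalization step (the paper itself uses fast Kalman filters)."""
    n = len(echogram)
    W = np.fft.fft(wavelet, n)           # wavelet spectrum, zero-padded
    E = np.fft.fft(echogram)
    H = np.conj(W) / (np.abs(W) ** 2 + noise_power)   # Wiener inverse filter
    return np.real(np.fft.ifft(E * H))

# Toy reflectivity: two close reflectors blurred by a resonant wavelet.
t = np.arange(64)
wavelet = np.exp(-0.5 * ((t - 8) / 2.0) ** 2) * np.cos(0.8 * t)
reflectivity = np.zeros(256)
reflectivity[[100, 110]] = [1.0, -0.7]
echogram = np.convolve(reflectivity, wavelet)[:256]
estimate = wiener_deconvolve(echogram, wavelet, noise_power=1e-3)
```

The deconvolved trace resolves the two reflectors that overlap in the raw echogram, which is the resolution-enhancement effect the abstract describes.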

  16. Schlieren technique applied to the arc temperature measurement in a high energy density cutting torch

    Plasma temperature and radial density profiles of the plasma species in a high-energy-density cutting arc have been obtained by using a quantitative schlieren technique. A Z-type two-mirror schlieren system was used in this research. Owing to its high sensitivity, the technique allows plasma composition and temperature to be measured from the arc axis out to the surrounding medium by processing the gray-level contrast values of digital schlieren images recorded at the observation plane for a given position of a transverse knife edge located at the exit focal plane of the system. The technique provided a good visualization of the plasma flow emerging from the nozzle and of its interactions with the surrounding medium and the anode. The obtained temperature values are in good agreement with values previously obtained by the authors on the same torch using Langmuir probes.

  17. Reformulation linearization technique based branch-and-reduce approach applied to regional water supply system planning

    Lan, Fujun; Bayraksan, Güzin; Lansey, Kevin

    2016-03-01

    A regional water supply system design problem that determines pipe and pump design parameters and water flows over a multi-year planning horizon is considered. A non-convex nonlinear model is formulated and solved by a branch-and-reduce global optimization approach. The lower bounding problem is constructed via a three-pronged effort that involves transforming the space of certain decision variables, polyhedral outer approximations, and the Reformulation Linearization Technique (RLT). Range reduction techniques are employed systematically to speed up convergence. Computational results demonstrate the efficiency of the proposed algorithm; in particular, the critical role range reduction techniques could play in RLT based branch-and-bound methods. Results also indicate using reclaimed water not only saves freshwater sources but is also a cost-effective non-potable water source in arid regions. Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/0305215X.2015.1016508.

  18. Energy saving techniques applied over a nation-wide mobile network

    Perez, Eva; Frank, Philipp; Micallef, Gilbert;

    2014-01-01

    Traffic carried over wireless networks has grown significantly in recent years and current forecasts show that this trend is expected to continue. However, the rapid mobile data explosion and the need for higher data rates come at a cost of increased complexity and energy consumption of the mobile networks. Although base station equipment is improving its energy efficiency by means of new power amplifiers and increased processing power, additional techniques are required to further reduce the energy consumption. In this paper, we evaluate different energy saving techniques and study their impact on the energy consumption based on a nation-wide network of a leading European operator. By means of an extensive analysis, we show that with the proposed techniques significant energy savings can be realized.

  19. Applied techniques for high bandwidth data transfers across wide area networks

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. Ensuring that the data is there in time for the computation in today's Internet is a massive problem. From our work developing a scalable distributed network cache, we have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). In this paper, we discuss several hardware and software design techniques and issues, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. We also describe results from two applications using these techniques, which were obtained at the Supercomputing 2000 conference

  20. New Control Technique Applied in Dynamic Voltage Restorer for Voltage Sag Mitigation

    Rosli Omar

    2010-01-01

    The Dynamic Voltage Restorer (DVR) is a power-electronics device that can dynamically compensate voltage sags on critical loads. The DVR consists of a VSC, injection transformers, passive filters and energy storage (a lead-acid battery). By injecting an appropriate voltage, the DVR restores the voltage waveform and ensures a constant load voltage. Many types of control techniques are used in DVRs for mitigating voltage sags, and the efficiency of the DVR depends on the efficiency of the control technique involved in switching the inverter. Problem statement: Simulation and experimental investigation toward the development of new algorithms based on SVPWM, understanding the nature of the DVR, and performance comparison between the various controller technologies available. The proposed controller using space vector modulation techniques obtains higher amplitude modulation indexes than conventional SPWM techniques. Moreover, space vector modulation techniques can easily be implemented using digital processors, and space vector PWM can produce about 15% higher output voltage than standard sinusoidal PWM. Approach: The purpose of this research was to study the implementation of SVPWM in the DVR. The proposed control algorithm was investigated through computer simulation using the PSCAD/EMTDC software. Results: Simulation and experimental results showed the effectiveness and efficiency of the proposed SVPWM-based controller in mitigating voltage sags in low-voltage distribution systems. The controller works well under both balanced and unbalanced voltage conditions. Conclusion/Recommendations: The simulation and experimental results of a DVR, obtained with the PSCAD/EMTDC software and based on the SVPWM technique, showed clearly the performance of the DVR in mitigating voltage sags. The DVR operates without any difficulty, injecting the appropriate voltage component to correct rapidly any anomaly in the supply voltage to keep the
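
The SVPWM switching itself follows textbook formulas: locate the 60-degree sector containing the reference voltage vector, then compute the dwell times of the two adjacent active vectors and the zero vector over one switching period. A minimal sketch of that calculation (generic textbook SVPWM, not the authors' DVR controller; the numeric values are illustrative):

```python
import math

def svpwm_dwell_times(v_ref, theta, v_dc, t_s=1.0):
    """Dwell times of the two adjacent active vectors (t1, t2) and the
    zero vector (t0) for one switching period t_s."""
    sector = int(theta // (math.pi / 3)) % 6        # sectors 0..5
    theta_in = theta - sector * math.pi / 3         # angle inside the sector
    m = math.sqrt(3) * v_ref / v_dc                 # modulation index
    t1 = t_s * m * math.sin(math.pi / 3 - theta_in)
    t2 = t_s * m * math.sin(theta_in)
    return sector, t1, t2, t_s - t1 - t2

# Example: reference vector at 40 degrees, well inside the linear range.
sector, t1, t2, t0 = svpwm_dwell_times(v_ref=115.0, theta=math.radians(40), v_dc=400.0)
```

At the linear-range limit the reference magnitude reaches V_dc/sqrt(3), versus V_dc/2 for sinusoidal PWM, which is where the roughly 15% higher output voltage quoted above comes from.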

  1. Applied predictive analytics principles and techniques for the professional data analyst

    Abbott, Dean

    2014-01-01

    Learn the art and science of predictive analytics - techniques that get results. Predictive analytics is what translates big data into meaningful, usable business information. Written by a leading expert in the field, this guide examines the science of the underlying algorithms as well as the principles and best practices that govern the art of predictive analytics. It clearly explains the theory behind predictive analytics, teaches the methods, principles, and techniques for conducting predictive analytics projects, and offers tips and tricks that are essential for successful predictive modeling

  2. UP1, an example of advanced techniques applied to high level activity dismantling

    The UP1 plant on the CEA Marcoule site was dedicated to the processing of spent fuels from the G1, G2 and G3 plutonium-producing reactors. The plant comprises 20,000 m2 of workshops housing about 1,000 hot cells. In 1998, a huge program for the dismantling and clean-up of the UP1 plant was launched. The CEA has developed new techniques to cope with the complexity of the dismantling operations. These techniques include immersive virtual reality, laser cutting, a specific manipulator arm called MAESTRO, and remote handling. (A.C.)

  3. Innovative vibration technique applied to polyurethane foam as a viable substitute for conventional fatigue testing

    Peralta, Alexander; Just-Agosto, Frederick; Shafiq, Basir; Serrano, David

    2012-12-01

    Lifetime prediction using three-point bending (TPB) can at times be prohibitively time consuming and costly, whereas vibration testing at higher frequency may potentially save time and revenue. A vibration technique that obtains lifetimes that reasonably match those determined under flexural TPB fatigue is developed. The technique designs the specimen with a procedure based on shape optimization and finite element analysis. When the specimen is vibrated in resonance, a stress pattern that mimics the stress pattern observed under conventional TPB fatigue testing is obtained. The proposed approach was verified with polyurethane foam specimens, resulting in an average error of 4.5% when compared with TPB.

  4. Development of Characterization Techniques of Thermodynamic and Physical Properties Applied to the CO2-DMSO Mixture

    Calvignac, Brice; Rodier, Elisabeth; Letourneau, Jean-Jacques; Fages, Jacques

    2009-01-01

    This work is focused on the development of new techniques for characterizing physical and thermodynamic properties. These techniques have been validated using the binary system DMSO-CO2, for which several characterization studies are well documented. We focused on the DMSO-rich phase and carried out measurements of volumetric expansion, density, viscosity and CO2 solubility at 298.15, 308.15 and 313.15 K and pressures up to 9 MPa. The experimental procedu...

  5. Review of Intelligent Techniques Applied for Classification and Preprocessing of Medical Image Data

    H S Hota; Shukla, S.P.; Kajal Gulhare

    2013-01-01

    Medical image data like ECG, EEG and MRI or CT-scan images are the most important means of diagnosing human disease precisely and are widely used by physicians. Problems can be clearly identified with the help of these medical images. A robust model can classify the medical image data in a better way. In this paper intelligent techniques like neural networks and fuzzy logic are explored for MRI medical image data to identify tumors in the human brain. The need for preprocessing of me...

  6. An Adaptive Clutter Suppression Technique for Moving Target Detector in Pulse Doppler Radar

    A. Mandal

    2014-04-01

    An adaptive system processes received signals, which are accompanied by clutter, using an architecture with time-varying parameters. In this paper, an adaptive moving target detector has been designed to meet the challenges of target detection amid various levels of clutter. The approach is able to overcome the inherent limitations of conventional systems (e.g. Moving Target Indicator, Fast Fourier Transform) that have predefined coefficients. For this purpose an optimal design of a transversal filter is proposed, along with various weight-selection maps, to improve the probability of detection in ground-based surveillance radar. A modified-LMS-algorithm-based adaptive FIR filter has been implemented, utilizing a modular CORDIC unit as the main processing element for filtering as well as weight updating, to suppress clutter of various intensities. Extensive MATLAB simulations have been performed using various levels of clutter input to show the effectiveness of the adaptive moving target detector (AMTD).
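
The CORDIC unit mentioned above computes rotations with shift-and-add iterations only, which is why it suits hardware filter implementations. A minimal floating-point sketch of rotation-mode CORDIC (illustrative, not the authors' fixed-point design; valid for angles within roughly +/-99 degrees):

```python
import math

def cordic_sin_cos(angle, n_iter=24):
    """Rotation-mode CORDIC: computes (cos, sin) using only additions
    and multiplications by powers of two, as a hardware unit would."""
    # Precomputed arctangent table and gain compensation factor.
    atans = [math.atan(2.0 ** -i) for i in range(n_iter)]
    k = 1.0
    for i in range(n_iter):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, angle        # start pre-scaled so the result is unit length
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0            # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x, y

c, s = cordic_sin_cos(math.radians(30))
```

In hardware the multiplications by 2^-i become bit shifts, so each iteration costs only shifts and adds plus one small table lookup.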

  7. ADAPTING E-COURSES USING DATA MINING TECHNIQUES - PDCA APPROACH AND QUALITY SPIRAL

    Marija Blagojevic; Zivadin Micic

    2013-01-01

    This paper presents an approach to adapting e-courses based on the original PDCA (Plan, Do, Check, Act) platform and quality spiral. An algorithm for the adaptation of e-courses was proposed and implemented in the Moodle Learning Management System at the Faculty of Technical Sciences, Cacak. The approach is primarily based on improving LMS (Learning Management System) or e-learning systems by modifying the electronic structure of the courses through predicting the behaviour patterns of the us...

  8. Robust Steering Vector Mismatch Techniques for Reduced Rank Adaptive Array Signal Processing

    Nguyen, Hien

    2002-01-01

    The research presented in this dissertation is on the development of advanced reduced rank adaptive signal processing for airborne radar space-time adaptive processing (STAP) and steering vector mismatch robustness. This is an important area of research in the field of airborne radar signal processing since practical STAP algorithms should be robust against various kinds of mismatch errors. The clutter return in an airborne radar has widely spread Doppler frequencies; therefore STAP, a two-di...

  9. Adapting desorption mass spectrometry and pattern recognition techniques to petroleum fluid correlation studies

    Hickey, J.C.; Durfee, S.L.

    1987-05-01

    Petroleum explorationists are often faced with determining the relationship between the products of wells completed in lithologies that may have some spatial or communicative relationship. Conventional methods of sampling and analysis are often time consuming and expensive. A new method for the sampling, analysis, and computerized data interpretation of the C2-C16 fraction of crude oil and natural gas is reported here. Controlled temperature headspace sampling of crude oils and direct pressure equilibrated natural gas exposure of carbon adsorption wires has been successfully applied to the sampling of the volatile fractions of petroleum fluids. Thermal vacuum desorption followed by mass spectrometric analysis of these volatile organic compounds is a rapid and sensitive method for obtaining detailed information of the distribution (fingerprint) of the components in a given sample; however, the resulting information is too complex for direct human interpretation. Techniques of computerized chemical pattern recognition such as principal components analysis (PCA) with graphical rotation, discriminant analysis, and similarity analysis (SIMCA) have proven useful in establishing the relationships between potentially correlated samples via the fingerprints of their volatile fractions. Studies have been conducted on multiple samples from numerous continental basins. The results of several of these studies will be presented to demonstrate the applicability of this new, rapid, cost-efficient approach to correlation studies.
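
The PCA step of such pattern recognition is compact to sketch. Below, invented four-component 'fingerprints' from two hypothetical oil families are projected onto their principal components (illustrative only; the discriminant analysis and SIMCA steps of the study are not shown):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project samples onto the top principal components (via SVD of the
    mean-centered data matrix)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy 'fingerprints': two oil families with different component distributions.
rng = np.random.default_rng(0)
family_a = rng.normal(loc=[5, 1, 0, 2], scale=0.3, size=(20, 4))
family_b = rng.normal(loc=[1, 4, 3, 0], scale=0.3, size=(20, 4))
scores = pca_scores(np.vstack([family_a, family_b]))
# The first principal component separates the two families.
separation = scores[:20, 0].mean() - scores[20:, 0].mean()
```

Correlated samples cluster together in the score plot, which is the visual basis for the correlation studies described above.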

  10. Wavelet Techniques Applied to Modeling Transitional/Turbulent Flows in Turbomachinery

    1996-01-01

    Computer simulation is an essential part of the design and development of jet engines for the aeropropulsion industry. Engineers concerned with calculating the flow in jet engine components, such as compressors and turbines, need simple engineering models that accurately describe the complex flow of air and gases and that allow them to quickly estimate loads, losses, temperatures, and other design parameters. In this ongoing collaborative project, advanced wavelet analysis techniques are being used to gain insight into the complex flow phenomena. These insights, which cannot be achieved by commonly used methods, are being used to develop innovative new flow models and to improve existing ones. Wavelet techniques are very suitable for analyzing the complex turbulent and transitional flows pervasive in jet engines. These flows are characterized by intermittency and a multitude of scales. Wavelet analysis results in information about these scales and their locations. The distribution of scales is equivalent to the frequency spectrum provided by commonly used Fourier analysis techniques; however, no localization information is provided by Fourier analysis. In addition, wavelet techniques allow conditional sampling analyses of the individual scales, which is not possible by Fourier methods.
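
The localization property that distinguishes wavelets from Fourier analysis is easy to demonstrate with the simplest wavelet, the Haar pair. A minimal sketch with an invented signal and a single decomposition level:

```python
import numpy as np

def haar_level(signal):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    even, odd = signal[::2], signal[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

# A short high-frequency burst inside a smooth signal: the detail
# coefficients are large only near the burst, giving both scale and
# *location* information, unlike a Fourier spectrum.
t = np.arange(256)
signal = np.sin(2 * np.pi * t / 64.0)
signal[128:136] += np.sin(2 * np.pi * t[128:136] / 4.0)
approx, detail = haar_level(signal)
burst_index = int(np.abs(detail).argmax())   # near 128 // 2 in the half-rate axis
```

Repeating the split on the approximation coefficients yields the multi-scale decomposition, with each level's detail coefficients localizing activity at one scale.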

  11. Review of Intelligent Techniques Applied for Classification and Preprocessing of Medical Image Data

    H S Hota

    2013-01-01

    Medical image data like ECG, EEG and MRI or CT-scan images are the most important means of diagnosing human disease precisely and are widely used by physicians. Problems can be clearly identified with the help of these medical images. A robust model can classify the medical image data in a better way. In this paper intelligent techniques like neural networks and fuzzy logic are explored for MRI medical image data to identify tumors in the human brain, and the need for preprocessing of medical image data is discussed. Classification techniques have been used extensively in the field of medical imaging. The conventional method in medical science for classifying medical image data is human inspection, which may sometimes misclassify the data; such problem identification is impractical for large amounts of data and for noisy data. Noisy data may be produced by a technical fault of the machine or by human error, and can lead to misclassification of medical image data. We have collected a number of papers based on neural networks and fuzzy logic, along with hybrid techniques, to explore the efficiency and robustness of the models for brain MRI data. The analysis indicates that an intelligent model combined with data preprocessing using principal component analysis (PCA) and segmentation may be the most competitive model in this domain.

  12. X-ray micro-beam techniques and phase contrast tomography applied to biomaterials

    Fratini, Michela; Campi, Gaetano; Bukreeva, Inna; Pelliccia, Daniele; Burghammer, Manfred; Tromba, Giuliana; Cancedda, Ranieri; Mastrogiacomo, Maddalena; Cedola, Alessia

    2015-12-01

A deeper comprehension of the biomineralization (BM) process underpins developments in tissue engineering and regenerative medicine. Several in-vivo and in-vitro studies have been dedicated to this purpose via the application of 2D and 3D diagnostic techniques. Here, we develop a new methodology based on different complementary experimental techniques (X-ray phase-contrast tomography, micro-X-ray diffraction and micro-X-ray fluorescence scanning) coupled to new analytical tools. A qualitative and quantitative structural investigation, from the atomic to the micrometric length scale, is obtained for engineered bone tissues. The high spatial resolution achieved by the X-ray scanning techniques allows us to monitor bone formation at the first-formed mineral deposit at the organic-mineral interface within a porous scaffold. This work aims at providing a full comprehension of the morphology and functionality of the biomineralization process, which is of key importance for developing new drugs for preventing and healing bone diseases and for the development of bio-inspired materials.

  13. Nuclear and conventional techniques applied to the analysis of Purhepecha metals of the Pareyon collection

The main objective of this investigation was to determine the composition and microstructure of 13 metallic artifacts by means of the nuclear techniques PIXE and RBS, together with conventional techniques. The artifacts, made of copper and gold, formed part of the offering of a Tarascan personage found in the 'Matamoros' porch in Uruapan, Michoacan, Mexico. (Author)

  14. Applying the Management-by-Objectives Technique in an Industrial Library

    Stanton, Robert O.

    1975-01-01

    An experimental "management-by-objectives" performance system was operated by the Libraries and Information Systems Center of Bell Laboratories during 1973. It was found that, though the system was very effective for work planning and the development of people, difficulties were encountered in applying it to certain classes of employees. (Author)

  15. Time-lapse motion picture technique applied to the study of geological processes

    Miller, R.D.; Crandell, D.R.

    1959-01-01

    Light-weight, battery-operated timers were built and coupled to 16-mm motion-picture cameras having apertures controlled by photoelectric cells. The cameras were placed adjacent to Emmons Glacier on Mount Rainier. The film obtained confirms the view that exterior time-lapse photography can be applied to the study of slow-acting geologic processes.

  16. Impact of adaptive proactive reconfiguration technique on Vmin and lifetime of SRAM caches

    Pouyan, Peyman; Amat Bertran, Esteve; Barajas Ojeda, Enrique; Rubio Sola, Jose Antonio

    2014-01-01

This work presents a test and measurement technique to monitor the aging and process-variation status of SRAM cells as an aging-aware design technique. We have then verified our technique with an implemented chip. The obtained aging information is utilized to guide our proactive strategies, and to track the impact of aging in new reconfiguration techniques for cache memory structures. Our proactive techniques improve the reliability, extend the SRAMs lifetime, and reduce the Vmin drift in presen...

  17. Stochastic techniques applied to the smoothing treatment of bias and compression of artificial satellite tracking and telemetry data

    Orlando, V.

    1983-10-01

Three procedures related to the preprocessing of artificial satellite tracking and telemetry data, developed with the aid of stochastic techniques, are presented. The first consists of a data-smoothing procedure by curve fitting, developed by applying the Kalman filter combined with an adaptive technique for state-noise evaluation. The second procedure, developed to allow automatic treatment of bias errors in dynamic-system observation data, makes Kalman filter state estimation possible by directly processing the bias-corrupted observations; for this, a dynamic compensation scheme is used. Finally, the third procedure, classified as a data-compression procedure, aims to obtain, under certain conditions, a processing-speed gain in Kalman filter applications to nonlinear dynamic-system state estimation. Validation tests of the procedures were performed by digital computer simulation using simulated data for a low-altitude artificial satellite orbit.
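The first procedure (Kalman-filter smoothing of noisy tracking data) can be sketched as follows. This is an illustrative constant-velocity model with a fixed state-noise matrix Q; the adaptive state-noise evaluation described in the abstract is omitted, and all numbers are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated range-tracking data: a linear trajectory plus measurement noise.
t = np.linspace(0.0, 10.0, 201)
truth = 7000.0 + 120.0 * t
meas = truth + rng.normal(0.0, 25.0, t.size)
dt = t[1] - t[0]

# Constant-velocity Kalman filter (state: position, velocity).
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]]) * 4.0  # state noise
R = np.array([[25.0 ** 2]])                                          # meas. noise

x = np.array([meas[0], 0.0])
P = np.eye(2) * 1e4
smoothed = []
for z in meas:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    smoothed.append(x[0])

smoothed = np.array(smoothed)
rmse_raw = np.sqrt(np.mean((meas - truth) ** 2))
rmse_kf = np.sqrt(np.mean((smoothed[20:] - truth[20:]) ** 2))  # skip transient
```

After the initial transient the filtered track should sit much closer to the truth than the raw measurements.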

  18. Modelling laser speckle photographs of decayed teeth by applying a digital image information technique

    Ansari, M. Z.; da Silva, L. C.; da Silva, J. V. P.; Deana, A. M.

    2016-09-01

We report on the application of a digital image model to assess early carious lesions on teeth. Lesions in the early stages of decay were illuminated with a laser and laser speckle images were obtained. Owing to the differences in optical properties between healthy and carious tissue, the two regions produce different scatter patterns. The digital image information technique allowed us to produce colour-coded 3D surface plots of the intensity information in the speckle images, where the height (on the z-axis) and the colour in the rendering correlate with the intensity of a pixel in the image. The quantitative changes in colour-component density enhance the contrast between decayed and sound tissue, making visualization of the carious lesions significantly more evident. The proposed technique may therefore be adopted in the early diagnosis of carious lesions.
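The idea that healthy and carious tissue produce statistically different speckle patterns can be sketched numerically (a hedged toy, not the authors' pipeline): a synthetic speckle image is tiled, and the local speckle contrast (std/mean) separates the two regions. The intensity map computed here is what would be rendered as the colour-coded 3D surface plot.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic speckle image: "sound" tissue is modelled as fully developed
# speckle (exponential intensity, contrast near 1); the "carious" patch
# is modelled with much lower contrast. Real data would come from the
# laser speckle photographs of the teeth.
img = rng.exponential(1.0, (128, 128))
img[32:64, 32:64] = 1.0 + 0.1 * rng.normal(size=(32, 32))  # low-contrast lesion

def local_contrast(im, s=8):
    """Speckle contrast C = std/mean over s-by-s tiles."""
    h, w = im.shape[0] // s, im.shape[1] // s
    tiles = im[:h * s, :w * s].reshape(h, s, w, s).swapaxes(1, 2)
    return tiles.std(axis=(2, 3)) / tiles.mean(axis=(2, 3))

C = local_contrast(img)
lesion_C = C[4:8, 4:8].mean()     # tiles covering the modelled lesion
sound_C = C[10:14, 10:14].mean()  # tiles over "sound" speckle
print(lesion_C, sound_C)
```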

  19. Magnetic Resonance Techniques Applied to the Diagnosis and Treatment of Parkinson’s Disease

    de Celis Alonso, Benito; Hidalgo-Tobón, Silvia S.; Menéndez-González, Manuel; Salas-Pacheco, José; Arias-Carrión, Oscar

    2015-01-01

    Parkinson’s disease (PD) affects at least 10 million people worldwide. It is a neurodegenerative disease, which is currently diagnosed by neurological examination. No neuroimaging investigation or blood biomarker is available to aid diagnosis and prognosis. Most effort toward diagnosis using magnetic resonance (MR) has been focused on the use of structural/anatomical neuroimaging and diffusion tensor imaging (DTI). However, deep brain stimulation, a current strategy for treating PD, is guided by MR imaging (MRI). For clinical prognosis, diagnosis, and follow-up investigations, blood oxygen level-dependent MRI, DTI, spectroscopy, and transcranial magnetic stimulation have been used. These techniques represent the state of the art in the last 5 years. Here, we focus on MR techniques for the diagnosis and treatment of Parkinson’s disease. PMID:26191037

  20. Digital Image Correlation technique applied to mechanical characterisation of aluminium foam

    Palano F.

    2010-06-01

Full Text Available In this paper the possibility of employing the Digital Image Correlation (DIC) technique for analysing the mechanical behaviour of metallic foam was investigated. Image correlation allowed the measurement of displacement and strain fields on closed- and open-cell aluminium foam specimens under two different loading conditions (compression and shear) and, with appropriate calculations, provided information on the mechanical behaviour of the foams. The adopted technique is suitable for in-depth analysis and also reveals the local heterogeneities that appear in a specimen during testing. The parameters obtained with the DIC analysis are confirmed by the global data obtained from the testing machine, proving that the adopted methodology represents a valid tool for the study of these new materials.

  1. Traffic Visualization:Applying Information Visualization Techniques to Enhance Traffic Planning

    Picozzi, Matteo; Verdezoto, Nervo; Pouke, Matti; Vatjus-Anttila, Jarkko; Quigley, Aaron

    2013-01-01

In this paper, we present a space-time visualization to provide a city's decision-makers with the ability to analyse and uncover important "city events" in an understandable manner for city-planning activities. An interactive Web mashup visualization is presented that integrates several visualization techniques to give a rapid overview of traffic data. We illustrate our approach as a case study for traffic visualization systems, using datasets from the city of Oulu that can be extended to other city...

  2. Discretization techniques applied to the study of multiphoton excitation of resonances in helium

Two-photon ionization of helium is investigated using a discretization technique. Perturbative cross sections are calculated with an L2 basis set (B-splines), and the efficiency of such an approach is discussed in the context of above-threshold ionization (ATI). We propose a parametrization of the cross sections in the region of resonances. The role of correlations is discussed in a P - Q Feshbach projection treatment. (author)

  3. Vibrational techniques applied to photosynthesis: Resonance Raman and fluorescence line-narrowing.

    Gall, Andrew; Pascal, Andrew A; Robert, Bruno

    2015-01-01

    Resonance Raman spectroscopy may yield precise information on the conformation of, and the interactions assumed by, the chromophores involved in the first steps of the photosynthetic process. Selectivity is achieved via resonance with the absorption transition of the chromophore of interest. Fluorescence line-narrowing spectroscopy is a complementary technique, in that it provides the same level of information (structure, conformation, interactions), but in this case for the emitting pigment(s) only (whether isolated or in an ensemble of interacting chromophores). The selectivity provided by these vibrational techniques allows for the analysis of pigment molecules not only when they are isolated in solvents, but also when embedded in soluble or membrane proteins and even, as shown recently, in vivo. They can be used, for instance, to relate the electronic properties of these pigment molecules to their structure and/or the physical properties of their environment. These techniques are even able to follow subtle changes in chromophore conformation associated with regulatory processes. After a short introduction to the physical principles that govern resonance Raman and fluorescence line-narrowing spectroscopies, the information content of the vibrational spectra of chlorophyll and carotenoid molecules is described in this article, together with the experiments which helped in determining which structural parameter(s) each vibrational band is sensitive to. A selection of applications is then presented, in order to illustrate how these techniques have been used in the field of photosynthesis, and what type of information has been obtained. This article is part of a Special Issue entitled: Vibrational spectroscopies and bioenergetic systems. PMID:25268562

  4. Geostatistical techniques applied to mapping limnological variables and quantify the uncertainty associated with estimates

    Cristiano Cigagna

    2015-12-01

Full Text Available Abstract Aim: This study aimed to map the concentrations of limnological variables in a reservoir, employing semivariogram geostatistical techniques and Kriging estimates for unsampled locations, and to quantify the uncertainty associated with the estimates. Methods: We established twenty-seven points distributed in a regular mesh for sampling, and determined the concentrations of chlorophyll-a, total nitrogen and total phosphorus. Subsequently, a spatial variability analysis was performed, the semivariogram function was modeled for all variables, and the variographic mathematical models were established. The main geostatistical estimation technique was ordinary Kriging. The work proceeded by estimating a dense grid of points for each variable, which formed the basis of the interpolated maps. Results: Through the semivariogram analysis it was possible to identify the random component as not significant for the estimation of chlorophyll-a, and as significant for total nitrogen and total phosphorus. Geostatistical maps were produced from the Kriging estimates for each variable, and the respective standard deviations of the estimates were calculated. These measurements allowed us to map the concentrations of limnological variables throughout the reservoir; the standard deviations indicate the quality of the estimates and, consequently, the reliability of the final product. Conclusions: The use of the Kriging technique to estimate a dense mesh of points, together with the error dispersion (standard deviation of the estimate), made it possible to produce quality, reliable maps of the estimated variables. Concentrations of limnological variables were in general higher in the lacustrine zone and decreased towards the riverine zone. Chlorophyll-a and total nitrogen were correlated when comparing the grids generated by Kriging. Although the use of Kriging is more laborious compared to other interpolation methods, this
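The core of the method above, ordinary Kriging with an estimation variance for each interpolated point, can be sketched in a few lines of NumPy. This is a hedged toy: the exponential semivariogram and all sample values are assumed for illustration, whereas the study fitted variogram models to its own data.

```python
import numpy as np

def ordinary_kriging(xy, values, targets, sill=1.0, a=500.0):
    """Ordinary Kriging with an assumed exponential semivariogram.

    gamma(h) = sill * (1 - exp(-3h/a)), where 'a' is the range. The
    kriging (estimation) variance returned with each estimate is the
    uncertainty measure discussed in the abstract.
    """
    def gamma(h):
        return sill * (1.0 - np.exp(-3.0 * h / a))

    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Kriging system with a Lagrange multiplier for unbiasedness.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    est, var = [], []
    for p in targets:
        b = np.append(gamma(np.linalg.norm(xy - p, axis=1)), 1.0)
        w = np.linalg.solve(A, b)
        est.append(w[:n] @ values)
        var.append(w @ b)          # kriging variance
    return np.array(est), np.array(var)

# Toy chlorophyll-a samples at five monitoring points (illustrative).
pts = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
chla = np.array([3.1, 2.8, 4.0, 3.5, 3.3])
est, var = ordinary_kriging(pts, chla, targets=np.array([[50.0, 50.0],
                                                         [25.0, 25.0]]))
```

Kriging is an exact interpolator: at a sampled location the estimate reproduces the datum and the kriging variance vanishes, while unsampled locations get a positive variance.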

  5. Applying Data Mining Technique For The Optimal Usage Of Neonatal Incubator

    Hagar Fady

    2012-07-01

Full Text Available This research aims to provide an intelligent tool to predict the incubator length of stay (LOS) of infants, which shall increase the utilization and management of infant incubators. The data sets of the Egyptian Neonatal Network (EGNN) were employed, and the Oracle Data Miner (ODM) tool was used for the analysis and prediction of the data. The obtained results indicated that the data mining technique is an appropriate and sufficiently sensitive method for predicting the required LOS of premature and ill infants.

  6. The radiation techniques of tomotherapy & intensity-modulated radiation therapy applied to lung cancer

    Zhu, Zhengfei; Fu, Xiaolong

    2015-01-01

    Radiotherapy (RT) plays an important role in the management of lung cancer. Development of radiation techniques is a possible way to improve the effect of RT by reducing toxicities through better sparing the surrounding normal tissues. This article will review the application of two forms of intensity-modulated radiation therapy (IMRT), fixed-field IMRT and helical tomotherapy (HT) in lung cancer, including dosimetric and clinical studies. The advantages and potential disadvantages of these t...

  7. Automated Boundary-Extraction and Region-Growing Techniques Applied to Solar Magnetograms

    McAteer, R. T. James; Gallagher, Peter; Ireland, Jack; Young, C Alex

    2005-01-01

We present an automated approach to active region extraction from full disc MDI longitudinal magnetograms. This uses a region-growing technique in conjunction with boundary-extraction to define a number of enclosed contours as belonging to separate regions of magnetic significance on the solar disc. This provides an objective definition of active regions and areas of plage on the Sun. A number of parameters relating to the flare-potential of each region are discussed.

  8. An overview of seismic strengthening techniques traditionally applied in vernacular architecture

    Ortega, Javier; Vasconcelos, Graça; Pereira, Mariana

    2014-01-01

    Specific architectural elements can be identified in vernacular constructions located in regions frequently exposed to earthquakes. Such elements are the result of a seismic culture which has been empirically developed during centuries so that the vulnerability of the buildings is reduced. This research is based on the fact that vernacular architecture may bear important lessons on hazard mitigation. Traditional earthquake resistant technologies can be successfully applied to preserve and ret...

  9. Towards Applying Text Mining Techniques on Software Quality Standards and Models

    Kelemen, Zádor Dániel; Kusters, Rob; Trienekens, Jos; Balla, Katalin

    2013-01-01

Many quality approaches are described in hundreds of textual pages, and manual processing of this information consumes plenty of resources. In this report we present a text mining approach applied to CMMI, one well-known and widely used quality approach. The text mining analysis can provide a quick overview of the scope of a quality approach, and the result of the analysis could accelerate the understanding and selection of quality approaches.
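A minimal version of such a "quick overview" is plain term-frequency analysis: tokenize the standard's text, drop stopwords, and rank the remaining terms. The three-line excerpt below is an invented stand-in for the CMMI text, used only to keep the sketch self-contained.

```python
from collections import Counter
import re

# Stand-in excerpt; a real analysis would ingest the full CMMI document.
document = """
Process improvement requires that process areas are established.
Each process area defines specific goals and specific practices.
Generic goals and generic practices apply to every process area.
"""

STOPWORDS = {"that", "are", "and", "to", "each", "every", "the"}

tokens = re.findall(r"[a-z]+", document.lower())
freq = Counter(t for t in tokens if t not in STOPWORDS)

# The most frequent terms give a quick overview of the document's scope.
top_terms = [term for term, _ in freq.most_common(3)]
print(top_terms)
```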

  10. A photoacoustic technique applied to detection of ethylene emissions in edible coated passion fruit

    Photoacoustic spectroscopy was applied to study the physiological behavior of passion fruit when coated with edible films. The results have shown a reduction of the ethylene emission rate. Weight loss monitoring has not shown any significant differences between the coated and uncoated passion fruit. On the other hand, slower color changes of coated samples suggest a slowdown of the ripening process in coated passion fruit.

  11. Applying the sterile insect technique to the control of insect pests

The sterile insect technique involves the mass-rearing of insects, which are sterilized by gamma rays from a 60Co source before being released in a controlled fashion into nature. Matings between the sterile insects released and native insects produce no progeny, and so if enough of these matings occur the pest population can be controlled or even eradicated. A modification of the technique, especially suitable for the suppression of moths and butterflies, is called the F1, or inherited sterility, method. In this, lower radiation doses are used such that the released males are only partially sterile (30-60%) and the females are fully sterile. When released males mate with native females some progeny are produced, but they are completely sterile. Thus, full expression of the sterility is delayed by one generation. This article describes the use of the sterile insect technique in controlling the screwworm fly, the tsetse fly, the medfly, the pink bollworm and the melon fly, and of the F1 sterility method in the eradication of local gypsy moth infestations. 18 refs, 5 figs, 1 tab
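The population logic behind the technique is captured by Knipling's classic ratio model: if sterile males greatly outnumber natives, the fertile fraction of matings shrinks each generation faster than the population can grow. The sketch below uses illustrative numbers (a 9:1 sterile-to-native release ratio and a five-fold growth rate), not figures from the article.

```python
def next_generation(native, sterile_released, growth_rate=5.0):
    """Knipling-style model: only native-by-native matings are fertile.

    Each generation, the fraction of fertile matings equals the native
    share of the total male population; the rest produce no progeny.
    """
    fertile_fraction = native / (native + sterile_released)
    return native * growth_rate * fertile_fraction

pop = 1_000_000.0
for gen in range(5):
    pop = next_generation(pop, sterile_released=9_000_000.0)
    print(gen + 1, round(pop, 2))
```

With a constant release number, the ratio of sterile to native males improves every generation, so the decline accelerates toward eradication.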

  12. Applying Data-mining techniques to study drought periods in Spain

    Belda, F.; Penades, M. C.

    2010-09-01

Data-mining is a technique that can be used to interact with large databases and to help discover relations between parameters by extracting information from massive and multiple data archives. Drought affects many economic and social sectors, from agriculture to transportation, urban water deficits and the development of modern industries. Given these problems and the geographical and temporal distribution of drought, it is difficult to find a single definition of drought. Improving knowledge of climatic indices is necessary to reduce the impacts of drought and to facilitate quick decisions regarding this problem. The main objective is to analyze drought periods from 1950 to 2009 in Spain. We use several kinds of information of different formats, sources and transmission modes: satellite-based vegetation indices, dryness indices for several temporal periods, daily and monthly precipitation and temperature data, and soil-moisture data from a numerical weather model. We mainly calculate the Standardized Precipitation Index (SPI), which has been used widely in the literature. We use OLAP-mining techniques for the discovery of association rules between remote-sensing, numerical-weather-model and climatic indices. Time-series data-mining techniques organize the data as a sequence of events, with each event having a time of recurrence, to cluster the data into groups of records with similar characteristics. A prior climatological classification is necessary if we want to study drought periods over all of Spain.
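The SPI computation mentioned above can be sketched in simplified form. The operational SPI fits a gamma distribution to aggregated precipitation and maps its CDF through the standard normal quantile; the plain z-score used here is an assumed stand-in that keeps the same interpretation (negative values mean drier than normal), and the 60 years of monthly data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# 60 years of synthetic monthly precipitation (mm); a real analysis
# would use station records such as the 1950-2009 Spanish series.
monthly = rng.gamma(shape=2.0, scale=25.0, size=60 * 12)

def standardized_index(precip, window=3):
    """Z-score of the moving 'window'-month precipitation total.

    Simplified SPI-like index: the true SPI replaces the z-score with
    a gamma-fit CDF pushed through the standard normal quantile.
    """
    totals = np.convolve(precip, np.ones(window), mode="valid")
    return (totals - totals.mean()) / totals.std()

spi3 = standardized_index(monthly, window=3)
drought_months = np.sum(spi3 < -1.0)   # moderate-or-worse drought threshold
print(drought_months)
```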

  13. An efficient permeability scaling-up technique applied to the discretized flow equations

    Urgelli, D.; Ding, Yu [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

    Grid-block permeability scaling-up for numerical reservoir simulations has been discussed for a long time in the literature. It is now recognized that a full permeability tensor is needed to get an accurate reservoir description at large scale. However, two major difficulties are encountered: (1) grid-block permeability cannot be properly defined because it depends on boundary conditions; (2) discretization of flow equations with a full permeability tensor is not straightforward and little work has been done on this subject. In this paper, we propose a new method, which allows us to get around both difficulties. As the two major problems are closely related, a global approach will preserve the accuracy. So, in the proposed method, the permeability up-scaling technique is integrated in the discretized numerical scheme for flow simulation. The permeability is scaled-up via the transmissibility term, in accordance with the fluid flow calculation in the numerical scheme. A finite-volume scheme is particularly studied, and the transmissibility scaling-up technique for this scheme is presented. Some numerical examples are tested for flow simulation. This new method is compared with some published numerical schemes for full permeability tensor discretization where the full permeability tensor is scaled-up through various techniques. Comparing the results with fine grid simulations shows that the new method is more accurate and more efficient.
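The simplest instance of scaling permeability "via the transmissibility term" is the classic two-point flux between neighbouring blocks, where the half-block permeabilities combine harmonically. This scalar sketch is only a baseline; the paper's contribution is the full-tensor generalization inside the finite-volume scheme.

```python
def transmissibility(k1, k2, area, dx):
    """Two-point flux transmissibility between neighbouring grid blocks.

    Harmonic averaging of the two block permeabilities is exact for 1D
    flow in series, which is why upscaling through the transmissibility
    term preserves fluxes in the discretized scheme.
    """
    return area / dx * 2.0 * k1 * k2 / (k1 + k2)

# A low-permeability block dominates the series transmissibility:
t = transmissibility(k1=500.0, k2=5.0, area=1.0, dx=10.0)
print(t)  # about 0.99, far below the arithmetic-average value
```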

  14. Quantification of material slippage in the iliotibial tract when applying the partial plastination clamping technique.

    Sichting, Freddy; Steinke, Hanno; Wagner, Martin F-X; Fritsch, Sebastian; Hädrich, Carsten; Hammer, Niels

    2015-09-01

    The objective of this study was to evaluate the potential of the partial plastination technique in minimizing material slippage and to discuss the effects on the tensile properties of thin dense connective tissue. The ends of twelve iliotibial tract samples were primed with polyurethane resin and covered by plastic plates to provide sufficient grip between the clamps. The central part of the samples remained in an anatomically unfixed condition. Strain data of twelve partially plastinated samples and ten samples in a completely anatomically unfixed state were obtained using uniaxial crosshead displacement and an optical image tracking technique. Testing of agreement between the strain data revealed ongoing but markedly reduced material slippage in partially plastinated samples compared to the unfixed samples. The mean measurement error introduced by material slippage was up to 18.0% in partially plastinated samples. These findings might complement existing data on measurement errors during material testing and highlight the importance of individual quantitative evaluation of errors that come along with self-made clamping techniques. PMID:26005842

  15. A Reinforcement Plate for Partially Thinned Pressure Vessel Designed to Measure the Thickness of Vessel Wall Applying Ultrasonic Technique

It is very hard to preserve the wall thickness of a vessel against erosion or corrosion as time goes by. Therefore, the wall thicknesses of heaters in power plants are periodically measured using ultrasonic testing. If the integrity of the wall thickness is estimated not to be secure, a reinforcement plate is welded onto the thinned area of the vessel; the overlay weld of the reinforcement plate on the thinned vessel is normally a fillet weld. As shown by the references, a reinforcement plate of adequate thickness performs its role very well before the vessel wall is perforated due to thinning. However, the integrity of the shell cannot be ensured after the vessel wall is perforated, because the shell-side pressure then acts directly on the weldment. Therefore, it is necessary to measure the thickness of the thinned area under the reinforcement plate continuously, both to preserve integrity and to plan the fabrication of a replacement vessel, yet it is impossible to apply the ultrasonic thickness-measurement technique after a conventional reinforcement plate is welded onto the shell. In this paper a new reinforcement plate, which makes it possible to measure the wall thickness of pressure vessels under the reinforcement plate using the ultrasonic technique, is introduced, together with a method to evaluate the structural integrity of the fillet weldment for a reinforcement plate welded onto a pressure vessel.
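The underlying pulse-echo measurement converts the round-trip time of an ultrasonic pulse into wall thickness via d = v*t/2. The sound velocity below is a typical value for carbon steel, assumed for illustration only.

```python
def wall_thickness_mm(velocity_m_s, round_trip_us):
    """Pulse-echo thickness: d = v * t / 2 (t is the round-trip time)."""
    return velocity_m_s * (round_trip_us * 1e-6) / 2.0 * 1000.0

# Carbon steel, v ~ 5900 m/s: a 3.39 us round trip indicates a ~10 mm wall.
d = wall_thickness_mm(5900.0, 3.39)
print(d)
```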

  16. Background Noise Reduction in Wind Tunnels using Adaptive Noise Cancellation and Cepstral Echo Removal Techniques for Microphone Array Applications

    Spalt, Taylor B

    2010-01-01

    Two experiments were conducted to investigate Adaptive Noise Cancelling and Cepstrum echo removal post-processing techniques on acoustic data from a linear microphone array in an anechoic chamber. A point source speaker driven with white noise was used as the primary signal. The first experiment included a background speaker to provide interference noise at three different Signal-to-Noise Ratios to simulate noise propagating down a wind tunnel circuit. The second experiment contained only the...
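The cepstral half of the post-processing can be sketched with NumPy (a hedged toy, not the experiment's pipeline): an echo at lag d imposes a periodic ripple on the log spectrum, which the real cepstrum collapses into a sharp peak at quefrency d. Locating that peak is the first step of cepstral echo removal. All signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# White-noise source plus a single attenuated echo, mimicking a
# reflection in a hard-walled wind-tunnel test section.
n, delay, alpha = 4096, 60, 0.5
source = rng.normal(size=n)
signal = source.copy()
signal[delay:] += alpha * source[:-delay]

# Real cepstrum: inverse FFT of the log magnitude spectrum.
spectrum = np.abs(np.fft.rfft(signal))
cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))
peak = np.argmax(cepstrum[10 : n // 2]) + 10   # skip low-quefrency energy

print(peak)  # a quefrency near the echo delay of 60 samples
```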

  17. A Comparative Study between Moravec and Harris Corner Detection of Noisy Images Using Adaptive Wavelet Thresholding Technique

    Dey, Nilanjan; Nandi, Pradipti; Barman, Nilanjana; Das, Debolina; Chakraborty, Subhabrata

    2012-01-01

In this paper a comparative study between Moravec and Harris corner detection has been done for obtaining the features required to track and recognize objects within a noisy image. Corner detection in noisy images is a challenging task in image processing. Natural images often get corrupted by noise during acquisition and transmission; as corner detection on these noisy images does not provide the desired results, de-noising is required. An adaptive wavelet thresholding approach is applied for the...
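The Harris side of the comparison can be sketched in pure NumPy. This is an assumed minimal implementation: a box filter stands in for the usual Gaussian window, the wavelet de-noising step is omitted, and the test image is a synthetic square whose four corners should dominate the response.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box_blur(a, r=2):
        """Simple box window standing in for the usual Gaussian."""
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out / (2 * r + 1) ** 2

    Sxx, Syy, Sxy = box_blur(Ixx), box_blur(Iyy), box_blur(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A bright square on a dark background with a little additive noise.
rng = np.random.default_rng(5)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img += rng.normal(0.0, 0.02, img.shape)

R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)
print(y, x)  # near one of the corners (16,16), (16,47), (47,16), (47,47)
```

Edges score negatively (one large eigenvalue of M), flat regions score near zero, and only the corners, where both eigenvalues are large, give a strong positive response.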

  18. Assessment of ground-based monitoring techniques applied to landslide investigations

    Uhlemann, S.; Smith, A.; Chambers, J.; Dixon, N.; Dijkstra, T.; Haslam, E.; Meldrum, P.; Merritt, A.; Gunn, D.; Mackay, J.

    2016-01-01

A landslide complex in the Whitby Mudstone Formation at Hollin Hill, North Yorkshire, UK is periodically re-activated in response to rainfall-induced pore-water pressure fluctuations. This paper compares long-term measurements (i.e., 2009-2014) obtained from a combination of monitoring techniques that have been employed together for the first time on an active landslide. The results highlight the relative performance of the different techniques, and can provide guidance for researchers and practitioners for selecting and installing appropriate monitoring techniques to assess unstable slopes. Particular attention is given to the spatial and temporal resolutions offered by the different approaches that include: Real Time Kinematic-GPS (RTK-GPS) monitoring of a ground surface marker array, conventional inclinometers, Shape Acceleration Arrays (SAA), tilt meters, active waveguides with Acoustic Emission (AE) monitoring, and piezometers. High spatial resolution information has allowed locating areas of stability and instability across a large slope. This has enabled identification of areas where further monitoring efforts should be focused. High temporal resolution information allowed the capture of 'S'-shaped slope displacement-time behaviour (i.e. phases of slope acceleration, deceleration and stability) in response to elevations in pore-water pressures. This study shows that a well-balanced suite of monitoring techniques that provides high temporal and spatial resolutions on both measurement and slope scale is necessary to fully understand failure and movement mechanisms of slopes. In the case of the Hollin Hill landslide it enabled detailed interpretation of the geomorphological processes governing landslide activity. It highlights the benefit of regularly surveying a network of GPS markers to determine areas for installation of movement monitoring techniques that offer higher resolution both temporally and spatially. The small sensitivity of tilt meter measurements

  20. Radiation treatment for the right naris in a pediatric anesthesia patient using an adaptive oral airway technique

    Sponseller, Patricia, E-mail: sponselp@uw.edu; Pelly, Nicole; Trister, Andrew; Ford, Eric; Ermoian, Ralph

    2015-10-01

    Radiation therapy for pediatric patients often includes the use of intravenous anesthesia with supplemental oxygen delivered via the nasal cannula. Here, we describe the use of an adaptive anesthesia technique for electron irradiation of the right naris in a preschool-aged patient treated under anesthesia. The need for an intranasal bolus plug precluded the use of standard oxygen supplementation. This novel technique required the multidisciplinary expertise of anesthesiologists, radiation therapists, medical dosimetrists, medical physicists, and radiation oncologists to ensure a safe and reproducible treatment course.

  1. A Rapid Model Adaptation Technique for Emotional Speech Recognition with Style Estimation Based on Multiple-Regression HMM

    Ijima, Yusuke; Nose, Takashi; Tachibana, Makoto; Kobayashi, Takao

    In this paper, we propose a rapid model adaptation technique for emotional speech recognition which enables us to extract paralinguistic information as well as linguistic information contained in speech signals. This technique is based on style estimation and style adaptation using a multiple-regression HMM (MRHMM). In the MRHMM, the mean parameters of the output probability density function are controlled by a low-dimensional parameter vector, called a style vector, which corresponds to a set of the explanatory variables of the multiple regression. The recognition process consists of two stages. In the first stage, the style vector that represents the emotional expression category and the intensity of its expressiveness for the input speech is estimated on a sentence-by-sentence basis. Next, the acoustic models are adapted using the estimated style vector, and then standard HMM-based speech recognition is performed in the second stage. We assess the performance of the proposed technique in the recognition of simulated emotional speech uttered by both professional narrators and non-professional speakers.
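The two-stage idea, estimate a style vector, then rebuild the model means from it, can be illustrated with a hedged linear-regression toy. Random matrices stand in for trained MRHMM regression parameters, and plain least squares stands in for the likelihood-based style estimation used in the actual recognizer.

```python
import numpy as np

rng = np.random.default_rng(6)

# Each model mean is a linear (multiple-regression) function of a
# 2-dimensional style vector s: mu_i = H_i @ [s, 1]. The H entries are
# random stand-ins for trained regression matrices.
n_states, dim, style_dim = 10, 3, 2
H = rng.normal(size=(n_states, dim, style_dim + 1))

def means_for_style(style):
    ext = np.append(style, 1.0)            # [s1, s2, 1]
    return H @ ext                         # (n_states, dim) mean vectors

# Stage 1 (style estimation): given per-state observation averages for
# an input sentence, recover the style vector by least squares.
true_style = np.array([0.8, -0.3])         # e.g. emotion category + intensity
obs = means_for_style(true_style) + rng.normal(0.0, 0.01, (n_states, dim))

A = H[:, :, :style_dim].reshape(-1, style_dim)   # stack all states
b = (obs - H[:, :, style_dim]).reshape(-1)       # subtract bias terms
est_style, *_ = np.linalg.lstsq(A, b, rcond=None)

# Stage 2 (adaptation): rebuild the acoustic-model means from the
# estimated style vector before running standard recognition.
adapted_means = means_for_style(est_style)
```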

  2. Periodic Noise Suppression from ECG Signal using Novel Adaptive Filtering Techniques

    Yogesh Sharma

    2012-03-01

Full Text Available The electrocardiogram (ECG) is the most commonly recognized and widely used biomedical signal for medical examination of the heart. The ECG signal is very sensitive in nature: even if a small amount of noise is mixed with the original signal, the characteristics of the signal change. Data corrupted with noise must be either filtered or discarded, so filtering is an important design consideration for real-time heart-monitoring systems. Various filters are used for removing noise from ECG signals; the most commonly used are notch filters, FIR filters, IIR filters, Wiener filters and adaptive filters. Performance analysis shows that the best results are obtained by using an adaptive filter to remove the various noises from the ECG signal, with significant SNR and MSE improvements. In this paper a novel adaptive approach using the LMS algorithm and a delay is shown, which can be used for pre-processing of the ECG signal and gives appreciable results.
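The LMS-plus-delay structure described above is essentially an adaptive line enhancer: the reference input is the signal itself, delayed far enough that only the periodic hum stays correlated, so the filter predicts and cancels just the periodic interference. The sketch below is an assumed minimal version with synthetic data (a crude spike train standing in for the ECG, plus 50 Hz mains hum); it is not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(7)

# ECG stand-in: a repeating spike train corrupted by 50 Hz mains hum.
fs = 500.0
t = np.arange(4000) / fs
ecg = (np.mod(t, 0.8) < 0.02).astype(float)        # crude QRS spikes
hum = 0.5 * np.sin(2 * np.pi * 50.0 * t)
noisy = ecg + hum

# Adaptive line enhancer: delay chosen so the broadband ECG decorrelates
# while the periodic hum (period 10 samples) remains predictable.
delay, taps, mu = 25, 32, 0.01
w = np.zeros(taps)
cleaned = np.zeros_like(noisy)
for i in range(delay + taps, len(noisy)):
    x = noisy[i - delay - taps + 1 : i - delay + 1][::-1]  # delayed taps
    y = w @ x                    # predicted periodic noise
    e = noisy[i] - y             # error = enhanced (de-noised) signal
    w += 2 * mu * e * x          # LMS weight update
    cleaned[i] = e

residual_before = np.mean((noisy[3000:] - ecg[3000:]) ** 2)
residual_after = np.mean((cleaned[3000:] - ecg[3000:]) ** 2)
print(residual_before, residual_after)
```

After the filter converges, the residual mains power in the output is a fraction of its original level while the aperiodic QRS spikes pass through.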

  4. Enhanced nonlinear iterative techniques applied to a non-equilibrium plasma flow

    Knoll, D.A.; McHugh, P.R. [Idaho National Engineering Lab., Idaho Falls, ID (United States)

    1996-12-31

    We study the application of enhanced nonlinear iterative methods to the steady-state solution of a system of two-dimensional convection-diffusion-reaction partial differential equations that describe the partially-ionized plasma flow in the boundary layer of a tokamak fusion reactor. This system of equations is characterized by multiple time and spatial scales, and contains highly anisotropic transport coefficients due to a strong imposed magnetic field. We use Newton's method to linearize the nonlinear system of equations resulting from an implicit, finite volume discretization of the governing partial differential equations, on a staggered Cartesian mesh. The resulting linear systems are neither symmetric nor positive definite, and are poorly conditioned. Preconditioned Krylov iterative techniques are employed to solve these linear systems. We investigate both a modified and a matrix-free Newton-Krylov implementation, with the goal of reducing CPU cost associated with the numerical formation of the Jacobian. A combination of a damped iteration, one-way multigrid and a pseudo-transient continuation technique are used to enhance global nonlinear convergence and CPU efficiency. GMRES is employed as the Krylov method with Incomplete Lower-Upper (ILU) factorization preconditioning. The goal is to construct a combination of nonlinear and linear iterative techniques for this complex physical problem that optimizes trade-offs between robustness, CPU time, memory requirements, and code complexity. It is shown that a one-way multigrid implementation provides significant CPU savings for fine grid calculations. Performance comparisons of the modified Newton-Krylov and matrix-free Newton-Krylov algorithms will be presented.
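The matrix-free idea is that GMRES only needs Jacobian-vector products, which can be approximated by a finite difference of the residual, so the Jacobian is never formed. The sketch below applies this to a toy 2-D nonlinear system (not the plasma equations); the problem, tolerances, and step sizes are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy residual: solve x^2 + y^2 = 4 with x = y (root at x = y = sqrt(2)).
def residual(u):
    x, y = u
    return np.array([x**2 + y**2 - 4.0, x - y])

def jfnk_solve(u0, tol=1e-10, eps=1e-7, max_newton=20):
    """Jacobian-free Newton-Krylov: finite-difference J·v inside GMRES."""
    u = u0.astype(float)
    for _ in range(max_newton):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        # Finite-difference Jacobian-vector product: J v ~ (F(u + eps v) - F(u)) / eps
        def jv(v):
            return (residual(u + eps * v) - F) / eps
        J = LinearOperator((2, 2), matvec=jv)
        du, info = gmres(J, -F)          # inexact inner solve is fine for Newton
        u = u + du                       # (a damped update would scale du here)
    return u

u = jfnk_solve(np.array([1.0, 2.0]))
print(u)
```

The inner GMRES solve uses its default relative tolerance; inexact Newton methods tolerate this, which is exactly the trade-off between robustness and CPU time the abstract discusses.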

  5. Applying Squeezing Technique to Clayrocks: Lessons Learned from Experiments at Mont Terri Rock Laboratory

    Fernandez, A. M.; Sanchez-Ledesma, D. M.; Tournassat, C.; Melon, A.; Gaucher, E.; Astudillo, E.; Vinsot, A.

    2013-07-01

    Knowledge of the pore water chemistry in clay rock formations plays an important role in determining radionuclide migration in the context of nuclear waste disposal. Among the different in situ and ex-situ techniques for pore water sampling in clay sediments and soils, squeezing technique dates back 115 years. Although different studies have been performed about the reliability and representativeness of squeezed pore waters, more of them were achieved on high porosity, high water content and unconsolidated clay sediments. A very few of them tackled the analysis of squeezed pore water from low-porosity, low water content and highly consolidated clay rocks. In this work, a specially designed and fabricated one-dimensional compression cell two directional fluid flow was used to extract and analyse the pore water composition of Opalinus Clay core samples from Mont Terri (Switzerland). The reproducibility of the technique is good and no ionic ultrafiltration, chemical fractionation or anion exclusion was found in the range of pressures analysed: 70-200 MPa. Pore waters extracted in this range of pressures do not decrease in concentration, which would indicate a dilution of water by mixing of the free pore water and the outer layers of double layer water (Donnan water). A threshold (safety) squeezing pressure of 175 MPa was established for avoiding membrane effects (ion filtering, anion exclusion, etc.) from clay particles induced by increasing pressures. Besides, the pore waters extracted at these pressures are representative of the Opalinus Clay formation from a direct comparison against in situ collected borehole waters. (Author)

  6. The Pulsed Neutron Technique Applied to Fast Non-Multiplying Assemblies

    A nanosecond pulsed Van de Graaff accelerator has been used to study the behaviour of fast neutrons in non-multiplying metal assemblies. A pulsed neutron source technique has been utilized to measure fast non-elastic cross-sections for iron. The method employed is similar to that used to measure absorption cross-sections in thermal assemblies, with the exception that the fast decay times are of the order of nanoseconds rather than microseconds. Nanosecond bursts of monoenergetic neutrons are injected into iron assemblies of various sizes. The neutron flux in these assemblies is observed to decay exponentially with a characteristic decay constant. The decay constant is composed of a sum of terms which represent neutron loss due to leakage and energy degradation. Energy degradation represents a neutron loss since a biased neutron detector is used. The removal term due to elastic and non-elastic scattering can be determined by measuring the decay constant as a function of assembly size. A theoretical development is presented for calculating the fraction that elastic scattering contributes to the removal term; hence the non-elastic cross-section can be determined. The theoretical treatment for calculating the elastic contribution has been verified experimentally. The non-elastic cross-section for iron has been measured by this technique for primary neutron energies between 0.8 and 1.5 MeV. The pulsed source technique described above has been used to measure decay constants for lead slabs. The experiment approximates the assumptions which are generally made when solving the time-dependent Boltzmann transport equation (i.e. one dimension, one velocity). Decay constants have been measured for 28 in x 32 in lead slabs of 2, 4, 6 and 8-in thickness. The results, after being corrected for energy degradation and finite assembly size, are compared with the approximate solutions of the Boltzmann transport equation. (author)
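The two-step analysis the abstract describes (fit a decay constant per assembly, then separate removal from leakage by varying assembly size) can be sketched numerically. All numbers below are synthetic and purely illustrative, not the measured iron data.

```python
import numpy as np

# Step 1: a die-away curve N(t) = N0 exp(-lam t) gives one decay constant
# per assembly, via a log-linear least-squares fit.
t = np.arange(0.0, 100.0, 2.0)                  # ns (synthetic time bins)
lam_true = 0.05                                 # 1/ns
counts = 1e6 * np.exp(-lam_true * t)
slope, _ = np.polyfit(t, np.log(counts), 1)
lam_fit = -slope                                # recovers 0.05 / ns

# Step 2: plotting lam against geometric buckling B^2 for several assembly
# sizes separates the removal term (intercept) from leakage (slope).
v_sigma_removal = 0.02                          # 1/ns, intercept at B^2 = 0
v_D = 3.0                                       # leakage coefficient (made up)
b2 = np.array([0.002, 0.004, 0.006, 0.008])     # 1/cm^2, smaller assembly = larger B^2
lam_vs_size = v_sigma_removal + v_D * b2
leak_slope, removal = np.polyfit(b2, lam_vs_size, 1)
print(lam_fit, removal, leak_slope)
```

Subtracting the calculated elastic contribution from the extrapolated removal term then yields the non-elastic cross-section, as in the abstract.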

  7. Sensitivity analysis techniques applied to a system of hyperbolic conservation laws

    Sensitivity analysis is comprised of techniques to quantify the effects of the input variables on a set of outputs. In particular, sensitivity indices can be used to infer which input parameters most significantly affect the results of a computational model. With continually increasing computing power, sensitivity analysis has become an important technique by which to understand the behavior of large-scale computer simulations. Many sensitivity analysis methods rely on sampling from distributions of the inputs. Such sampling-based methods can be computationally expensive, requiring many evaluations of the simulation; in this case, the Sobol' method provides an easy and accurate way to compute variance-based measures, provided a sufficient number of model evaluations are available. As an alternative, meta-modeling approaches have been devised to approximate the response surface and estimate various measures of sensitivity. In this work, we consider a variety of sensitivity analysis methods, including different sampling strategies, different meta-models, and different ways of evaluating variance-based sensitivity indices. The problem we consider is the 1-D Riemann problem. By a careful choice of inputs, discontinuous solutions are obtained, leading to discontinuous response surfaces; such surfaces can be particularly problematic for meta-modeling approaches. The goal of this study is to compare the estimated sensitivity indices with exact values and to evaluate the convergence of these estimates with increasing sample sizes and under an increasing number of meta-model evaluations. - Highlights: ► Sensitivity analysis techniques for a model shock physics problem are compared. ► The model problem and the sensitivity analysis problem have exact solutions. ► Subtle details of the method for computing sensitivity indices can affect the results.
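A minimal sampling-based Sobol' first-order estimator (the Saltelli pick-freeze scheme) is easy to write down. The sketch below uses a toy additive model whose exact indices are known, so the Monte Carlo estimate can be checked against them; it is not the Riemann-problem setup of the paper.

```python
import numpy as np

# Sobol' first-order indices by the pick-freeze (Saltelli) scheme.
rng = np.random.default_rng(3)

def model(x):
    # Additive toy model: Var = 1/12 + 4/12, so S1 = 0.2 and S2 = 0.8 exactly.
    return x[:, 0] + 2.0 * x[:, 1]

n, d = 200_000, 2
A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # A with column i "frozen" from B
    S[i] = np.mean(fB * (model(ABi) - fA)) / var
print(S)   # approaches [0.2, 0.8] as n grows
```

This is the sampling-based route the abstract calls computationally expensive: the cost is (d + 2) model evaluations per sample point, which is what motivates the meta-modeling alternatives.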

  8. Evaluation of Bending Strength in Friction Welded Alumina/mild Steel Joints by Applying Factorial Technique

    Jesudoss Hynes, N. Rajesh; Nagaraj, P.; Vivek Prabhu, M.

    Joining of metals with ceramics has become significant in many applications, because such joints combine properties like ductility with high hardness and wear resistance. By the friction welding technique, alumina can be joined to mild steel with a 1 mm thick AA1100 sheet as an interlayer. In the present work, the effect of friction time on interlayer thickness reduction and bending strength is investigated by factorial design. Regression modeling is done using ANOVA, a statistical tool. The regression model predicts the bending strength of welded ceramic/metal joints accurately, with ± 2% deviation from the experimental values.
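A two-level factorial regression of the kind used in such studies fits a model with main effects and an interaction in coded units. The sketch below uses hypothetical response values (the paper's data are not reproduced); the factor names and numbers are assumptions for illustration.

```python
import numpy as np

# Two-level (2^2) factorial regression in coded units (-1 / +1).
# Factors (hypothetical): x1 = friction time, x2 = friction pressure.
X_coded = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)
y = np.array([180.0, 210.0, 190.0, 240.0])    # made-up bending strengths (MPa)

# Design matrix: intercept, main effects, and the x1*x2 interaction.
D = np.column_stack([
    np.ones(4),
    X_coded[:, 0],
    X_coded[:, 1],
    X_coded[:, 0] * X_coded[:, 1],
])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
pred = D @ coef
print(coef, pred)   # 4 runs, 4 coefficients: the model reproduces the runs exactly
```

With replicated runs, ANOVA then partitions the residual variance to decide which of these coefficients are statistically significant.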

  9. Full-field speckle correlation technique as applied to blood flow monitoring

    Vilensky, M. A.; Agafonov, D. N.; Timoshina, P. A.; Shipovskaya, O. V.; Zimnyakov, D. A.; Tuchin, V. V.; Novikov, P. A.

    2011-03-01

    The results of experimental study of monitoring the microcirculation in tissue superficial layers of the internal organs at gastro-duodenal hemorrhage with the use of laser speckles contrast analysis technique are presented. The microcirculation monitoring was provided in the course of the laparotomy of rat abdominal cavity in the real time. Microscopic hemodynamics was analyzed for small intestine and stomach under different conditions (normal state, provoked ischemia, administration of vasodilative agents such as papaverine, lidocaine). The prospects and problems of internal monitoring of micro-vascular flow in clinical conditions are discussed.
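Laser speckle contrast analysis reduces, at its core, to computing the local contrast K = sigma/mean over a sliding window: moving scatterers blur the speckle and lower K. The sketch below demonstrates this on synthetic images; the window size and intensity statistics are illustrative assumptions, not the experimental setup.

```python
import numpy as np

# Laser speckle contrast (LASCA) sketch: K = std/mean per window;
# lower K indicates faster flow (more motion blurring).
rng = np.random.default_rng(4)

def speckle_contrast(img, w=7):
    """Local contrast map over w x w sliding windows (plain loops for clarity)."""
    out = np.empty((img.shape[0] - w + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = img[i:i + w, j:j + w]
            out[i, j] = win.std() / win.mean()
    return out

# Fully developed static speckle has exponential intensity statistics (K near 1);
# a motion-blurred region has nearly uniform intensity (K near 0).
static = rng.exponential(1.0, size=(64, 64))
blurred = 1.0 + 0.05 * rng.normal(size=(64, 64))
K_static = speckle_contrast(static).mean()
K_flow = speckle_contrast(blurred).mean()
print(K_static, K_flow)
```

Mapping K across the frame therefore yields a relative perfusion image in real time, which is what the laparotomy monitoring relies on.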

  10. Short Note: Comparison of three surveying techniques applied to hydrogeological studies: level, barometer, and GPS

    Xyoli Pérez; Luis E. Marín; Rangel, E.

    1998-01-01

    The errors from three different surveying techniques (level, hand-held barometer, and hand-held GPS) were compared in terms of accuracy and cost effectiveness for hydrogeological studies by surveying a four-kilometer circuit within the campus of the Universidad Nacional Autonoma de Mexico. The polygon was surveyed once with the level and the barometer, and twice with GPS. Vertical errors were 17 mm, 13 meters and 114 meters for the level, barometer and GPS, respectively. Of the three options,...

  11. Nuclear analytical techniques applied to characterization of atmospheric aerosols in Amazon Region

    This work presents the characterization of atmospheric aerosols found in different regions of the Amazon basin. The biogenic aerosol emission by the forest, as well as the atmospheric emission of particulate material due to biomass burning, was analyzed. Samples of aerosol particles were collected during three years at two different locations in the Amazon region using Stacked Unit Filters. Several nuclear analytical techniques were used to study these samples. High concentrations of aerosols resulting from the biomass burning process were observed in the period June-September.

  12. Some new techniques in tritium gas handling as applied to metal hydride synthesis

    A state-of-the-art tritium Hydride Synthesis System (HSS) was designed and built to replace the existing system within the Tritium Salt Facility (TSF) at the Los Alamos National Laboratory. This new hydriding system utilized unique fast-cycling 5.63 mole uranium beds (50.9 g of T2 at 100% loading) and novel gas circulating hydriding furnaces. Tritium system components discussed include fast-cycling uranium beds, circulating gas hydriding furnaces, valves, storage volumes, manifolds, gas transfer pumps, and graphic display and control consoles. Many of the tritium handling and processing techniques incorporated into this system are directly applicable to today's fusion fuel loops

  13. Some new techniques in tritium gas handling as applied to metal hydride synthesis

    A state-of-the-art tritium Hydriding Synthesis System (HSS) was designed and built to replace the existing system within the Tritium Salt Facility (TSF) at the Los Alamos National Laboratory. This new hydriding system utilizes unique fast-cycling 7.9 mole uranium beds (47.5g of T at 100% loading) and novel gas circulating hydriding furnaces. Tritium system components discussed include fast-cycling uranium beds, circulating gas hydriding furnaces, valves, storage volumes, manifolds, gas transfer pumps, and graphic display and control consoles. Many of the tritium handling and processing techniques incorporated into this system are directly applicable to today's fusion fuel loops. 12 refs., 7 figs

  14. Applying Intelligent Computing Techniques to Modeling Biological Networks from Expression Data

    Wei-Po Lee; Kung-Cheng Yang

    2008-01-01

    Constructing biological networks is one of the most important issues in systems biology. However, constructing a network from data manually takes a considerably large amount of time; therefore, an automated procedure is advocated. To automate the procedure of network construction, in this work we use two intelligent computing techniques, genetic programming and neural computation, to infer two kinds of network models that use continuous variables. To verify the presented approaches, experiments have been conducted and the preliminary results show that both approaches can be used to infer networks successfully.

  15. Zero order and signal processing spectrophotometric techniques applied for resolving interference of metronidazole with ciprofloxacin in their pharmaceutical dosage form

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2016-02-01

    Four rapid, simple, accurate and precise spectrophotometric methods were used for the determination of ciprofloxacin in the presence of metronidazole as an interferent. The methods under study are area under the curve and simultaneous equations, in addition to smart signal processing techniques for manipulating ratio spectra, namely Savitzky-Golay filters and the continuous wavelet transform. All the methods were validated according to the ICH guidelines, where accuracy, precision and repeatability were found to be within the acceptable limits. The selectivity of the proposed methods was tested using laboratory prepared mixtures and assessed by applying the standard addition technique. They can therefore be used for the routine analysis of ciprofloxacin in quality-control laboratories.
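The ratio-spectra idea behind the Savitzky-Golay approach can be sketched directly: divide the mixture spectrum by the interferent's spectrum, so the interferent contributes only a constant, then take a smoothed derivative that removes that constant. The spectra below are synthetic Gaussians with made-up band positions, not measured ciprofloxacin/metronidazole spectra.

```python
import numpy as np
from scipy.signal import savgol_filter

# Ratio-spectra derivative sketch on synthetic bands.
wl = np.linspace(250, 380, 651)                       # wavelength grid (nm)
gauss = lambda c, w: np.exp(-((wl - c) / w) ** 2)
cipro = gauss(275, 15)                                # hypothetical analyte band
metro = gauss(320, 25)                                # hypothetical interferent band

mixture = 2.0 * cipro + 1.0 * metro
ratio = mixture / metro                               # interferent -> constant offset
deriv = savgol_filter(ratio, window_length=51, polyorder=3, deriv=1)

# A pure-interferent sample processed the same way gives a flat ratio,
# so its smoothed derivative is essentially zero.
pure_metro_deriv = savgol_filter((1.5 * metro) / metro,
                                 window_length=51, polyorder=3, deriv=1)
print(np.abs(deriv).max(), np.abs(pure_metro_deriv).max())
```

The surviving derivative amplitude is proportional to the analyte concentration, which is what makes the method quantitative after calibration.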

  16. Spectroscopic Techniques Applied to the Study of Italian Painted Neolithic Potteries

    In the field of cultural heritage, the study of the materials used by the artist is useful both for the knowledge of the artwork and for conservation and restoring interventions. In this communication, we present results of some decorations analysis obtained by the use of two complementary laser techniques: micro-LIBS and micro-Raman spectroscopy. With both techniques it is possible to operate in a practically nondestructive way on the artwork itself, without sampling or pretreatment. Micro-Raman spectroscopy gives information on the molecular structure of the pigments used, while micro-LIBS can give quantitative information about the elemental composition of the same materials. In this paper, qualitative results are reported obtained on the study of some Neolithic potteries coming from the archaeological site of Trasano (Matera); the fragments show decorations in different colors, red, black, and white. The aim of the study was detecting whether the colored decorations were made by using added pigments or came from the manufacturing process.

  17. A New Astrometric Technique Applied to the Likely Tidal Disruption Event, Swift J1644+57

    Alianora Hounsell, Rebekah; Fruchter, Andrew S.; Levan, Andrew J.

    2015-01-01

    We have developed a new technique to align Hubble Space Telescope (HST) data using background galaxies as astrometric markers. This technique involves the cross correlation of cutouts of regions about individual galaxies from different epochs, enabling the determination of an astrometric solution. The method avoids errors introduced by proper motion when the locations of stars are used to transform the images. We have used this approach to investigate the nature of the unusual gamma-ray source Sw J1644+57, which was initially classified as a long gamma-ray burst (LGRB). However, due to the object's atypical behavior in the X-ray and optical, along with its location within the host (150 ± 150 pc, see Levan et al. 2011), it has been suggested that the transient may be caused by a tidal disruption event (TDE). Additional theories that remain based on the collapsar model for a long burst have also been suggested for its origin, such as the collapse of a red giant, rather than a stripped star as is typical in LGRBs, or the creation of a magnetar. Precise astrometry of the transient with respect to the galaxy can potentially distinguish between these scenarios. Here we show that our method of alignment dramatically reduces the astrometric error of the position of the transient with respect to the nucleus of the host. We therefore discuss the implications of our result for the astrophysical nature of the object.
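The core operation, cross-correlating a cutout around the same galaxy in two epochs to recover their relative shift, can be sketched with an FFT. The images below are synthetic Gaussian "galaxies" with an integer-pixel offset; real pipelines refine the peak to sub-pixel precision, which is omitted here.

```python
import numpy as np

# FFT cross-correlation of two epoch cutouts to recover the frame shift.
def make_galaxy(shape, cx, cy, sigma=3.0):
    y, x = np.indices(shape)
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

epoch1 = make_galaxy((64, 64), 30, 28)
epoch2 = make_galaxy((64, 64), 33, 26)        # same galaxy, telescope shifted

def cross_correlate_shift(a, b):
    """Integer-pixel (dy, dx) of a relative to b via the correlation theorem."""
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame into negative values:
    return [p - s if p > s // 2 else p for p, s in zip(peak, a.shape)]

dy, dx = cross_correlate_shift(epoch1, epoch2)
print(dy, dx)   # epoch1's source sits at (+2, -3) pixels relative to epoch2's
```

Averaging such shifts over many background galaxies yields the astrometric solution, and because galaxies have no proper motion, the transformation is free of the stellar-motion errors the abstract describes.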

  18. Fragrance composition of Dendrophylax lindenii (Orchidaceae) using a novel technique applied in situ

    James J. Sadler

    2012-02-01

    Full Text Available The ghost orchid, Dendrophylax lindenii (Lindley) Bentham ex Rolfe (Orchidaceae), is one of North America’s rarest and best-known orchids. Native to Cuba and SW Florida, where it frequents shaded swamps as an epiphyte, the species has experienced steady decline. Little information exists on D. lindenii’s biology in situ, raising conservation concerns. During the summer of 2009, at an undisclosed population in Collier County, FL, a substantial number (ca. 13) of plants initiated anthesis, offering a unique opportunity to study this species in situ. We report a new technique aimed at capturing the floral headspace of D. lindenii in situ, and identified volatile compounds using gas chromatography mass spectrometry (GC/MS). All components of the floral scent were identified as terpenoids with the exception of methyl salicylate. The most abundant compound was the sesquiterpene (E,E)-α-farnesene (71%), followed by (E)-β-ocimene (9%) and methyl salicylate (8%). Other compounds were: linalool (5%), sabinene (4%), (E)-α-bergamotene (2%), α-pinene (1%), and 3-carene (1%). Interestingly, (E,E)-α-farnesene has previously been associated with pestiferous insects (e.g., Hemiptera). The other compounds are common floral scent constituents in other angiosperms, suggesting that our in situ technique was effective. Volatile capture was, therefore, possible without imposing physical harm (e.g., inflorescence detachment) to this rare orchid.

  19. Hyphenated GC-FTIR and GC-MS techniques applied in the analysis of bioactive compounds

    Gosav, Steluta; Paduraru, Nicoleta; Praisler, Mirela

    2014-08-01

    The drugs of abuse, which affect human nature and cause numerous crimes, have become a serious problem throughout the world. There are hundreds of amphetamine analogues on the black market. They consist of various alterations of the basic amphetamine molecular structure, which are not yet included in the lists of forbidden compounds although they retain or slightly modify the hallucinogenic effects of their parent compound. It is their great variety that makes their identification quite a challenge. A number of analytical procedures for the identification of amphetamines and their analogues have recently been reported. We present the profiles of the main hallucinogenic amphetamines obtained with the hyphenated techniques that are recommended for the identification of illicit amphetamines, i.e. gas chromatography combined with mass spectrometry (GC-MS) and gas chromatography coupled with Fourier transform infrared spectrometry (GC-FTIR). The infrared spectra of the analyzed hallucinogenic amphetamines present some absorption bands (1490 cm-1, 1440 cm-1, 1245 cm-1, 1050 cm-1 and 940 cm-1) that are very stable in position and shape, while their intensity depends on the side-chain substitution. The specific ionic fragment of the studied hallucinogenic compounds is the 3,4-methylenedioxybenzyl cation (m/e = 135), which has a small relative abundance (less than 20%). The complementarity of the above-mentioned techniques for the identification of hallucinogenic compounds is discussed.

  20. Gamma-radiography techniques applied to quality control of welds in water pipe lines

    Non-destructive testing of welds may be done by the gamma-radiography technique, in order to detect the presence or absence of discontinuities and defects in the bulk of the deposited metal and near the base metal. Gamma-radiography allows the documentation of the test with a complete inspection record, which is not common in other non-destructive testing methods. In the quality control of longitudinal or transversal welds in water pipe lines, two exposure techniques are used: double wall and panoramic exposure. Three different water pipe line systems have been analysed for weld defects, giving a total of 16,000 gamma-radiographies. The tests were made according to the criteria established by the ASME standard. The principal metallic discontinuities found in the welds were: porosity (32%), lack of penetration (29%), lack of fusion (20%), and slag inclusion (19%). The percentage of gamma-radiographies showing welds without defects was 39% (6168 gamma-radiographies). On the other hand, 53% (8502 gamma-radiographies) showed the presence of acceptable discontinuities and 8% (1330 gamma-radiographies) were rejected according to the ASME standards.

  1. Predicting Performance of Schools by Applying Data Mining Techniques on Public Examination Results

    J. Macklin Abraham Navamani

    2015-02-01

    Full Text Available This study presents a systematic analysis of various features of the higher grade school public examination results data in the state of Tamil Nadu, India, using different data mining classification algorithms to predict the performance of schools. Nowadays, parents aim to select the right city and school, and the factors which contribute to the success of school results for their children. Factors such as ethnic mix, medium of study and geography could make a difference in results. The proposed work focuses on two objectives: applying machine learning algorithms to predict school performance with satisfying accuracy, and evaluating which data mining technique gives the better accuracy among the learning algorithms. It was found that there exist some apparent and some less noticeable attributes that demonstrate a strong correlation with student performance. Data were collected from a credible source, followed by data preparation and correlation analysis. The findings revealed that the public examination results data are a very helpful predictor of school performance, and the overall accuracy was improved with the help of the AdaBoost technique.
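AdaBoost, which the study credits for its accuracy gain, reweights training points after each round so that later weak learners focus on previous mistakes. The pure-NumPy sketch below boosts decision stumps on synthetic data (the examination records are not reproduced); dataset, labels, and parameters are illustrative assumptions.

```python
import numpy as np

# Tiny AdaBoost with decision stumps.
rng = np.random.default_rng(6)
X = rng.uniform(0, 1, size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 1.0, 1, -1)    # synthetic diagonal boundary

def best_stump(X, y, w):
    """Weighted-error-minimizing threshold split over all features."""
    best = (0, 0.0, 1, np.inf)                  # feature, threshold, polarity, error
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - t) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, t, pol, err)
    return best

def adaboost(X, y, rounds=30):
    w = np.full(len(y), 1.0 / len(y))
    stumps = []
    for _ in range(rounds):
        f, t, pol, err = best_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # stump weight
        pred = np.where(pol * (X[:, f] - t) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)          # upweight misclassified points
        w /= w.sum()
        stumps.append((f, t, pol, alpha))
    return stumps

def predict(stumps, X):
    score = sum(a * np.where(p * (X[:, f] - t) > 0, 1, -1)
                for f, t, p, a in stumps)
    return np.sign(score)

stumps = adaboost(X, y)
acc = (predict(stumps, X) == y).mean()
print(acc)
```

A single stump can only draw an axis-aligned split, so it misclassifies roughly a quarter of this diagonal boundary; boosting many stumps drives the training error down sharply, mirroring the accuracy improvement reported.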

  2. Applying Tiab’s direct synthesis technique to dilatant non-Newtonian/Newtonian fluids

    Javier Andrés Martínez

    2011-08-01

    Full Text Available Non-Newtonian fluids, such as polymer solutions, have been used by the oil industry for many years as fracturing agents and drilling muds. These solutions, which normally include thickened water and jelled fluids, are injected into the formation to enhance oil recovery by improving sweep efficiency. It is worth noting that some heavy oils behave in a non-Newtonian manner. Non-Newtonian fluids do not have direct proportionality between applied shear stress and shear rate, and viscosity varies with shear rate depending on whether the fluid is pseudoplastic or dilatant. Viscosity decreases as shear rate increases for the former, whilst the reverse takes place for the latter. Mathematical models of conventional fluids thus fail when applied to non-Newtonian fluids. The pressure derivative curve is introduced in this descriptive work for a dilatant fluid and its pattern was observed. Tiab's direct synthesis (TDS) methodology was used as a tool for interpreting pressure transient data to estimate effective permeability, skin factors and non-Newtonian bank radius. The methodology was successfully verified by its application to synthetic examples. Also, compared to pseudoplastic behavior, it was found that the radial flow regime in the Newtonian zone of dilatant fluids takes longer to form, depending on both the flow behavior index and the consistency factor.
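The pseudoplastic/dilatant contrast described above is captured by the standard power-law (Ostwald-de Waele) model, where the flow behavior index n decides which way the apparent viscosity moves with shear rate. The consistency factor and shear rates below are made-up illustrative values.

```python
import numpy as np

# Power-law model: tau = K * gamma_dot**n, so the apparent viscosity is
#   mu_app = K * gamma_dot**(n - 1)
# n < 1: pseudoplastic (shear-thinning); n > 1: dilatant (shear-thickening).
gamma_dot = np.array([0.1, 1.0, 10.0, 100.0])   # shear rate, 1/s

def apparent_viscosity(K, n, gamma_dot):
    return K * gamma_dot ** (n - 1.0)

mu_pseudo = apparent_viscosity(K=0.5, n=0.6, gamma_dot=gamma_dot)
mu_dilat = apparent_viscosity(K=0.5, n=1.4, gamma_dot=gamma_dot)

assert np.all(np.diff(mu_pseudo) < 0)   # viscosity falls with shear rate
assert np.all(np.diff(mu_dilat) > 0)    # viscosity rises with shear rate
print(mu_pseudo, mu_dilat)
```

It is this n-dependence that the TDS analysis must account for, and why the Newtonian-zone radial flow regime develops differently for dilatant and pseudoplastic banks.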

  3. Adapting developing country epidemiological assessment techniques to improve the quality of health needs assessments in developed countries

    Handy Deirdre

    2005-04-01

    Full Text Available Abstract. Background: We were commissioned to carry out three health assessments in urban areas of Dublin, Ireland. We required an epidemiologically robust method that could collect data rapidly and inexpensively. We were dealing with inadequate health information systems, weak planning data and a history of inadequate recipient involvement in health service planning. These problems had also been identified by researchers carrying out health assessments in developing countries. This paper reports our experience of adapting a cluster survey model originally developed by international organisations to assess community health needs and service coverage in developing countries, and of applying our adapted model to three urban areas in Dublin, Ireland. Methods: We adapted the model to control for socio-economic heterogeneity, to take account of the inadequate population list, to ensure a representative sample and to account for a higher prevalence of degenerative and chronic diseases. We employed formal as well as informal communication methods and adjusted data collection times to maximise participation. Results: The model we adapted had the capacity to ascertain both health needs and health care delivery needs. The community participated throughout the process and members were trained and employed as data collectors. The assessments have been used by local health boards and non-governmental agencies to plan and deliver better or additional services. Conclusion: We were able to carry out high quality health needs assessments in urban areas by adapting and applying a developing country health assessment method. Issues arose relating to health needs assessment as part of the planning cycle and the role of participants in the process.

  4. A comparison of new, old and future densitometric techniques as applied to volcanologic study.

    Pankhurst, Matthew; Moreland, William; Dobson, Kate; Þórðarson, Þorvaldur; Fitton, Godfrey; Lee, Peter

    2015-04-01

    The density of any material imposes a primary control upon its potential or actual physical behaviour in relation to its surroundings. It follows that a thorough understanding of the physical behaviour of dynamic, multi-component systems, such as active volcanoes, requires knowledge of the density of each component. If we are to accurately predict the physical behaviour of synthesized or natural volcanic systems, quantitative densitometric measurements are vital. The theoretical density of melt, crystal and bubble phases may be calculated using composition, structure, temperature and pressure inputs. However, measuring the density of natural, non-ideal, poly-phase materials remains problematic, especially if phase-specific measurement is important. Here we compare three methods, Archimedes' principle, He-displacement pycnometry and X-ray micro computed tomography (XMT), and discuss the utility and drawbacks of each in the context of modern volcanologic study. We have measured tephra, ash and lava from the 934 AD Eldgjá eruption (Iceland) and the 2010 AD Eyjafjallajökull eruption (Iceland) using each technique. These samples exhibit a range of particle sizes, phases and textures. We find that while the Archimedes method remains a useful, low-cost technique to generate whole-rock density data, relative precision is problematic at small particle sizes. Pycnometry offers a more precise whole-rock density value, at a comparable cost-per-sample. However, this technique is based upon the assumption that pore spaces within the sample are equally available for gas exchange, which may or may not be the case. XMT produces 3D images, at resolutions from nm to tens of µm per voxel, where X-ray attenuation is a qualitative measure of relative electron density, expressed as greyscale number/brightness (usually 16-bit). Phases and individual particles can be digitally segmented according to their greyscale and other characteristics. This represents a distinct advantage over both
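The Archimedes method mentioned above reduces to a one-line formula: weigh the sample in air and suspended in water, and the mass difference equals the mass of displaced water. The masses below are made up for illustration; only the formula itself is standard.

```python
# Archimedes' principle for whole-rock density:
#   rho_sample = m_air / (m_air - m_water) * rho_water
rho_water = 0.9982            # g/cm^3 at roughly 20 degrees C

def archimedes_density(m_air, m_water, rho_fluid=rho_water):
    """Bulk density from dry weight and suspended (in-fluid) weight."""
    return m_air / (m_air - m_water) * rho_fluid

rho = archimedes_density(m_air=25.40, m_water=15.87)   # hypothetical grams
print(rho)    # about 2.66 g/cm^3
```

The precision problem the abstract notes at small particle sizes follows directly from this formula: the denominator is a small difference of two weighings, so balance error is amplified as the displaced mass shrinks.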

  5. Mass Movement Hazards in the Mediterranean; A review on applied techniques and methodologies

    Ziade, R.; Abdallah, C.; Baghdadi, N.

    2012-04-01

    Emergent populations and the expansion of settlements and life-lines over hazardous areas in the Mediterranean region have largely increased the impact of Mass Movements (MM) in both industrialized and developing countries. This trend is expected to continue in the next decades due to increased urbanization and development, continued deforestation and increased regional precipitation in MM-prone areas due to changing climatic patterns. Consequently, over the past few years, monitoring of MM has acquired great importance from the scientific community as well as the civilian one. This article begins with a discussion of the MM classification and the different topographic, geologic, hydrologic and environmental impacting factors. The intrinsic (preconditioning) variables determine the susceptibility of MM, and extrinsic (triggering) factors can induce the probability of MM occurrence. The evolution of slope instability studies is charted from geodetic or observational techniques, through geotechnical field-based origins, to recent higher levels of data acquisition through Remote Sensing (RS) and Geographic Information System (GIS) techniques. Since MM detection and zoning is difficult in remote areas, RS and GIS have enabled regional studies to predominate over site-based ones, as they provide multi-temporal images and hence greatly facilitate MM monitoring. The unusual extent of the spectrum of MM makes it difficult to define a single methodology to establish MM hazard. Since the probability of occurrence of MM is one of the key components in making rational decisions for the management of MM risk, scientists and engineers have developed physical parameters, equations and environmental process models that can be used as assessment tools for management, education, planning and legislative purposes. Assessment of MM is attained through various modeling approaches, mainly divided into three main sections: quantitative/Heuristic (1:2.000-1:10.000), semi-quantitative/Statistical (1

  6. Application of adaptive neuro-fuzzy inference system techniques and artificial neural networks to predict solid oxide fuel cell performance in residential microgeneration installation

    Entchev, Evgueniy; Yang, Libing [Integrated Energy Systems Laboratory, CANMET Energy Technology Centre, 1 Haanel Dr., Ottawa, Ontario (Canada)

    2007-06-30

    This study applies adaptive neuro-fuzzy inference system (ANFIS) techniques and an artificial neural network (ANN) to predict solid oxide fuel cell (SOFC) performance while supplying both heat and power to a residence. A microgeneration 5 kW{sub el} SOFC system was installed at the Canadian Centre for Housing Technology (CCHT), integrated with existing mechanical systems and connected in parallel to the grid. SOFC performance data were collected during the winter heating season and used for training of both the ANN and ANFIS models. The ANN model was built on the back-propagation algorithm, while for the ANFIS model a combination of the least-squares method and the back-propagation gradient-descent method was developed and applied. Both models were trained with experimental data and used to predict selected SOFC performance parameters such as fuel cell stack current, stack voltage, etc. The study revealed that both the ANN and ANFIS models' predictions agreed well with a variety of experimental data sets representing steady-state, start-up and shut-down operations of the SOFC system. The initial data set was subjected to a detailed sensitivity analysis, and statistically insignificant parameters were excluded from the training set. As a result, a significant reduction of computational time was achieved without affecting the models' accuracy. The study showed that adaptive models can be applied with confidence during the design process and for performance optimization of existing and newly developed solid oxide fuel cell systems. It demonstrated that by using ANN and ANFIS techniques an SOFC microgeneration system's performance could be modelled with minimum time demand and with a high degree of accuracy. (author)
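
    The abstract gives no implementation details; as a generic illustration of the kind of back-propagation training named for the ANN model (a toy one-hidden-layer network fit to an invented smooth curve, not the CCHT data or the authors' architecture), one might sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for SOFC training data: input = normalized load,
# target = a smooth nonlinear "stack voltage" curve (invented, not CCHT data).
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = 0.7 - 0.3 * x - 0.2 * x**2

# One hidden tanh layer, trained by plain batch back-propagation.
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred):
    return float(np.mean((pred - y) ** 2))

_, pred0 = forward(x)
initial_loss = mse(pred0)

lr = 0.05
for _ in range(4000):
    h, pred = forward(x)
    err = (pred - y) / len(x)        # gradient of 0.5 * MSE w.r.t. pred
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = err @ W2.T * (1 - h**2)     # back-propagate through tanh
    gW1 = x.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(x)
final_loss = mse(pred1)
```

    The same data could equally feed an ANFIS-style model; only the gradient-descent half of that hybrid scheme is shown here.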

  7. Adaptive resource allocation technique to stochastic multimodal projects : a distributed platform implementation in JAVA

    Tereso, Anabela Pereira; Mota, João; Lameiro, Rui

    2005-01-01

    This paper presents the implementation of the dynamic programming model (introduced in a previous paper) for the resolution of the adaptive resource allocation problem in stochastic multimodal project networks. A distributed platform using an Object Oriented language, Java, is used in order to take advantage of the available computational resources.

  8. Models of signal validation using artificial intelligence techniques applied to a nuclear reactor

    This work presents two models of signal validation in which the analytical redundancy of the monitored signals from a nuclear plant is provided by neural networks. In one model the analytical redundancy is provided by a single neural network, while in the other it is provided by several neural networks, each one working in a specific part of the entire operating region of the plant. Four clustering techniques were tested to separate the entire operating region into several specific regions. Additional information on the systems' reliability is supplied by a fuzzy inference system. The models were implemented in the C language and tested with signals acquired from the Angra I nuclear power plant, from start-up to 100% power. (author)
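
    The four clustering techniques are not named in the abstract; as one illustrative possibility (not the authors' code), a simple 1-D k-means split of a plant signal into operating regions, each of which would then get its own validation network, could look like:

```python
# Toy 1-D k-means: split a "reactor power" signal into operating regions.
def kmeans_1d(values, centers, iters=20):
    groups = [[] for _ in centers]
    for _ in range(iters):
        # Assignment step: each reading goes to its nearest center.
        groups = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        # Update step: move each center to the mean of its group
        # (keep the old center if the group is empty).
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers, groups

# Invented power readings clustered around low- and high-power operation.
readings = [9.0, 10.5, 11.0, 9.5, 48.0, 50.0, 52.5, 49.5]
centers, groups = kmeans_1d(readings, centers=[0.0, 100.0])
```

    In the multi-network model, each resulting region would train and serve its own redundancy network.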

  9. Vibroacoustic Modeling of Mechanically Coupled Structures: Artificial Spring Technique Applied to Light and Heavy Mediums

    L. Cheng

    1996-01-01

    This article deals with the modeling of vibrating structures immersed in both light and heavy fluids, and possible applications to noise control problems and industrial vessels containing fluids. A theoretical approach, using artificial spring systems to characterize the mechanical coupling between substructures, is extended to include fluid loading. A structure consisting of a plate-ended cylindrical shell and its enclosed acoustic cavity is analyzed. After a brief description of the proposed technique, a number of numerical results are presented. The analysis addresses the following specific issues: the coupling between the plate and the shell; the coupling between the structure and the enclosure; the possibilities and difficulties regarding internal soundproofing through modifications of the joint connections; and the effects of fluid loading on the vibration of the structure.

  10. The principles of quality assurance and quality control applied to both equipment and techniques

    QA is a management technique that, in diagnostic radiology, should be carried out to ensure the production of high quality diagnostic images for the minimum patient radiation dose. It will require a quality control programme involving the selective testing of each major system component on a regular basis. The major systems in diagnostic radiology concern X-ray production, X-ray detection, image processing and image viewing. For a given system, there are many possible variables that might be monitored, and it is important to balance the potential dose savings against the cost of monitoring. In the case of X-ray production, for example, it may be adequate to confine regular testing to AEDs, radiographic output and beam alignment, once the initial checks have been performed. A QA programme should also include a film reject analysis. In nuclear medicine, the QA programme should include checks on the dose calibrator as well as checks on the gamma cameras used for imaging. (Author)

  11. Nuclear analytical techniques applied to forensic chemistry

    Nicolau, Veronica; Montoro, Silvia [Universidad Nacional del Litoral, Santa Fe (Argentina). Facultad de Ingenieria Quimica. Dept. de Quimica Analitica; Pratta, Nora; Giandomenico, Angel Di [Consejo Nacional de Investigaciones Cientificas y Tecnicas, Santa Fe (Argentina). Centro Regional de Investigaciones y Desarrollo de Santa Fe

    1999-11-01

    Gunshot residues produced by firing guns are mainly composed of small particles. The individual characterization of these particles allows distinguishing those containing heavy metals, originating from gunshot residues, from those having a different origin or history. In this work, the results obtained from the study of gunshot residue particles collected from hands are presented. The aim of the analysis is to establish whether a person has fired a gun or has been in contact with one after the shot was produced. As reference samples, particles collected from the hands of persons engaged in different activities were studied for comparison. The complete study was based on the application of nuclear analytical techniques such as Scanning Electron Microscopy, Energy Dispersive X Ray Electron Probe Microanalysis and Graphite Furnace Atomic Absorption Spectrometry. The assays can be completed within a time compatible with forensic requirements. (author) 5 refs., 3 figs., 1 tab.; e-mail: csedax e adigian at arcride.edu.ar

  12. Research on Key Techniques for Video Surveillance System Applied to Shipping Channel Management

    WANG Lin; ZHUANG Yan-bin; ZHENG Cheng-zeng

    2007-01-01

    A video patrol and inspection system is an important part of the government's shipping channel information management. This system is mainly applied to video information gathering and processing as a patrol is carried out. The system described in this paper can preview, edit, and add essential explanatory messages to the collected video data. It then transfers these data and messages to a video server, from which managers and engineering and technical personnel can retrieve, play, chart, download or print them. Each department of the government will use the system's functions according to that department's mission. The system provides an effective means for managing the shipping enterprise, as well as a valuable reference for the modernization of waterborne shipping.

  13. Contrast cancellation technique applied to digital x-ray imaging using silicon strip detectors

    Dual-energy mammographic imaging experimental tests have been performed using a compact dichromatic imaging system based on a conventional x-ray tube, a mosaic crystal, and a 384-strip silicon detector equipped with full-custom electronics with single-photon counting capability. For simulating breast tissue, a three-component phantom, made of Plexiglass, polyethylene, and water, has been used. Images have been collected with three different pairs of x-ray energies: 16-32 keV, 18-36 keV, and 20-40 keV. A Monte Carlo simulation of the experiment has also been carried out using the MCNP-4C transport code. The Alvarez-Macovski algorithm has been applied both to experimental and simulated data to remove the contrast between two of the phantom materials so as to enhance the visibility of the third one.
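
    The contrast-cancellation step can be illustrated with a toy two-energy calculation: in a log-attenuation model, a weighted subtraction of the two energy images cancels the contrast between two chosen materials while leaving the third visible. The attenuation coefficients below are invented for illustration, not measured or tabulated values:

```python
# Invented linear attenuation coefficients (1/cm) at a low and a high
# x-ray energy; real values would come from tables or calibration scans.
mu_low  = {"plexiglass": 0.55, "polyethylene": 0.45, "water": 0.65}
mu_high = {"plexiglass": 0.25, "polyethylene": 0.20, "water": 0.28}

def log_signal(material, thickness_cm, mu):
    """Log-attenuation -ln(I/I0) = mu * t for a single-material slab."""
    return mu[material] * thickness_cm

def cancellation_weight(mat_a, mat_b):
    """Weight w such that S_low - w*S_high is identical for equal-thickness
    slabs of mat_a and mat_b, i.e. their mutual contrast is cancelled."""
    return (mu_low[mat_a] - mu_low[mat_b]) / (mu_high[mat_a] - mu_high[mat_b])

w = cancellation_weight("plexiglass", "polyethylene")
t = 2.0  # slab thickness, cm
combo = {m: log_signal(m, t, mu_low) - w * log_signal(m, t, mu_high)
         for m in mu_low}
```

    With this weight, Plexiglass and polyethylene give the same combined signal, so only the water component retains contrast, which is the qualitative effect the phantom experiment exploits.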

  14. Linear and Non-Linear Control Techniques Applied to Actively Lubricated Journal Bearings

    Nicoletti, Rodrigo; Santos, Ilmar

    2003-01-01

    The main objectives of actively lubricated bearings are the simultaneous reduction of wear and vibration between rotating and stationary machinery parts. For reducing wear and dissipating vibration energy until certain limits, one can count with the conventional hydrodynamic lubrication. For...... further reduction of shaft vibrations one can count with the active lubrication action, which is based on injecting pressurised oil into the bearing gap through orifices machined in the bearing sliding surface. The design and efficiency of some linear (PD, PI and PID) and non-linear controllers, applied...... to a tilting-pad journal bearing, are analysed and discussed. Important conclusions about the application of integral controllers, responsible for changing the rotor-bearing equilibrium position and consequently the "passive" oil film damping coefficients, are achieved. Numerical results show an...
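
    As a generic illustration of the linear control laws compared in the paper (a plain PD law acting on a toy one-degree-of-freedom oscillator; the parameters and model are invented, not the authors' tilting-pad bearing model), consider:

```python
# Toy 1-DOF system x'' = (-k x - c x' + u) / m, where the PD control force
# u = -Kp*x - Kd*x' stands in for the active oil-injection force and the
# small passive damping c for conventional hydrodynamic lubrication.
def simulate(Kp, Kd, m=1.0, k=1.0, c=0.05, x0=1.0, dt=0.01, steps=5000):
    x, v = x0, 0.0
    for _ in range(steps):
        u = -Kp * x - Kd * v              # PD control force
        a = (-k * x - c * v + u) / m      # Newton's second law
        v += a * dt                        # semi-implicit Euler step
        x += v * dt
    return x

x_passive = simulate(Kp=0.0, Kd=0.0)   # hydrodynamic damping only
x_active  = simulate(Kp=2.0, Kd=1.0)   # with PD "active lubrication"
```

    The derivative gain raises the effective damping, so the controlled response decays far faster than the passive one, mirroring the role the paper assigns to active lubrication in dissipating vibration energy.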

  15. Linear and Non-Linear Control Techniques Applied to Actively Lubricated Journal Bearings

    Nicoletti, Rodrigo; Santos, Ilmar

    2003-01-01

    The main objectives of actively lubricated bearings are the simultaneous reduction of wear and vibration between rotating and stationary machinery parts. For reducing wear and dissipating vibration energy until certain limits, one can count with the conventional hydrodynamic lubrication....... For further reduction of shaft vibrations one can count with the active lubrication action, which is based on injecting pressurised oil into the bearing gap through orifices machined in the bearing sliding surface. The design and efficiency of some linear (PD, PI and PID) and non-linear controllers, applied...... vibration reduction of unbalance response of a rigid rotor, where the PD and the non-linear P controllers show better performance for the frequency range of study (0 to 80 Hz). The feasibility of eliminating rotor-bearing instabilities (phenomena of whirl) by using active lubrication is also investigated...

  16. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction]

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  17. A Research on Determination of Lecithin in Eggs by Applying Microwave Digestion Techniques and Spectrophotometry

    A method to quickly determine the concentration of lecithin in eggs, namely microwave digestion spectrophotometry, was established in this research. The egg homogenate was treated with absolute ethanol to eliminate phosphoproteins, which could otherwise affect the measured lecithin concentration. The sample then received a new pre-treatment, microwave digestion, before UV-Vis spectrophotometry was applied to determine the phosphate concentration at 400 nm. The linear equation was A = 0.08628X (μg), the corresponding correlation coefficient was 0.9998, and the detection limit for phosphorus was 0.2 μg (n=11). The lecithin content of the eggs was then obtained. A recovery of 90% was achieved, indicating a high degree of accuracy.
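
    Using the reported calibration line A = 0.08628·X (X in μg of phosphorus), converting a measured absorbance back to phosphorus mass is a one-line inversion. The phosphorus-to-phospholipid factor below (×25) is a commonly used approximation assumed for illustration, not a value given in the abstract:

```python
SLOPE = 0.08628        # absorbance per microgram of phosphorus (reported)
P_TO_LECITHIN = 25.0   # assumed phosphorus-to-phospholipid factor,
                       # NOT taken from the abstract

def phosphorus_ug(absorbance):
    """Invert the reported calibration line A = 0.08628 * X."""
    return absorbance / SLOPE

def lecithin_ug(absorbance):
    """Scale phosphorus mass to an approximate phospholipid mass."""
    return P_TO_LECITHIN * phosphorus_ug(absorbance)

p = phosphorus_ug(0.4314)   # 0.4314 absorbance corresponds to 5.0 ug P
```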

  18. Applying the sterile insect technique to the control of insect pests

    The sterile insect technique (SIT) is basically a novel twentieth century approach to insect birth control. It is species specific and exploits the mate seeking behaviour of the insect. The basic principle is simple. Insects are mass reared in 'factories' and sexually sterilized by gamma rays from a 60Co source. The sterile insects are then released in a controlled fashion into nature. Matings between the sterile insects released and native insects produce no progeny. If enough of these matings take place, reproduction of the pest population decreases. With continued release, the pest population can be controlled and in some cases eradicated. In the light of the many important applications of the SIT worldwide and the great potential that SIT concepts hold for insect and pest control in developing countries, two special benefits should be stressed. Of greatest significance is the fact that the SIT permits suppression and eradication of insect pests in an environmentally harmless manner. It combines nuclear techniques with genetic approaches and, in effect, replaces intensive use of chemicals in pest control. Although chemicals are used sparingly at the outset in some SIT programmes to reduce the size of the pest population before releases of sterilized insects are started, the total amount of chemicals used in an SIT programme is a mere fraction of what would be used without the SIT. It is also of great importance that the SIT is not designed strictly for the eradication of pest species but can readily be used in the suppression of insect populations. In fact, the SIT is ideally suited for use in conjunction with other agricultural pest control practices such as the use of parasites and predators, attractants and cultural controls (e.g. ploughing under or destruction of crop residues) in integrated pest management programmes to achieve control at the lowest possible price and with a minimum of chemical contamination of the environment

  19. Nuclear analytical techniques applied to the large scale measurements of atmospheric aerosols in the amazon region

    This work presents the characterization of the atmospheric aerosol collected in different places of the Amazon Basin. We studied both the biogenic emission from the forest and the particulate material which is emitted to the atmosphere by the large scale man-made burning during the dry season. The samples were collected during a three year period at two different locations in the Amazon, namely the Alta Floresta (MT) and Serra do Navio (AP) regions, using stacked filter units. These regions represent two different atmospheric compositions: the aerosol is dominated by the forest's natural biogenic emission at Serra do Navio, while at Alta Floresta it presents an important contribution from the man-made burning during the dry season. At Alta Floresta we took samples in gold shops in order to characterize mercury emission to the atmosphere related to the gold prospection activity in the Amazon. Airplanes were used for aerosol sampling during the 1992 and 1993 dry seasons to characterize the atmospheric aerosol content from man-made burning over large Amazonian areas. The samples were analyzed using several nuclear analytical techniques: Particle Induced X-ray Emission for the quantitative analysis of trace elements with atomic number above 11; Particle Induced Gamma-ray Emission for the quantitative analysis of Na; and the Proton Microprobe for the characterization of individual particles of the aerosol. The reflectance technique was used for black carbon quantification, gravimetric analysis to determine the total atmospheric aerosol concentration, and Cold Vapor Atomic Absorption Spectroscopy for quantitative analysis of mercury in the particulate from the Alta Floresta gold shops. Ionic chromatography was used to quantify the ionic content of aerosols from the fine mode particulate samples from Serra do Navio. Multivariate statistical analysis was used in order to identify and characterize the sources of the atmospheric aerosol present in the sampled regions. (author)

  20. Time-reversal imaging techniques applied to tremor waveforms near Cholame, California to locate tectonic tremor

    Horstmann, T.; Harrington, R. M.; Cochran, E. S.

    2012-12-01

    Frequently, the lack of distinctive phase arrivals makes locating tectonic tremor more challenging than locating earthquakes. Classic location algorithms based on travel times cannot be directly applied because impulsive phase arrivals are often difficult to recognize. Traditional location algorithms are often modified to use phase arrivals identified from stacks of recurring low-frequency events (LFEs) observed within tremor episodes, rather than single events. Stacking the LFE waveforms improves the signal-to-noise ratio for the otherwise non-distinct phase arrivals. In this study, we apply a different method to locate tectonic tremor: a modified time-reversal imaging approach that potentially exploits the information from the entire tremor waveform instead of phase arrivals from individual LFEs. Time reversal imaging uses the waveforms of a given seismic source recorded by multiple seismometers at discrete points on the surface and a 3D velocity model to rebroadcast the waveforms back into the medium to identify the seismic source location. In practice, the method works by reversing the seismograms recorded at each of the stations in time, and back-propagating them from the receiver location individually into the sub-surface as a new source time function. We use a staggered-grid, finite-difference code with 2.5 ms time steps and a grid node spacing of 50 m to compute the rebroadcast wavefield. We calculate the time-dependent curl field at each grid point of the model volume for each back-propagated seismogram. To locate the tremor, we assume that the source time function back-propagated from each individual station produces a similar curl field at the source position. We then cross-correlate the time dependent curl field functions and calculate a median cross-correlation coefficient at each grid point. The highest median cross-correlation coefficient in the model volume is expected to represent the source location. 
For our analysis, we use the velocity model of
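
    The full method back-propagates waveforms through a 3-D velocity model and correlates curl fields; a drastically simplified 1-D analogue of the same back-propagate-and-score idea (homogeneous velocity, delay-and-stack coherence instead of finite-difference curl fields, all numbers invented) can be sketched as:

```python
# 1-D toy: stations record a pulse from an unknown source; we "rebroadcast"
# each record by undoing its travel-time delay at every candidate grid
# point and score each point by how coherently the records stack there.
n_t = 400
dt = 0.01                # s
v = 2.0                  # km/s, uniform velocity (stand-in for a 3D model)
grid = [0.1 * i for i in range(100)]      # candidate source positions, km
receivers = [0.0, 3.3, 6.1, 9.9]          # station positions, km
true_src = 4.2                             # km (to be recovered)

def pulse(i):            # short two-sample wavelet
    return {0: 1.0, 1: 0.5}.get(i, 0.0)

# Synthesize the recorded seismograms as travel-time-delayed pulses.
records = []
for r in receivers:
    delay = int(round(abs(r - true_src) / v / dt))
    records.append([pulse(i - delay) for i in range(n_t)])

def score(x):
    """Stacked energy after shifting each record back by its candidate
    travel time; peaks where the rebroadcast wavefields align."""
    stack = [0.0] * n_t
    for r, rec in zip(receivers, records):
        shift = int(round(abs(r - x) / v / dt))
        for i in range(n_t - shift):
            stack[i] += rec[i + shift]
    return sum(s * s for s in stack)

best = max(grid, key=score)
```

    The cross-correlation of curl fields in the paper plays the role of this coherence score, with the full elastic wave equation replacing the simple delay operator.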

  1. Prediction of Quality Features in Iberian Ham by Applying Data Mining on Data From MRI and Computer Vision Techniques

    Daniel Caballero

    2014-03-01

    This paper aims to predict quality features of Iberian hams by using non-destructive methods of analysis and data mining. Iberian hams were analyzed by Magnetic Resonance Imaging (MRI) and Computer Vision Techniques (CVT) throughout their ripening process, and physico-chemical parameters were also measured. The obtained data were used to create an initial database. Deductive data mining techniques (multiple linear regression) were used to estimate new data, allowing the insertion of new records in the database. Predictive data mining techniques (multiple linear regression) were applied to the MRI-CVT data, achieving prediction equations for weight, moisture and lipid content. Finally, data from the prediction equations were compared to data determined by physical-chemical analysis, obtaining high correlation coefficients in most cases. Therefore, data mining, MRI and CVT are suitable tools to estimate quality traits of Iberian hams. This would improve the control of ham processing in a non-destructive way.
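
    The predictive step is ordinary multiple linear regression; a generic least-squares sketch (with synthetic stand-ins for the MRI/CVT features, not the paper's data) is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for MRI/CVT image features of n hams, and a
# noiseless "weight" response built from invented coefficients.
n = 40
features = rng.uniform(0.0, 1.0, (n, 3))   # e.g. texture statistics
true_coefs = np.array([2.0, -1.0, 0.5])
weight = 6.0 + features @ true_coefs

# Multiple linear regression: least squares with an intercept column.
X = np.column_stack([np.ones(n), features])
coefs, *_ = np.linalg.lstsq(X, weight, rcond=None)

def predict(f):
    """Apply the fitted prediction equation to a new feature vector."""
    return coefs[0] + f @ coefs[1:]
```

    The same fit, applied to measured physico-chemical targets, yields the kind of prediction equation the paper reports for weight, moisture and lipid content.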

  2. Condition monitoring and signature analysis techniques as applied to Madras Atomic Power Station (MAPS) [Paper No.: VIA - 1]

    The technique of vibration signature analysis for identifying machine troubles at an early stage is explained. The advantage is that timely corrective action can be planned to avoid breakdowns and unplanned shutdowns. At the Madras Atomic Power Station (MAPS), this technique is applied to regularly monitor equipment vibrations and thus serves as a tool for corrective maintenance of equipment. Case studies of the application of this technique to main boiler feed pumps, moderator pump motors, a centrifugal chiller, ventilation system fans, thermal shield ventilation fans, filtered water pumps, emergency process sea water pumps, and antifriction bearings at MAPS are presented. Condition monitoring during commissioning and subsequent operation could indicate defects. The corrective actions which were taken are described. (M.G.B.)

  3. Performance Analysis of the Different Null Steering Techniques in the Field of Adaptive Beamforming

    Fawad Zaman; Bilal Shoaib; Zafar Ullah Khan; Shahid Mehmood

    2013-01-01

    In this study, we compare the performance of three null steering techniques using a uniform linear array. These techniques include Null Steering without using Phase Shifters, Null Steering by Decoupling the Real Weights and Null Steering by Decoupling the Complex Weights. The evaluation of these techniques is based on different parameters, i.e., null depth, main beam width, side lobe levels, number of steerable nulls, computational complexity and number of sensors used in t...

  4. Performance Analysis of the Different Null Steering Techniques in the Field of Adaptive Beamforming

    Fawad Zaman

    2013-04-01

    In this study, we compare the performance of three null steering techniques using a uniform linear array. These techniques include Null Steering without using Phase Shifters, Null Steering by Decoupling the Real Weights and Null Steering by Decoupling the Complex Weights. The evaluation of these techniques is based on different parameters, i.e., null depth, main beam width, side lobe levels, number of steerable nulls, computational complexity and number of sensors used in the array. The validity and effectiveness of these techniques is reflected by the resultant radiation pattern of the array.
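
    For a uniform linear array, one standard way to force a null is to project the null direction out of the beam-steering weights. The sketch below is a generic textbook illustration of null placement, not an implementation of any of the three specific techniques compared in the paper:

```python
import numpy as np

# Half-wavelength-spaced uniform linear array: steering vector a(theta).
N = 8
def steering(theta_deg):
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(np.radians(theta_deg)))

# Point the main beam at theta0 and force a null at theta_null by
# subtracting the projection of the beam weights onto the null direction.
theta0, theta_null = 0.0, 30.0
a0, an = steering(theta0), steering(theta_null)
w = a0 - (np.vdot(an, a0) / np.vdot(an, an)) * an   # vdot conjugates arg 1

def response(theta_deg):
    """Magnitude of the array response w^H a(theta)."""
    return np.abs(np.vdot(w, steering(theta_deg)))
```

    Null depth, beamwidth and sidelobe levels, the comparison criteria in the paper, can then be read directly off `response` evaluated over a grid of angles.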

  5. Use of pesticides and experience of applying radioisotope techniques in a developing country

    An evaluation is made of the use of pesticides by Panamanian farmers in a tropical environment, also covering pesticide residues in plant and animal products, man and soil. In addition, experience with radioisotope techniques is described. Chemical control is common practice among farmers. Each year, 5000 to 6000 t of pesticides are used, especially in horticulture and banana cultivation. Herbicides and insecticides predominate in terms of quantity, and fungicides in terms of frequency of application. Use of the so-called persistent organo-chlorines over the last few decades has led to the presence of residues in plant and animal products in amounts less than 2.2 and 0.1 mg/kg for DDT and lindane, respectively. An average of 11 mg of DDT per kilogram of fat has been detected in the population; about 50% of the persons handling agrochemicals showed direct exposure. Taking into account local practices and tropical conditions, an evaluation is being made of widely used pesticides (maneb, paraquat and 2,4-D) labelled with 14C. The studies have yielded additional information on the behaviour and the residues of these important additives in the environment and in fruits. (author). 3 refs, 1 fig., 5 tabs

  6. Applying satellite remote sensing technique in disastrous rainfall systems around Taiwan

    Liu, Gin-Rong; Chen, Kwan-Ru; Kuo, Tsung-Hua; Liu, Chian-Yi; Lin, Tang-Huang; Chen, Liang-De

    2016-05-01

    Many people in Asia regions have been suffering from disastrous rainfalls year by year. The rainfall from typhoons or tropical cyclones (TCs) is one of their key water supply sources, but from another perspective such TCs may also bring forth unexpected heavy rainfall, thereby causing flash floods, mudslides or other disasters. So far we cannot stop or change a TC route or intensity via present techniques. Instead, however we could significantly mitigate the possible heavy casualties and economic losses if we can earlier know a TC's formation and can estimate its rainfall amount and distribution more accurate before its landfalling. In light of these problems, this short article presents methods to detect a TC's formation as earlier and to delineate its rainfall potential pattern more accurate in advance. For this first part, the satellite-retrieved air-sea parameters are obtained and used to estimate the thermal and dynamic energy fields and variation over open oceans to delineate the high-possibility typhoon occurring ocean areas and cloud clusters. For the second part, an improved tropical rainfall potential (TRaP) model is proposed with better assumptions then the original TRaP for TC rainfall band rotations, rainfall amount estimation, and topographic effect correction, to obtain more accurate TC rainfall distributions, especially for hilly and mountainous areas, such as Taiwan.

  7. Super-ensemble techniques applied to wave forecast: performance and limitations

    F. Lenartz

    2010-06-01

    Nowadays, several operational ocean wave forecasts are available for the same region. These predictions may differ considerably, and choosing the best one is generally a difficult task. The super-ensemble approach, which consists in merging different forecasts and past observations into a single multi-model prediction system, is evaluated in this study. During the DART06 campaigns organized by the NATO Undersea Research Centre, four wave forecasting systems were simultaneously run in the Adriatic Sea, and significant wave height was measured at six stations as well as along the tracks of two remote sensors. This effort provided the necessary data set to compare the skills of various multi-model combination techniques. Our results indicate that a super-ensemble based on the Kalman Filter improves the forecast skill: the bias during both the hindcast and forecast periods is reduced, and the correlation coefficient is similar to that of the best individual model. The spatial extrapolation of local results is not straightforward and requires further investigation before it can be properly implemented.
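
    A minimal sketch of a Kalman-filter super-ensemble, a recursive update of model combination weights against observations, is shown below with synthetic wave heights and invented blending weights; it is not the DART06 configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic model forecasts of significant wave height; the "truth"
# is a fixed blend of them plus observation noise (weights invented).
steps = 300
m1 = 1.0 + 0.5 * np.sin(np.linspace(0, 12, steps))
m2 = 1.2 + 0.4 * np.cos(np.linspace(0, 9, steps))
obs = 0.6 * m1 + 0.4 * m2 + rng.normal(0, 0.02, steps)

# Kalman filter on the combination weights: state w, scalar observation
# obs[t], observation operator H_t = [m1[t], m2[t]], no process noise.
w = np.zeros(2)          # initial combination weights
P = np.eye(2) * 10.0     # initial state covariance (vague prior)
R = 0.02 ** 2            # observation-noise variance

for t in range(steps):
    H = np.array([m1[t], m2[t]])
    S = H @ P @ H + R            # innovation variance (scalar)
    K = P @ H / S                # Kalman gain
    w = w + K * (obs[t] - H @ w) # state update toward the observation
    P = P - np.outer(K, H) @ P   # covariance update (I - K H) P
```

    The filtered weights converge toward the blend that generated the observations, so the combined forecast tracks the data more closely than either model alone, which is the mechanism behind the bias reduction reported in the study.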

  8. Modern Chemistry Techniques Applied to Metal Behavior and Chelation in Medical and Environmental Systems - Final Report

    Sutton, M; Andresen, B; Burastero, S R; Chiarappa-Zucca, M L; Chinn, S C; Coronado, P R; Gash, A E; Perkins, J; Sawvel, A M; Szechenyi, S C

    2005-02-03

    This report details the research and findings generated over the course of a 3-year research project funded by Lawrence Livermore National Laboratory (LLNL) Laboratory Directed Research and Development (LDRD). Originally tasked with studying beryllium chemistry and chelation for the treatment of Chronic Beryllium Disease and the environmental remediation of beryllium-contaminated environments, this work has yielded results on beryllium and uranium solubility and speciation associated with toxicology; specific and effective chelation agents for beryllium, capable of lowering beryllium tissue burden and increasing urinary excretion in mice, and of dissolving beryllium contamination at LLNL Site 300; {sup 9}Be NMR studies not previously performed at LLNL; secondary ion mass spectrometry (SIMS) imaging of beryllium in spleen and lung tissue; and beryllium interactions with aerogel/GAC material for environmental cleanup. The results show that chelator development using modern chemical techniques, such as chemical thermodynamic modeling, was successful in identifying and utilizing tried and tested beryllium chelators for use in medical and environmental scenarios. Additionally, a study of uranium speciation in simulated biological fluids identified the uranium species present in urine, gastric juice, pancreatic fluid, airway surface fluid, simulated lung fluid, bile, saliva, plasma, interstitial fluid and intracellular fluid.

  9. Hyperspectral imaging techniques applied to the monitoring of wine waste anaerobic digestion process

    Serranti, Silvia; Fabbri, Andrea; Bonifazi, Giuseppe

    2012-11-01

    An anaerobic digestion process, finalized to biogas production, is characterized by different steps involving the variation of chemical and physical parameters related to the presence of specific biomasses, such as pH, chemical oxygen demand (COD), volatile solids, nitrate (NO3{sup -}) and phosphate (PO4{sup 3-}). A correct process characterization requires periodical sampling of the organic mixture in the reactor and further analysis of the samples by traditional chemical-physical methods. Such an approach is discontinuous, time-consuming and expensive. A new analytical approach based on hyperspectral imaging in the NIR range (1000 to 1700 nm) is investigated and critically evaluated, with reference to the monitoring of the wine waste anaerobic digestion process. The application of the proposed technique was addressed to identify and demonstrate the correlation existing, in terms of quality and reliability of the results, between "classical" chemical-physical parameters and spectral features of the digestate samples. Good results were obtained, ranging from R2=0.68 and RMSECV=12.83 mg/l for nitrate to R2=0.90 and RMSECV=5495.16 mg O2/l for COD. The proposed approach seems very useful in setting up innovative control strategies allowing for full, continuous control of the anaerobic digestion process.

  10. Dosimetric properties of bio minerals applied to high-dose dosimetry using the TSEE technique

    Vila, G. B.; Caldas, L. V. E., E-mail: gbvila@ipen.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

    The study of dosimetric properties such as reproducibility, residual signal, lower detection dose, dose-response curve and fading of the thermally stimulated exo-electron emission (TSEE) signal of Brazilian bio minerals has shown that these materials have potential for use as radiation dosimeters. The reproducibility within ± 10% for oyster shell, mother-of-pearl and coral reef samples showed that the signal dispersion is small compared with the mean value of the measurements. The study showed that the residual signal can be eliminated with a thermal treatment at 300 °C/1 h. The lower detection dose of 9.8 Gy determined for the oyster shell samples when exposed to beta radiation, and of 1.6 Gy for oyster shell and mother-of-pearl samples when exposed to gamma radiation, can be considered good, taking into account the high doses of this study. The materials presented linearity in the dose-response curves over some ranges, but the lack of linearity in other cases presents no problem since a good mathematical description is possible. The fading study showed that the loss of TSEE signal can be minimized if the samples are protected from interferences such as light, heat and humidity. Taking into account the useful linearity range as the main dosimetric characteristic, the tiger shell and oyster shell samples are the most suitable for high-dose dosimetry using the TSEE technique. (Author)

  12. Applying advanced imaging techniques to a murine model of orthotopic osteosarcoma

    Matthew Lawrence Broadhead

    2015-08-01

    Full Text Available Introduction: Reliable animal models are required to evaluate novel treatments for osteosarcoma. In this study, the aim was to implement advanced imaging techniques in a murine model of orthotopic osteosarcoma to improve disease modeling and the assessment of primary and metastatic disease. Materials and methods: Intra-tibial injection of luciferase-tagged OPGR80 murine osteosarcoma cells was performed in Balb/c nude mice. The treatment agent (pigment epithelium-derived factor; PEDF) was delivered to the peritoneal cavity. Primary tumors and metastases were evaluated by in vivo bioluminescent assays, micro-computed tomography, [18F]-Fluoride-PET and [18F]-FDG-PET. Results: [18F]-Fluoride-PET was more sensitive than [18F]-FDG-PET for detecting early disease. Both [18F]-Fluoride-PET and [18F]-FDG-PET showed progressive disease in the model, with 4-fold and 2-fold increases in SUV (p<0.05) by the study endpoint, respectively. In vivo bioluminescent assay showed that systemically delivered PEDF inhibited growth of the primary osteosarcoma. Discussion: Application of [18F]-Fluoride-PET and [18F]-FDG-PET to an established murine model of orthotopic osteosarcoma has improved the assessment of disease. The use of targeted imaging should prove beneficial for the evaluation of new approaches to osteosarcoma therapy.

  13. Laser granulometry: A comparative study of the sieving and elutriation techniques applied to pozzolanic materials

    Frías, M.

    1990-03-01

    Full Text Available Laser granulometry is a rapid method for determining particle size distributions in both dry and wet phases. In the present paper, the laser-beam diffraction technique is applied to the granulometric study of pozzolanic materials in suspension. These granulometric analyses are compared with those obtained with the Alpine pneumatic sieve and the Bahco elutriator-centrifuge.

  14. Erasing the Milky Way: new cleaning technique applied to GBT intensity mapping data

    Wolz, L.; Abdalla, F. B.; Anderson, C. M.; Chang, T.-C.; Li, Y.-C.; Masui, K. W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.; Yadav, J.

    2015-01-01

    We present the first application of a new foreground removal pipeline to the current leading HI intensity mapping dataset, obtained with the Green Bank Telescope (GBT). We study the 15 hr and 1 hr field data of the GBT observations previously presented in Masui et al. (2013) and Switzer et al. (2013), covering about 41 square degrees at 0.6 < z < 1.0, which overlap with the WiggleZ galaxy survey employed for cross-correlation with the maps. In the presented pipeline, we subtract the Galactic foreground continuum and point-source contamination using an independent component analysis technique (fastica), and we develop a Fourier-based optimal weighting estimator to compute the temperature power spectrum of the intensity maps and the cross-correlation with the galaxy survey data. We show that fastica is a reliable tool for subtracting diffuse and point-source emission by exploiting the non-Gaussian nature of their probability distributions. The power spectra of the intensity maps and the cross-correlation...
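
    The fastica separation exploits the non-Gaussianity of the components, as noted above. A toy sketch with scikit-learn's FastICA, using a smooth sinusoid as a stand-in "foreground" and a heavy-tailed random series as a stand-in "signal" (all values illustrative, not related to the GBT maps):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n = 2000
t = np.linspace(0, 1, n)
# Smooth, bright "foreground" and a heavy-tailed, non-Gaussian "signal"
foreground = np.sin(2 * np.pi * 3 * t)
signal = rng.laplace(scale=0.2, size=n)
# Two "frequency channels" = different linear mixtures of the components
X = np.c_[foreground + 0.5 * signal, 0.8 * foreground - 0.3 * signal]

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)  # recovered independent components
# The foreground is the recovered component most correlated with the sinusoid
corr = [abs(np.corrcoef(S[:, i], foreground)[0, 1]) for i in range(2)]
print("max |corr| with foreground:", max(corr))
```

    In the real pipeline each sky pixel provides a spectrum across many frequency channels, and the dominant ICA components (foregrounds) are subtracted rather than kept.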

  15. Applying data fusion techniques for benthic habitat mapping and monitoring in a coral reef ecosystem

    Zhang, Caiyun

    2015-06-01

    Accurate mapping and effective monitoring of benthic habitat in the Florida Keys are critical for developing management strategies for this valuable coral reef ecosystem. In this study, a framework was designed for automated benthic habitat mapping by combining multiple data sources (hyperspectral imagery, aerial photography, and bathymetry data) and four contemporary image processing techniques (data fusion, Object-based Image Analysis (OBIA), machine learning, and ensemble analysis). In the framework, the 1-m digital aerial photography was first merged with the 17-m hyperspectral imagery and 10-m bathymetry data using a pixel/feature-level fusion strategy. The fused dataset was then pre-classified by three machine learning algorithms (Random Forest, Support Vector Machines, and k-Nearest Neighbor). Final object-based habitat maps were produced through ensemble analysis of the outcomes from the three classifiers. The framework was tested for classifying group-level (3-class) and code-level (9-class) habitats in a portion of the Florida Keys. Informative and accurate habitat maps were achieved, with overall accuracies of 88.5% and 83.5% for the group-level and code-level classifications, respectively.
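
    The three pre-classifiers combined by ensemble analysis can be sketched with scikit-learn's majority-vote ensemble; the synthetic features below stand in for fused spectral/bathymetry pixels and are purely illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for fused pixel features (e.g. 3 "group-level" classes)
X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Three classifiers combined by hard (majority) voting = ensemble analysis
ens = VotingClassifier([
    ("rf", RandomForestClassifier(random_state=0)),
    ("svm", SVC(random_state=0)),
    ("knn", KNeighborsClassifier()),
], voting="hard")
ens.fit(Xtr, ytr)
acc = accuracy_score(yte, ens.predict(Xte))
print(f"overall accuracy: {acc:.3f}")
```

    The paper's workflow additionally groups pixels into objects (OBIA) before voting; this sketch votes per sample for brevity.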

  16. Experiences in applying optimization techniques to configurations for the Control of Flexible Structures (COFS) program

    Walsh, Joanne L.

    1989-01-01

    Optimization procedures are developed to systematically provide closely spaced vibration frequencies. A general-purpose finite-element program for eigenvalue and sensitivity analyses is combined with formal mathematical programming techniques. Results are presented for three studies. The first study uses a simple model to obtain a design with two pairs of closely spaced frequencies. Two formulations are developed: an objective-function-based formulation and a constraint-based formulation for the frequency spacing. It is found that conflicting goals are handled better by the constraint-based formulation. The second study uses a detailed model to obtain a design with one pair of closely spaced frequencies while satisfying requirements on local member frequencies and manufacturing tolerances. Two formulations are again developed. Both the constraint-based and the objective-function-based formulations perform reasonably well and converge to the same results. However, no feasible design solution exists that satisfies all design requirements for the chosen design variables and their upper and lower bounds. More design freedom is needed to achieve a fully satisfactory design. The third study is part of a redesign activity in which a detailed model is used.

  17. Morphological analysis of the flippers in the Franciscana dolphin, Pontoporia blainvillei, applying X-ray technique.

    Del Castillo, Daniela Laura; Panebianco, María Victoria; Negri, María Fernanda; Cappozzo, Humberto Luis

    2014-07-01

    Pectoral flippers of cetaceans provide stability and maneuverability during locomotion. Directional asymmetry (DA) is a common feature among odontocete cetaceans, as is sexual dimorphism (SD). For the first time, DA, allometry, physical maturity, and SD of the flipper skeleton of Pontoporia blainvillei were analyzed by X-ray technique. The number of carpals, metacarpals, and phalanges, and morphometric characters from the humerus, radius, ulna, and digit two were studied in franciscana dolphins from Buenos Aires, Argentina. The number of visible epiphyses and their degree of fusion at the proximal and distal ends of the humerus, radius, and ulna were also analyzed. The flipper skeleton was symmetrical, showing a negative allometric trend, with similar growth patterns in both sexes except for the width of the radius (P ≤ 0.01). SD was found in the number of phalanges of digit two (P ≤ 0.01) and in ulna and digit-two lengths: females showed a relatively longer ulna and a relatively shorter digit two, and the opposite occurred in males (P ≤ 0.01). The epiphyseal fusion pattern proved to be a tool for determining dolphin age; franciscana dolphins with a mature flipper were at least four years old. This study indicates that the flippers of franciscana dolphins are symmetrical, that both sexes show a negative allometric trend, that SD is observed in the radius, ulna, and digit two, and that the flipper skeleton allows determination of the age class of the dolphins. PMID:24700648

  18. A Morphing Technique Applied to Lung Motions in Radiotherapy: Preliminary Results

    R. Laurent

    2010-01-01

    Full Text Available Organ motion leads to dosimetric uncertainties during a patient's treatment. Much work has been done to quantify the dosimetric effects of lung movement during radiation treatment, and there is a particular need for a good description and prediction of organ motion. To describe lung motion more precisely, we have examined the possibility of using a computer technique: a morphing algorithm. Morphing is an iterative method which consists of blending one image into another. To evaluate the use of morphing, a four-dimensional computed tomography (4DCT) acquisition of a patient was performed. The lungs were automatically segmented for the different phases, and morphing was performed using the end-inspiration and end-expiration phase scans only. Intermediate morphing files were compared with the 4DCT intermediate images. The results showed good agreement between morphing images and 4DCT images: fewer than 2% of the 512 by 256 voxels were wrongly classified as belonging/not belonging to a lung section. This paper presents preliminary results, and our morphing algorithm needs improvement. We can infer that morphing offers considerable advantages in terms of radiation protection of the patient during the diagnosis phase, handling of artifacts, definition of organ contours and description of organ motion.
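
    At its simplest, generating an intermediate phase between the end-inspiration and end-expiration scans can be illustrated as a cross-dissolve between two lung masks. The paper's algorithm is iterative and more sophisticated; this is only a schematic sketch with made-up 8x8 masks:

```python
import numpy as np

def morph(start, end, alpha):
    """Linear cross-dissolve between two images/masks; alpha in [0, 1].
    A stand-in for the iterative morphing described in the abstract."""
    return (1.0 - alpha) * start + alpha * end

# Toy end-inspiration / end-expiration "lung sections" (binary masks)
insp = np.zeros((8, 8)); insp[1:7, 1:7] = 1.0   # larger lung at inspiration
expi = np.zeros((8, 8)); expi[2:6, 2:6] = 1.0   # smaller lung at expiration

mid = morph(insp, expi, 0.5)                    # halfway "breathing phase"
# Threshold to classify voxels as belonging / not belonging to a lung section
lung_voxels = int((mid >= 0.5).sum())
print(int(insp.sum()), int(expi.sum()), lung_voxels)
```

    Comparing such interpolated masks against the true intermediate 4DCT phase, voxel by voxel, gives exactly the misclassification rate quoted in the abstract.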

  19. Comparison of motion correction techniques applied to functional near-infrared spectroscopy data from children

    Hu, Xiao-Su; Arredondo, Maria M.; Gomba, Megan; Confer, Nicole; DaSilva, Alexandre F.; Johnson, Timothy D.; Shalinsky, Mark; Kovelman, Ioulia

    2015-12-01

    Motion artifacts are the most significant sources of noise in pediatric brain imaging designs and data analyses, especially in applications of functional near-infrared spectroscopy (fNIRS), where they can severely degrade the quality of the acquired data. Different methods have been developed to correct motion artifacts in fNIRS data, but the relative effectiveness of these methods for data from child and infant subjects (which are often significantly noisier than adult data) remains largely unexplored. The issue is further complicated by the heterogeneity of fNIRS data artifacts. We compared the efficacy of the six most prevalent motion artifact correction techniques with fNIRS data acquired from children participating in a language acquisition task: wavelet filtering, spline interpolation, principal component analysis, moving average (MA), correlation-based signal improvement, and a combination of wavelet filtering and MA. The evaluation of five predefined metrics suggests that the MA and wavelet methods yield the best outcomes. These findings elucidate the varied nature of fNIRS data artifacts and the efficacy of artifact correction methods with pediatric populations, and help inform both the theory and practice of optical brain imaging analysis.
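
    The moving average (MA) correction named above is the simplest of the six methods; a minimal sketch on a synthetic channel (the slow sinusoid and noise level are illustrative, not fNIRS parameters from the study):

```python
import numpy as np

def moving_average(signal, window):
    """Smooth a channel with a centered moving average (the MA method).
    Real pipelines handle edges and large spike artifacts more carefully."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 1000)
clean = np.sin(2 * np.pi * 0.2 * t)                 # slow hemodynamic-like signal
noisy = clean + rng.normal(scale=0.5, size=t.size)  # motion/measurement noise
corrected = moving_average(noisy, window=25)

rms_before = np.sqrt(np.mean((noisy - clean) ** 2))
rms_after = np.sqrt(np.mean((corrected - clean) ** 2))
print(f"RMS error before={rms_before:.3f}, after={rms_after:.3f}")
```

    The window length trades artifact suppression against smoothing of the hemodynamic response itself, which is one reason the comparison across methods in the paper matters.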

  20. COMPARATIVE PERFORMANCE MONITORING OF RAINFED WATERSHEDS APPLYING GIS AND RS TECHNIQUES

    ARUN W. DHAWALE

    2012-03-01

    Full Text Available Under the watershed development project of the Ministry of Rural Development, many micro-watersheds have been identified for development and management. However, the Government is handicapped in obtaining data on the performance of these programmes due to the absence of watershed performance studies. Rainfed agriculture is clearly critical to agricultural performance in India; nonetheless, it is difficult to precisely quantify the overall importance of the sector. The widely quoted statistic is that 70% of cultivated area is rainfed, implying that rainfed agriculture is more important than irrigated agriculture. In the present study, two rainfed micro-watersheds, namely the Kolvan valley and Darewadi, are taken as case studies for performance monitoring using GIS and RS techniques. An attempt has been made to highlight the role of GIS and RS in the estimation of runoff from both watersheds by the SCS curve number method. The methodology developed for the research shows that the knowledge extracted from the proposed approach can remove the problem of performance monitoring of micro-watersheds to a great extent. Comparative performance of the two micro-watersheds, which have extreme rainfall conditions, shows that the overall success rate in the Darewadi micro-watershed is higher than in the Kolvan valley.
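
    The SCS curve number runoff estimate mentioned above follows a standard closed-form relation; a minimal sketch (the 75 mm storm and CN = 80 are hypothetical values, not figures from the study):

```python
def scs_runoff(p_mm, cn):
    """SCS curve number method: direct runoff Q (mm) from storm rainfall P (mm).
    S = 25400/CN - 254 (mm); Q = (P - 0.2S)^2 / (P + 0.8S) when P > 0.2S, else 0."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s               # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# Hypothetical example: 75 mm storm over a watershed with CN = 80
print(round(scs_runoff(75.0, 80), 1))  # ~30.9 mm of direct runoff
```

    In the GIS/RS workflow, the curve number itself is derived per land-use/soil polygon from classified imagery and then area-weighted over the watershed.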

  1. A non-intrusive measurement technique applying CARS for concentration measurement in a gas mixing flow

    Yamamoto, Ken; Moriya, Madoka; Kuriyama, Reiko; Sato, Yohei

    2015-01-01

    A coherent anti-Stokes Raman scattering (CARS) microscope system was built and applied to non-intrusive gas concentration measurement of a mixing flow in a millimeter-scale channel. Carbon dioxide and nitrogen were chosen as test fluids, and CARS signals from the fluids were generated by adjusting the wavelengths of the Pump and Stokes beams. The generated CARS signals, whose wavelengths differ from those of the Pump and Stokes beams, were captured by an EM-CCD camera after filtering out the excitation beams. A calibration experiment was performed to confirm the applicability of the built-up CARS system by measuring the intensity of the CARS signal from known concentrations of the samples. After confirming that the measured CARS intensity was proportional to the second power of the concentration, as theoretically predicted, the CARS intensities in the gas mixing flow channel were measured. Ten different measurement points were set and concentrations of both carbon dioxide and nitrog...
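
    The calibration step rests on the quadratic relation I = k C^2 between CARS intensity and concentration. A sketch of fitting k from known samples and then inverting it for an unknown; every number here (concentrations, k, the measured intensity 1.8) is made up for illustration:

```python
import numpy as np

conc = np.array([0.2, 0.4, 0.6, 0.8, 1.0])  # known sample concentrations (a.u.)
k_true = 5.0
intensity = k_true * conc ** 2              # idealized, noise-free CARS signal

# Least-squares fit of I = k * C^2:  k = sum(I * C^2) / sum(C^4)
k_est = np.sum(intensity * conc ** 2) / np.sum(conc ** 4)

# Invert the calibration: infer an unknown concentration from its signal
c_unknown = np.sqrt(1.8 / k_est)
print(round(k_est, 6), round(c_unknown, 6))
```

    With noisy measurements the same least-squares estimator applies; only the fit residuals grow.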

  2. Linear and non-linear control techniques applied to actively lubricated journal bearings

    Nicoletti, R.; Santos, I. F.

    2003-03-01

    The main objectives of actively lubricated bearings are the simultaneous reduction of wear and vibration between rotating and stationary machinery parts. For reducing wear and dissipating vibration energy up to certain limits, one can use conventional hydrodynamic lubrication. For further reduction of shaft vibrations one can use active lubrication, which is based on injecting pressurized oil into the bearing gap through orifices machined in the bearing sliding surface. The design and efficiency of some linear (PD, PI and PID) and one non-linear controller, applied to a tilting-pad journal bearing, are analysed and discussed. Important conclusions are reached about the application of integral controllers, which are responsible for changing the rotor-bearing equilibrium position and consequently the "passive" oil film damping coefficients. Numerical results show an effective vibration reduction of the unbalance response of a rigid rotor, where the PD and the non-linear P controllers show better performance over the frequency range of study (0-80 Hz). The feasibility of eliminating rotor-bearing instabilities (whirl phenomena) by using active lubrication is also investigated, clearly illustrating one of its most promising applications.
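
    The linear controllers discussed (PD, PI, PID) share the standard discrete form below; this generic sketch drives a simple first-order plant, not the rotor-bearing model of the paper, and the gains are arbitrary illustrative choices:

```python
class PID:
    """Discrete PID controller (P, PI, PD or PID by zeroing gains)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                       # integral term
        derivative = (error - self.prev_error) / self.dt       # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a first-order plant x' = -x + u toward a setpoint of 1.0
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):
    u = pid.update(1.0 - x)
    x += (-x + u) * 0.01   # forward-Euler integration of the plant
print(round(x, 3))
```

    The integral term is what shifts the steady-state equilibrium, mirroring the paper's observation that integral action changes the rotor-bearing equilibrium position.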

  4. PIE in Hot Cells and Poolside: Facilities and Techniques applied in Argentina. A brief overview

    Full text: Argentina has covered a wide range of activities concerning PIE, including visual inspection of fuel elements (FE) and of internal components of nuclear power plants (NPP) and research reactors (RR). These activities are performed both in the poolside bay (spent fuel pool) at each nuclear power plant and in external hot cell laboratories. Argentina has two PHWR power reactors (a CANDU type at C.N. Embalse and a KWU prototype at C.N. Atucha 1), which started operation in the early 80s and 70s, respectively. Argentina has also operated one 10 MW research reactor (RA-3) for radioisotope production for more than 40 years. The PIE activities have consistently covered the basic requirements for the control and improvement of the FE and for the surveillance programs assessing the behavior of the critical internal components of the NPP (pressure vessel or tubes, control rods, guide tubes, etc.). PIE activities for FE began with the application of techniques to evaluate fission product release in the primary circuit and to localize failed FE in the core: on-line sipping tests and exhaustive underwater visual inspection, including metrology of the dimensional changes of the components. In some cases it was necessary to make equipment available for disassembling the fuel element for further analysis. Other cases involved studies to determine the cause of the primary failure, discriminating among fabrication flaws, flaws related to PCI, or operation outside the design range. The hot cells laboratory is divided into two installations, i.e. the physical hot cells and the radiochemical hot cells. The physical hot cells (CELCA) consist of one beta-gamma cell for structural materials with five working positions and two alpha-tight boxes for fuel material testing with four working positions. An optical microscopy bench and a scanning electron microscope are also available in these cells. The following destructive tests for PIE are available: - Metallurgical Test

  5. An Optimized Technique of Increasing the Performance of Network Adapter on EML Layer

    Prashanth L

    2012-08-01

    Full Text Available The Simple Network Adapter, which acts as an interface between the transaction server and the network elements, initially communicates over the channel through TCP PDUs. The drawback of the TCP PDU approach is the need to maintain channel contention and to reserve channel bandwidth. Furthermore, certain features and versions of network elements communicate by receiving XML over a socket. Since it is not possible to change the entire framework, the framework is instead updated to support an XML-over-Socket (XOS) format. The XOS implementation is performed in Java, running mainly on the JVM, so that deployment across machines becomes easier and a good communication bridge is formed between them. The simple network adapter developed should support the operations of the north-bound server and provide an established, authorized, secured and reliable portal. The interface should deliver good performance in meeting network demands and in performing the required object conversions.

  6. BiasMDP: Carrier lifetime characterization technique with applied bias voltage

    Jordan, Paul M., E-mail: paul.jordan@namlab.com; Simon, Daniel K.; Dirnstorfer, Ingo [Nanoelectronic Materials Laboratory gGmbH (NaMLab), Nöthnitzer Straße 64, 01187 Dresden (Germany); Mikolajick, Thomas [Nanoelectronic Materials Laboratory gGmbH (NaMLab), Nöthnitzer Straße 64, 01187 Dresden (Germany); Technische Universität Dresden, Institut für Halbleiter- und Mikrosystemtechnik, 01062 Dresden (Germany)

    2015-02-09

    A characterization method is presented which determines fixed charge and interface defect densities in passivation layers. The method is based on a bias voltage applied to an electrode on top of the passivation layer. During a voltage sweep, the effective carrier lifetime is measured by means of microwave-detected photoconductivity. When the external voltage compensates the electric field of the fixed charges, the lifetime drops to a minimum value. This minimum correlates with the flat-band voltage determined in reference impedance measurements. This correlation is measured on p-type silicon passivated by Al₂O₃ and Al₂O₃/HfO₂ stacks with different fixed charge densities and layer thicknesses. Negative fixed charges with densities of 3.8 × 10¹² cm⁻² and 0.7 × 10¹² cm⁻² are determined for Al₂O₃ layers without and with an ultra-thin HfO₂ interface, respectively. The voltage and illumination dependencies of the effective carrier lifetime are simulated with Shockley-Read-Hall surface recombination at continuous defects with parabolic capture cross-section distributions for electrons and holes. The best match with the measured data is achieved with a very low interface defect density of 1 × 10¹⁰ eV⁻¹ cm⁻² for the Al₂O₃ sample with HfO₂ interface.

  7. An empirical comparison of stock identification techniques applied to striped bass

    Waldman, John R.; Richards, R. Anne; Schill, W. Bane; Wirgin, Isaac; Fabrizio, Mary C.

    1997-01-01

    Managers of migratory striped bass stocks that mix along the Atlantic coast of the USA require periodic estimates of the relative contributions of the individual stocks to coastal mixed-stock fisheries; however, to date, a standard approach has not been adopted. We compared the performances of alternative stock identification approaches, using samples taken from the same sets of fish. Reference (known) samples were collected from three Atlantic coast spawning systems: the Hudson River, Chesapeake Bay, and the Roanoke River. Striped bass of mixed-stock origin were collected from eastern Long Island, New York, and were used as test (unknown) samples. The approaches applied were discriminant analysis of morphometric data and of meristic data, logistic regression analysis of combined meristic and morphometric data, discriminant analysis of scale-shape features, discriminant analysis of immunoassay data, and mixed-stock analysis of mitochondrial DNA (mtDNA) data. Overall correct classification rates of reference samples ranged from 94% to 66% when just the Hudson and Chesapeake stocks were considered, and were comparable when the Chesapeake and Roanoke stocks were grouped as the "southern" stock. When all three stocks were treated independently, correct classification rates ranged from 82% to 49%. Despite the moderate range in correct classification rates, bias due to misallocation was relatively low for all methods, suggesting that the resulting stock composition estimates should be fairly accurate. However, relative contribution estimates for the mixed-stock sample varied widely (e.g., from 81% to 47% for the Hudson River stock when only the Hudson River and Chesapeake Bay stocks were considered). Discrepancies may be related to the reliance of all of these approaches (except mtDNA) on phenotypic features. Our results support future use of either a morphometrics-based approach (among the phenotypic methods) or a genotypic approach based on mtDNA analysis. We further
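
    The discriminant analysis of morphometric data used above can be sketched with scikit-learn's LinearDiscriminantAnalysis; the three synthetic "stocks" below are illustrative stand-ins with deliberately overlapping feature distributions, not the striped bass data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Three hypothetical stocks, 4 morphometric features each, overlapping means
n = 100
stocks = [rng.normal(loc=m, scale=1.0, size=(n, 4)) for m in (0.0, 1.5, 3.0)]
X = np.vstack(stocks)
y = np.repeat([0, 1, 2], n)

lda = LinearDiscriminantAnalysis()
# Cross-validated mean accuracy = the "correct classification rate"
rate = cross_val_score(lda, X, y, cv=5).mean()
print(f"correct classification rate: {rate:.2f}")
```

    Classifying a mixed-stock sample with the fitted model and tallying the predicted labels then yields the relative-contribution estimates discussed in the abstract.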

  8. Adaptive and Predictive Control of Liquid-Liquid Extractors Using Neural-Based Instantaneous Linearization Technique

    Mjalli, F. S.

    2006-01-01

    Nonlinearity of the extraction process is addressed via the application of instantaneous linearization to control the extract and raffinate concentrations. Two feed-forward neural networks with delayed inputs and outputs were trained and validated to capture the dynamics of the extraction process. These nonlinear models were then used, through an instantaneous linearization algorithm, in two control schemes. The self-tuning adaptive control strategy was compared to an approximate model predi...

  9. Using adaptive techniques to validate and correct an audience driven design of web sites

    Casteleyn, Sven; Garrigós Fernández, Irene; De Troyer, Olga

    2004-01-01

    An audience driven philosophy for web site design takes the different target audiences as an explicit starting point, and organizes the basic navigation structure accordingly. However, for the designer it is not always easy, or sometimes even impossible, to assess the different requirements of the different target audiences correctly. In this paper, we show how to correct for such possible flaws using adaptive behavior. A mechanism for detecting both missing and superfluous information in a c...

  10. Fuzzy-Based Adaptive Hybrid Burst Assembly Technique for Optical Burst Switched Networks

    Abubakar Muhammad Umaru; Muhammad Shafie Abd Latiff; Yahaya Coulibaly

    2014-01-01

    The optical burst switching (OBS) paradigm is perceived as an intermediate switching technology for future all-optical networks. Burst assembly, the first process in OBS, is the focus of this paper. An intelligent hybrid burst assembly algorithm based on fuzzy logic is proposed. The new algorithm is evaluated against the traditional hybrid burst assembly algorithm and the fuzzy adaptive threshold (FAT) burst assembly algorithm via simulation. Simulation results sh...
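
    The traditional hybrid burst assembly that the fuzzy algorithm is compared against closes a burst on whichever fires first: a size threshold or a timer. A minimal sketch (packet sizes, thresholds and the lazy timer check are illustrative simplifications; a real assembler's timer fires independently of arrivals):

```python
def assemble_bursts(packets, max_size, timeout):
    """Hybrid burst assembly: a burst is closed when either the accumulated
    size reaches max_size or timeout elapses since the first queued packet.
    packets: list of (arrival_time, size). Returns a list of bursts.
    Note: the timer is checked lazily on the next arrival in this sketch."""
    bursts, current, start = [], [], None
    for t, size in packets:
        if current and t - start >= timeout:          # timer trigger
            bursts.append(current)
            current, start = [], None
        if start is None:
            start = t
        current.append((t, size))
        if sum(s for _, s in current) >= max_size:    # size-threshold trigger
            bursts.append(current)
            current, start = [], None
    if current:
        bursts.append(current)                        # flush the remainder
    return bursts

pkts = [(0.0, 400), (0.1, 400), (0.2, 400), (5.0, 100), (9.0, 100)]
bursts = assemble_bursts(pkts, max_size=1000, timeout=3.0)
print([len(b) for b in bursts])
```

    The fuzzy variant proposed in the paper adapts these two thresholds to traffic conditions instead of keeping them fixed.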

  11. An Example of a Hakomi Technique Adapted for Functional Analytic Psychotherapy

    Collis, Peter

    2012-01-01

    Functional Analytic Psychotherapy (FAP) is a model of therapy that lends itself to integration with other therapy models. This paper aims to provide an example to assist others in assimilating techniques from other forms of therapy into FAP. A technique from the Hakomi Method is outlined and modified for FAP. As, on the whole, psychotherapy…

  12. A technique for improved stability of adaptive feedforward controllers without detailed uncertainty measurements

    Model errors in adaptive controllers for the reduction of broadband noise and vibrations may lead to unstable systems or increased error signals. Previous research on active structures with small damping has shown that the addition of a low-authority controller which increases damping in the system may lead to improved performance of an adaptive, high-authority controller. Other researchers have suggested the use of frequency dependent regularization based on measured uncertainties. In this paper an alternative method is presented that avoids the disadvantages of these methods, namely the additional complex hardware and the need to obtain detailed information on the uncertainties. An analysis is made of an adaptive feedforward controller in which a difference exists between the secondary path and the model as used in the controller. The real parts of the eigenvalues that determine the stability of the system are expressed in terms of the amount of uncertainty and the singular values of the secondary path. Modifications of the feedforward control scheme are suggested that aim to improve performance without requiring detailed uncertainty measurements. (paper)
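
    A standard instance of the adaptive feedforward scheme analyzed above is filtered-x LMS, where the reference is filtered through a *model* of the secondary path before driving the weight update. The deliberate mismatch between S and S_hat below is exactly the kind of model error whose effect on stability the paper studies; all tap values and gains are illustrative:

```python
import numpy as np

n = 4000
x = np.sin(2 * np.pi * 0.05 * np.arange(n))     # tonal reference signal
d = 0.8 * np.concatenate(([0.0, 0.0], x[:-2]))  # disturbance at the error sensor
S = np.array([0.0, 0.9, 0.3])                   # true secondary path (FIR taps)
S_hat = np.array([0.0, 0.8, 0.35])              # imperfect model of S

L, mu = 8, 0.01                                 # filter length, step size
w = np.zeros(L)                                 # adaptive FIR weights
xbuf = np.zeros(L)                              # reference history
ybuf = np.zeros(len(S))                         # control-output history
xhbuf = np.zeros(len(S_hat))                    # history for filtered reference
fxbuf = np.zeros(L)                             # filtered-reference history
e = np.zeros(n)
for k in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[k]
    y = w @ xbuf                                # control output
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e[k] = d[k] + S @ ybuf                      # residual at the error sensor
    xhbuf = np.roll(xhbuf, 1); xhbuf[0] = x[k]
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = S_hat @ xhbuf
    w -= mu * e[k] * fxbuf                      # FxLMS weight update
print(np.mean(e[:200] ** 2), np.mean(e[-200:] ** 2))
```

    Here the model error is small enough (phase error well under 90 degrees) for convergence; the paper's analysis concerns exactly how large that mismatch may grow before the update becomes unstable.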

  13. Adaption of egg and larvae sampling techniques for lake sturgeon and broadcast spawning fishes in a deep river

    Roseman, Edward F.; Kennedy, Gregory W.; Craig, Jaquelyn; Boase, James; Soper, Karen

    2011-01-01

    In this report we describe how we adapted two techniques for sampling lake sturgeon (Acipenser fulvescens) and other fish early life history stages to meet our research needs in the Detroit River, a deep, flowing Great Lakes connecting channel. First, we developed a buoy-less method for sampling fish eggs and spawning activity using egg mats deployed on the river bottom. The buoy-less method allowed us to fish gear in areas frequented by boaters and recreational anglers, thus eliminating surface obstructions that interfered with recreational and boating activities. The buoy-less method also reduced gear loss due to drift when masses of floating aquatic vegetation would accumulate on buoys and lines, increasing the drag on the gear and pulling it downstream. Second, we adapted a D-frame drift net system formerly employed in shallow streams to assess larval lake sturgeon dispersal for use in the deeper (>8 m) Detroit River using an anchor and buoy system.

  14. Feature-Based Adaptive Tolerance Tree (FATT): An Efficient Indexing Technique for Content-Based Image Retrieval Using Wavelet Transform

    AnandhaKumar, Dr P

    2010-01-01

    This paper introduces a novel indexing and access method, called Feature- Based Adaptive Tolerance Tree (FATT), using wavelet transform is proposed to organize large image data sets efficiently and to support popular image access mechanisms like Content Based Image Retrieval (CBIR).Conventional database systems are designed for managing textual and numerical data and retrieving such data is often based on simple comparisons of text or numerical values. However, this method is no longer adequate for images, since the digital presentation of images does not convey the reality of images. Retrieval of images become difficult when the database is very large. This paper addresses such problems and presents a novel indexing technique, Feature Based Adaptive Tolerance Tree (FATT), which is designed to bring an effective solution especially for indexing large databases. The proposed indexing scheme is then used along with a query by image content, in order to achieve the ultimate goal from the user point of view that ...

  15. Breathing adapted radiotherapy: final clinical results of the 2003 programme for the support of costly innovative techniques (STIC)

    The authors report a clinical comparison between breathing-adapted conformal radiotherapy (BART) and conventional conformal radiotherapy in the case of lung and breast cancers. The assessment comprised a clinical examination, a thoracic radiography, respiratory function tests, a thoracic CT scan at different time points (3, 6, 12, 18 and 24 months), and dosimetric criteria for the tumour target volumes and the different thoracic organs at risk. Data have been collected from more than six hundred patients. Breathing-adapted techniques allow acute and late toxicity to be reduced, notably for the lung, heart and oesophagus during lung irradiation. They are less beneficial for breast irradiation, but could be important for radiotherapy of the left breast. Short communication

  16. APPLIED PHYTO-REMEDIATION TECHNIQUES USING HALOPHYTES FOR OIL AND BRINE SPILL SCARS

    M.L. Korphage; Bruce G. Langhus; Scott Campbell

    2003-03-01

    Produced salt water from historical oil and gas production was often managed with inadequate care and unfortunate consequences. In Kansas, production practices in the 1930s and 1940s--before statewide anti-pollution laws--were such that fluids were often produced to surface impoundments, where the oil would segregate from the salt water. The oil was pumped off the pits, and the salt water was able to infiltrate the subsurface soil zones and underlying bedrock. Over the years, oil producing practices changed so that segregation of fluids was accomplished in steel tanks and salt water was isolated from the natural environment. But before that could happen, significant areas of the state were scarred by salt water. These areas are now in need of economical remediation. Remediation of salt-scarred land can be facilitated with soil amendments, land management, and selection of appropriate salt-tolerant plants. Current research on the salt scars around the old Leon Waterflood, in Butler County, Kansas, shows the relative efficiency of remediation options. Based upon these research findings, it is possible to recommend cost-efficient remediation techniques for slight, medium, and heavy salt water damage to soil. Slight salt damage includes soils with electrical conductivity (EC) values of 4.0 mS/cm or less. Operators can treat these soils with sufficient amounts of gypsum, install irrigation systems, and till the soil. Appropriate plants can be introduced via transplants or seed. Medium salt damage includes soils with EC values between 4.0 and 16 mS/cm. Operators will add amendments of gypsum, till the soil, and arrange for irrigation. Some particularly salt-tolerant plants can be added, but most planting ought to be reserved until the second season of remediation. Severe salt damage includes soil with EC values in excess of 16 mS/cm. Operators will add at least part of the gypsum required, till the soil, and arrange for irrigation. The following
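    The EC-based damage classes described above can be sketched as a small decision rule. The thresholds (4.0 and 16 mS/cm) come from the abstract; the function name and the condensed treatment notes are illustrative only:

```python
def classify_salt_damage(ec_ms_cm):
    """Classify salt-water damage to soil by electrical conductivity (EC).

    Thresholds follow the abstract: slight <= 4.0 mS/cm, medium 4.0-16 mS/cm,
    severe > 16 mS/cm. The remediation notes paraphrase the text; actual field
    prescriptions (gypsum amounts, irrigation design) are site-specific.
    """
    if ec_ms_cm <= 4.0:
        return ("slight", "gypsum amendment, irrigation, tillage; plant or seed immediately")
    elif ec_ms_cm <= 16.0:
        return ("medium", "gypsum amendment, tillage, irrigation; defer most planting to second season")
    else:
        return ("severe", "partial gypsum amendment, tillage, irrigation; planting deferred")
```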

  17. Statistical Mechanics Ideas and Techniques Applied to Selected Problems in Ecology

    Hugo Fort

    2013-11-01

    Full Text Available Ecosystem dynamics provides an interesting arena for the application of a plethora of concepts and techniques from statistical mechanics. Here I review three examples, each corresponding to an important problem in ecology. First, I start with an analytical derivation of the clumpy patterns in species relative abundances (SRA) empirically observed in several ecological communities involving a high number n of species, a phenomenon which has puzzled ecologists for decades. An interesting point is that this derivation uses results obtained from a statistical mechanics model for ferromagnets. Second, going beyond the mean field approximation, I study the spatial version of a popular ecological model involving just one species representing vegetation. The goal is to address the phenomenon of catastrophic shifts—gradual cumulative variations in some control parameter that suddenly lead to an abrupt change in the system—illustrating it by means of the process of desertification of arid lands. The focus is on the aggregation processes and the effects of diffusion that, combined, lead to the formation of non-trivial spatial vegetation patterns. It is shown that different quantities—like the variance, the two-point correlation function and the patchiness—may serve as early warnings for the desertification of arid lands. Remarkably, at the onset of a desertification transition the distribution of vegetation patches exhibits the scale invariance typical of many physical systems in the vicinity of a phase transition. I comment on similarities of and differences between these catastrophic shifts and paradigmatic thermodynamic phase transitions like the liquid-vapor change of state for a fluid. Third, I analyze the case of many species interacting in space. I choose tropical forests, which are mega-diverse ecosystems that exhibit remarkable dynamics. Therefore these ecosystems represent a research paradigm both for studies of complex systems dynamics as well as to

  18. Lipase immobilized by different techniques on various support materials applied in oil hydrolysis

    VILMA MINOVSKA

    2005-04-01

    Full Text Available Batch hydrolysis of olive oil was performed by Candida rugosa lipase immobilized on Amberlite IRC-50 and Al2O3. These two supports were selected out of 16 carriers: inorganic materials (sand, silica gel, infusorial earth, Al2O3), inorganic salts (CaCO3, CaSO4), ion-exchange resins (Amberlite IRC-50 and IR-4B, Dowex 2X8), a natural resin (colophony), a natural biopolymer (sodium alginate), synthetic polymers (polypropylene, polyethylene) and zeolites. Lipase immobilization was carried out by simple adsorption, adsorption followed by cross-linking, adsorption on ion-exchange resins, combined adsorption and precipitation, pure precipitation and gel entrapment. The suitability of the supports and techniques for the immobilization of lipase was evaluated by estimating the enzyme activity, protein loading, immobilization efficiency and reusability of the immobilizates. Most of the immobilizates exhibited either a low enzyme activity or difficulties during the hydrolytic reaction. Only those prepared by ionic adsorption on Amberlite IRC-50 and by combined adsorption and precipitation on Al2O3 showed better activity, 2000 and 430 U/g support, respectively, and demonstrated satisfactory behavior when used repeatedly. The hydrolysis was studied as a function of several parameters: surfactant concentration, enzyme concentration, pH and temperature. The immobilized preparation with Amberlite IRC-50 was stable and active over the whole range of pH (4 to 9) and temperature (20 to 50 °C), demonstrating a 99% degree of hydrolysis. In repeated usage, it was stable and active, having a half-life of 16 batches, which corresponds to an operation time of 384 h. Its storage stability was remarkable too, since after 9 months it had lost only 25% of the initial activity. The immobilizate with Al2O3 was less stable and less active. At optimal environmental conditions, the degree of hydrolysis did not exceed 79%. In repeated usage, after the fourth batch, the degree of

  19. Radiation-induced signals of gypsum crystals analysed by ESR and TL techniques applied to dating

    Aydas, Canan [Turkish Atomic Energy Authority, Saraykoey Nuclear Research and Training Center, 06983 Saray-Kazan, Ankara (Turkey); Engin, Birol, E-mail: birol_engin65@yahoo.co [Turkish Atomic Energy Authority, Saraykoey Nuclear Research and Training Center, 06983 Saray-Kazan, Ankara (Turkey); Aydin, Talat [Turkish Atomic Energy Authority, Saraykoey Nuclear Research and Training Center, 06983 Saray-Kazan, Ankara (Turkey)

    2011-02-15

    Natural crystals of terrestrial gypsum were investigated concerning the radiation effects on electron spin resonance (ESR) and thermoluminescence (TL) properties and their application to geological dating. ESR signals of Fe3+, Mn2+, G1 (SO3−, g = 2.003) and G2 (SO4−, g∥ = 2.018, g⊥ = 2.009) centers were observed. The thermal stability and dose response of the ESR signals were found to be suitable for an age determination using the signal at g = 2.009. The intensity of this center increased with γ-radiation, and the additive dose method for this ESR center yielded an accumulated dose (GD) of 67.4 ± 10.1 Gy. Using U, Th and K contents plus the cosmic-ray contribution, a dose rate of 1.92 ± 0.22 mGy/year has been obtained. We have determined the ESR age of the gypsums to be (35 ± 4) × 10³ years. TL peaks at 157 and 278 °C were observed. Using the initial rise method, the thermal activation energy of the 278 °C TL peak was found to be underestimated, probably due to thermal quenching. Activation energies and frequency factors obtained by the method of varying the heating rate indicate a lifetime of 4.09 × 10⁷ years (at 15 °C) for the 278 °C peak. The additive dose method applied to this TL peak yielded a GD of 75 ± 11 Gy. The corresponding TL age using the 278 °C TL peak was found to be (39 ± 5) × 10³ years for the gypsum sample. The TL age of this sample is consistent with the ESR age within experimental error limits. The obtained ESR and TL ages are not consistent with the expectations of geologists. This contradiction is probably due to repeated recrystallisation of the gypsum samples under environmental conditions after their formation in the upper Miocene-Pliocene Epoch.

  20. Radiation-induced signals of gypsum crystals analysed by ESR and TL techniques applied to dating

    Aydaş, Canan; Engin, Birol; Aydın, Talat

    2011-02-01

    Natural crystals of terrestrial gypsum were investigated concerning the radiation effects on electron spin resonance (ESR) and thermoluminescence (TL) properties and their application to geological dating. ESR signals of Fe3+, Mn2+, G1 (SO3−, g = 2.003) and G2 (SO4−, g∥ = 2.018, g⊥ = 2.009) centers were observed. The thermal stability and dose response of the ESR signals were found to be suitable for an age determination using the signal at g = 2.009. The intensity of this center increased with γ-radiation, and the additive dose method for this ESR center yielded an accumulated dose (GD) of 67.4 ± 10.1 Gy. Using U, Th and K contents plus the cosmic-ray contribution, a dose rate of 1.92 ± 0.22 mGy/year has been obtained. We have determined the ESR age of the gypsums to be (35 ± 4) × 10³ years. TL peaks at 157 and 278 °C were observed. Using the initial rise method, the thermal activation energy of the 278 °C TL peak was found to be underestimated, probably due to thermal quenching. Activation energies and frequency factors obtained by the method of varying the heating rate indicate a lifetime of 4.09 × 10⁷ years (at 15 °C) for the 278 °C peak. The additive dose method applied to this TL peak yielded a GD of 75 ± 11 Gy. The corresponding TL age using the 278 °C TL peak was found to be (39 ± 5) × 10³ years for the gypsum sample. The TL age of this sample is consistent with the ESR age within experimental error limits. The obtained ESR and TL ages are not consistent with the expectations of geologists. This contradiction is probably due to repeated recrystallisation of the gypsum samples under environmental conditions after their formation in the upper Miocene-Pliocene Epoch.
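    The age arithmetic behind both the ESR and TL estimates is a simple quotient of accumulated dose over annual dose rate. A minimal sketch using the values quoted above (the function name is mine; real dating work also propagates the quoted uncertainties):

```python
def trapped_charge_age(accumulated_dose_gy, dose_rate_mgy_per_yr):
    """Trapped-charge (ESR/TL) age = accumulated dose / annual dose rate.

    Dose is in Gy, dose rate in mGy/year, so the result is in years.
    """
    return accumulated_dose_gy / (dose_rate_mgy_per_yr * 1e-3)

esr_age = trapped_charge_age(67.4, 1.92)  # ~35,100 years, i.e. (35 ± 4) × 10³
tl_age = trapped_charge_age(75.0, 1.92)   # ~39,100 years, i.e. (39 ± 5) × 10³
```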

  1. Research recommendations for applying vitamin A-labelled isotope dilution techniques to improve human vitamin A nutrition.

    Tanumihardjo, Sherry A; Kurpad, Anura V; Hunt, Janet R

    2014-01-01

    The current use of serum retinol concentrations as a measurement of subclinical vitamin A deficiency is unsatisfactory for many reasons. The best technique available for vitamin A status assessment in humans is the measurement of total body pool size. Pool size is measured by the administration of retinol labelled with stable isotopes of carbon or hydrogen that are safe for human subjects, with subsequent measurement of the dilution of the labelled retinol within the body pool. However, the isotope techniques are time-consuming, technically challenging, and relatively expensive. There is also a need to assess different types of tracers and doses, and to establish clear guidelines for the use and interpretation of this method in different populations. Field-friendly improvements are desirable to encourage the application of this technique in developing countries where the need is greatest for monitoring the risk of vitamin A deficiency, the effectiveness of public health interventions, and the potential for hypervitaminosis A due to combined supplementation and fortification programs. These techniques should be applied to validate other less technical methods of assessing vitamin A deficiency. Another area of public health relevance for this technique is to understand the bioconversion of β-carotene to vitamin A, and its relation to existing vitamin A status, for future dietary diversification programs. PMID:25537106
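    The dilution principle at the core of the technique can be sketched as a mass balance: if a known tracer dose mixes completely with the body pool, the pool size follows from the labelled fraction measured at equilibrium. This is a deliberately simplified illustration that omits the absorption, catabolism and serum:liver partitioning corrections real protocols apply; the function name and variables are hypothetical:

```python
def vitamin_a_pool_estimate(tracer_dose_umol, labelled_fraction):
    """Simplified isotope-dilution mass balance (illustrative only).

    If a tracer dose D mixes completely with an unlabelled body pool P,
    the fraction of retinol that is labelled at equilibrium is
    f = D / (D + P), so P = D * (1 - f) / f. Published retinol isotope
    dilution protocols add several correction factors omitted here.
    """
    return tracer_dose_umol * (1.0 - labelled_fraction) / labelled_fraction
```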

  2. A New Optimized Data Clustering Technique using Cellular Automata and Adaptive Central Force Optimization (ACFO)

    G. Srinivasa Rao

    2015-06-01

    Full Text Available As clustering techniques are gaining importance today, we propose a new clustering technique based on ACFO and cellular automata. A cellular automaton characterizes the state of a cell at a specific moment using data such as the states of a reference cell and its adjoining cells, the total number of cells, constraints, the transition function and the neighbourhood calculation. To determine the state of each cell, morphological functions are executed on the image. In accordance with the four stages of the morphological process, the rural and the urban areas are grouped separately. To avoid stochastic disturbances, the threshold is optimized by means of ACFO. The test results demonstrate the superior performance of the proposed technique. Its performance is assessed on an additional set of images and compared with traditional methods such as CFO (Central Force Optimization) and PSO (Particle Swarm Optimization).

  3. Scrap Cans Assayed in 55-Gallon Drums by Adapted Q2 Technique

    Salaymeh, S.R.

    2001-07-24

    This report describes an alternate assay technique developed to perform batch nondestructive assay (NDA) of ten scrap cans at a time using the Q2. It also compares the results for one batch of ten scrap cans assayed individually at the 324-M assay station with those obtained using the batch technique.

  4. Low Bit-Rate Image Compression using Adaptive Down-Sampling technique

    V.Swathi; Prof. K ASHOK BABU

    2011-01-01

    In this paper, we use a practical approach of uniform down-sampling in image space while making the sampling adaptive by spatially varying, directional low-pass pre-filtering. The resulting down-sampled, pre-filtered image remains a conventional square sample grid and, thus, can be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then up-converts it to the original resolut...

  5. A comparative study between a high-gain interconnected observer and an adaptive observer applied to IM-based WECS

    Naifar, Omar; Boukettaya, Ghada; Oualha, Abdelmajid; Ouali, Abderrazak

    2015-05-01

    This paper is devoted to the investigation of the potential of induction motor sensorless strategies in speed control applications. A comparison study is carried out between two observation approaches dedicated to speed control strategies of induction machine (IM)-based wind energy conversion systems (WECS) under parametric variations: i) the adaptive observer approach, which is based on a speed adaptation law, and ii) the interconnected observer, which offers robustness and stability of the system with reduced CPU time. The comparison is made with respect to four performance criteria: stability, robustness to variations of the machine inductances, robustness to variations of the machine resistances, and feasibility of the torque estimation. It has been found that the introduced interconnected observer exhibits a higher performance than the traditional adaptive one with respect to the above-cited comparison criteria.

  6. Adaptive Analog-to-Digital Conversion and pre-correlation Interference Mitigation Techniques in a GNSS receiver

    Lotz, Thorsten

    2008-01-01

    The objective of this diploma thesis was the development of pre-correlation interference mitigation techniques for a GNSS receiver. Since the developed algorithms shall be implemented in the real-world DLR Galileo receiver, some pre-defined parameters were given that respect the specifics of the hardware on which these algorithms shall run. Thus, an ideal 20 dB AGC and an ideal 14-bit ADC were available for the adaptive A/D conversion; gain steering should be performed on the ADC o...

  7. Técnicas quirúrgicas periodontales aplicadas a la implantología Periodontal surgical techniques applied to implantology

    L Mateos

    2003-08-01

    Full Text Available The morphological and functional similarity between peri-implant and periodontal tissues has made it possible to adapt techniques in common use in periodontics to the field of implantology. Correct management of the peri-implant tissues, with the aim of improving the peri-implant environment both for aesthetic purposes and to facilitate proper maintenance, is nowadays common practice in implant therapy. The aim of this article is to review the literature concerning these concepts, as well as the different surgical techniques employed in periodontal therapy that have been applied to implantology.

  8. Adaptive critic learning techniques for engine torque and air-fuel ratio control.

    Liu, Derong; Javaherian, Hossein; Kovalenko, Olesia; Huang, Ting

    2008-08-01

    A new approach for engine calibration and control is proposed. In this paper, we present our research results on the implementation of adaptive critic designs for self-learning control of automotive engines. A class of adaptive critic designs that can be classified as (model-free) action-dependent heuristic dynamic programming is used in this research project. The goals of the present learning control design for automotive engines include improved performance, reduced emissions, and maintained optimum performance under various operating conditions. Using the data from a test vehicle with a V8 engine, we developed a neural network model of the engine and neural network controllers based on the idea of approximate dynamic programming to achieve optimal control. We have developed and simulated self-learning neural network controllers for both engine torque (TRQ) and exhaust air-fuel ratio (AFR) control. The goal of TRQ control and AFR control is to track the commanded values. For both control problems, excellent neural network controller transient performance has been achieved. PMID:18632389

  9. Low Bit-Rate Image Compression using Adaptive Down-Sampling technique

    V.Swathi

    2011-09-01

    Full Text Available In this paper, we use a practical approach of uniform down-sampling in image space while making the sampling adaptive by spatially varying, directional low-pass pre-filtering. The resulting down-sampled, pre-filtered image remains a conventional square sample grid and, thus, can be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then up-converts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of the directional low-pass pre-filtering. The proposed compression approach of collaborative adaptive down-sampling and up-conversion (CADU) outperforms JPEG 2000 in PSNR measure at low to medium bit rates and achieves superior visual quality as well. The superior low bit-rate performance of the CADU approach suggests that over-sampling not only wastes hardware resources and energy but can also be counterproductive to image quality under a tight bit budget.
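    A highly simplified sketch of the encoder-side idea: low-pass pre-filter to suppress aliasing, then uniformly down-sample by 2. A fixed separable binomial filter stands in for the paper's spatially varying directional pre-filter, and plain pixel replication stands in for the constrained least-squares autoregressive up-conversion; both substitutions are my own simplifications, not the CADU algorithm itself:

```python
import numpy as np

def prefilter_and_downsample(img):
    """Low-pass pre-filter then uniform 2x down-sampling (illustrative)."""
    k = np.array([0.25, 0.5, 0.25])  # separable 3x3 binomial kernel
    padded = np.pad(img.astype(float), 1, mode="edge")
    # horizontal then vertical pass of the separable filter
    tmp = padded[:, :-2] * k[0] + padded[:, 1:-1] * k[1] + padded[:, 2:] * k[2]
    low = tmp[:-2, :] * k[0] + tmp[1:-1, :] * k[1] + tmp[2:, :] * k[2]
    return low[::2, ::2]

def upconvert(small, shape):
    """Pixel-replication up-conversion back to the original grid (stand-in)."""
    rows = np.repeat(small, 2, axis=0)[:shape[0], :]
    return np.repeat(rows, 2, axis=1)[:, :shape[1]]
```

The round trip preserves the square sample grid, which is why the low-resolution image can pass through an unmodified codec.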

  10. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which is similar in form to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems

  11. Base isolation technique for tokamak type fusion reactor using adaptive control

    In this paper, concerning the isolation of heavy structures such as nuclear fusion reactors, a control rule for reducing the response acceleration and the relative displacement simultaneously was formulated, and the aseismic performance was improved by employing an adaptive control method that changes the damping factors of the system adaptively at every moment. The control rule was studied by computer simulation, and the aseismic effect was evaluated in an experiment employing a scale model. As a result, the following conclusions were obtained. (1) By employing the control rule presented in this paper, both the absolute acceleration and the relative displacement can be reduced simultaneously without making the system unstable. (2) By introducing this control rule in a scale model assuming a Tokamak-type fusion reactor, the response acceleration can be suppressed to 78% and the relative displacement to 79% of the values obtained with the conventional aseismic method. (3) The sensitivities of the absolute acceleration and the relative displacement with respect to the control gain are not equal. However, by employing a relative weighting factor between the absolute acceleration and the relative displacement, it is possible to increase the control capability for any kind of objective structure and appliance. (author)

  12. An adaptive threshold based image processing technique for improved glaucoma detection and classification.

    Issac, Ashish; Partha Sarathi, M; Dutta, Malay Kishore

    2015-11-01

    Glaucoma is an optic neuropathy which is one of the main causes of permanent blindness worldwide. This paper presents an automatic image processing based method for detection of glaucoma from digital fundus images. In the proposed work, the discriminatory parameters of glaucoma infection, such as the cup to disc ratio (CDR), neuro-retinal rim (NRR) area and blood vessels in different regions of the optic disc, have been used as features and fed as inputs to learning algorithms for glaucoma diagnosis. These features, which show discriminatory changes with the occurrence of glaucoma, are strategically used for training the classifiers to improve the accuracy of identification. The segmentation of the optic disc and cup is based on an adaptive threshold of the pixel intensities lying in the optic nerve head region. Unlike existing methods, the proposed algorithm uses an adaptive threshold derived from local features of the fundus image for segmentation of the optic cup and optic disc, making it invariant to image quality and noise content, which may find wider acceptability. The experimental results indicate that such features are more significant in comparison to the statistical or textural features considered in existing works. The proposed work achieves an accuracy of 94.11% with a sensitivity of 100%. A comparison of the proposed work with existing methods indicates that the proposed approach has improved accuracy of glaucoma classification from a digital fundus image, which may be considered clinically significant. PMID:26321351
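    The cup-to-disc ratio feature can be illustrated with a toy adaptive-threshold segmentation. The mean-plus-k-sigma thresholds and the k values below are hypothetical stand-ins for the paper's locally adaptive scheme, not its actual algorithm; they merely show how CDR falls out of two nested intensity masks:

```python
import numpy as np

def cup_to_disc_ratio(green_channel_roi):
    """Illustrative cup/disc segmentation via intensity thresholds.

    Thresholds adapt to the statistics of the optic-nerve-head ROI
    (mean + k*std); the k values are hypothetical and would need
    tuning on real fundus images. Returns cup area / disc area.
    """
    roi = green_channel_roi.astype(float)
    mu, sigma = roi.mean(), roi.std()
    disc_mask = roi > mu + 0.5 * sigma   # bright optic disc region
    cup_mask = roi > mu + 1.5 * sigma    # brightest central cup
    disc_area = disc_mask.sum()
    return cup_mask.sum() / disc_area if disc_area else 0.0
```

A higher ratio (enlarged cup relative to the disc) is the classic structural sign the classifier features encode.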

  13. Pattern-recognition techniques applied to performance monitoring of the DSS 13 34-meter antenna control assembly

    Mellstrom, J. A.; Smyth, P.

    1991-01-01

    The results of applying pattern recognition techniques to diagnose fault conditions in the pointing system of one of the Deep Space network's large antennas, the DSS 13 34-meter structure, are discussed. A previous article described an experiment whereby a neural network technique was used to identify fault classes by using data obtained from a simulation model of the Deep Space Network (DSN) 70-meter antenna system. Described here is the extension of these classification techniques to the analysis of real data from the field. The general architecture and philosophy of an autonomous monitoring paradigm is described and classification results are discussed and analyzed in this context. Key features of this approach include a probabilistic time-varying context model, the effective integration of signal processing and system identification techniques with pattern recognition algorithms, and the ability to calibrate the system given limited amounts of training data. Reported here are recognition accuracies in the 97 to 98 percent range for the particular fault classes included in the experiments.

  14. A novel technique of in situ phase-shift interferometry applied for faint dissolution of bulky montmorillonite in alkaline solution

    The effect of alkaline pH on the dissolution rate of bulky aggregated montmorillonite samples at 23 °C was investigated for the first time by using an enhanced phase-shift interferometry technique combined with an internal refraction interferometry method developed for this study. This technique was applied to provide a molecular resolution during the optical observation of the dissolution phenomena in real time and in situ while remaining noninvasive. A theoretical normal resolution limit of this technique was 0.78 nm in water for opaque material, but was limited to 6.6 nm for montmorillonite due to the transparency of the montmorillonite crystal. Normal dissolution velocities as low as 1 × 10⁻⁴ to 1 × 10⁻³ nm/s were obtained directly by using the measured temporal change in height of montmorillonite samples set in a reaction cell. The molar dissolution fluxes of montmorillonite obtained in this study gave considerably faster dissolution rates in comparison to those obtained in previous investigations by solution analysis methods. The pH dependence of the montmorillonite dissolution rate determined in this study was qualitatively in good agreement with those reported in previous investigations. The dissolution rates to be used in safety assessments of geological repositories for radioactive wastes should be obtained for bulky samples. This goal has been difficult to achieve using the conventional powder experiment technique and solution analysis method, but has been shown to be feasible using the enhanced phase-shift interferometry. (author)

  15. Different perceptions of adaptation to climate change: a mental model approach applied to the evidence from expert interviews

    Otto-Banaszak, I.; Matczak, P.; Wesseler, J.H.H.; Wechsung, F.

    2011-01-01

    We argue that differences in the perception and governance of adaptation to climate change and extreme weather events are related to sets of beliefs and concepts through which people understand the environment and which are used to solve the problems they face (mental models). Using data gathered in

  16. Occlusion Culling Algorithm Using Prefetching and Adaptive Level of Detail Technique

    ZHENG Fu-ren; ZHAN Shou-yi; Yang Bing

    2006-01-01

    A novel approach that integrates occlusion culling within the view-dependent rendering framework is proposed. The algorithm uses the prioritized-layered projection (PLP) algorithm to cull occluded objects, and uses an approximate visibility technique to accurately and efficiently determine which objects will become visible in the near future and prefetch those objects from disk before they are rendered. The view-dependent rendering technique provides the ability to change the level of detail over the surface seamlessly and smoothly in real time, according to cell solidity values.

  17. Path Integral Molecular Dynamics within the Grand Canonical-like Adaptive Resolution Technique: Quantum-Classical Simulation of Liquid Water

    Agarwal, Animesh

    2015-01-01

    Quantum effects due to the spatial delocalization of light atoms are treated in molecular simulation via the path integral technique. Among several methods, Path Integral (PI) Molecular Dynamics (MD) is nowadays a powerful tool to investigate properties induced by the spatial delocalization of atoms; however, computationally this technique is very demanding. The above-mentioned limitation implies the restriction of PIMD applications to relatively small systems and short time scales. One possible solution to overcome the size and time limitations is to introduce PIMD algorithms into the Adaptive Resolution Simulation Scheme (AdResS). AdResS requires a relatively small region treated at the path integral level and embeds it into a large molecular reservoir consisting of generic spherical coarse-grained molecules. It was previously shown that the realization of the idea above, at a simple level, produced reasonable results for toy systems or simple/test systems like liquid parahydrogen. Encouraged by previous results, in this ...

  18. Design of a Stability Augmentation System for an Unmanned Helicopter Based on Adaptive Control Techniques

    Shouzhao Sheng

    2015-09-01

    Full Text Available The task of control of unmanned helicopters is rather complicated in the presence of parametric uncertainties and measurement noises. This paper presents an adaptive model feedback control algorithm for an unmanned helicopter stability augmentation system. The proposed algorithm can achieve a guaranteed model reference tracking performance and speed up the convergence rates of adjustable parameters, even when the plant parameters vary rapidly. Moreover, the model feedback strategy in the algorithm further contributes to the improvement in the control quality of the stability augmentation system in the case of low signal to noise ratios, mainly because the model feedback path is noise free. The effectiveness and superiority of the proposed algorithm are demonstrated through a series of tests.

  19. CLUSTERING BASED ADAPTIVE IMAGE COMPRESSION SCHEME USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    M.Mohamed Ismail,

    2010-10-01

    Full Text Available This paper presents an image compression scheme that uses the particle swarm optimization (PSO) technique for clustering. PSO is a powerful general-purpose optimization technique that uses the concept of fitness. It provides a mechanism by which individuals in the swarm communicate and exchange information, similar to the social behaviour of insects and human beings. By mimicking the social sharing of information, PSO directs particles to search the solution space more efficiently. PSO is like a GA in that the population is initialized with random potential solutions. The adjustment towards the best individual experience (PBEST) and the best social experience (GBEST) is conceptually similar to the crossover operation of a GA. However, unlike a GA, each potential solution, called a particle, flies through the solution space with a velocity. Moreover, the particles and the swarm have memory, which does not exist in the population of a GA. This optimization technique is used in image compression, and better results have been obtained in terms of PSNR, CR and the visual quality of the image when compared to other existing methods.
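    The PBEST/GBEST adjustment the abstract describes is the standard PSO velocity rule. A generic minimal implementation follows; the inertia and acceleration coefficients are conventional defaults, not values from the paper, and the demo objective is a plain sphere function rather than the clustering fitness used for compression:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimization (generic sketch).

    Each particle keeps its personal best (PBEST); the swarm keeps a
    global best (GBEST). Velocities are nudged toward both, which is
    the 'memory' that distinguishes PSO from a GA's crossover.
    """
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For the compression application, `f` would score a candidate set of cluster centroids (e.g. by reconstruction error), with PSNR and CR computed from the resulting quantized image.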

  20. Problems on holographic imaging technique and adapt lasers for bubble chambers

    Different types of holographic recording technique for bubble chambers are presented and compared. The influence of turbulence on resolution is discussed as well as the demand on laser equipment. Experiments on a test model of HOLEBC using a pulsed ruby laser are also presented. (orig.)

  1. Problems on holographic imaging technique and adapt lasers for bubble chambers

    Bjelkhagen, H I

    1982-01-01

    Different types of holographic recording technique for bubble chambers are presented and compared. The influence of turbulence on resolution is discussed as well as the demand on laser equipment. Experiments on a test model of HOLEBC using a pulsed ruby laser are also presented.

  2. Applying a nonlinear, pitch-catch, ultrasonic technique for the detection of kissing bonds in friction stir welds.

    Delrue, Steven; Tabatabaeipour, Morteza; Hettler, Jan; Van Den Abeele, Koen

    2016-05-01

    Friction stir welding (FSW) is a promising technology for joining aluminum alloys and other metallic admixtures that are hard to weld by conventional fusion welding. Although FSW generally provides better fatigue properties than traditional fusion welding methods, its fatigue properties are still significantly lower than those of the base material. Apart from voids, kissing bonds, for instance in the form of closed cracks propagating along the interface of the stirred and heat-affected zones, are inherent features of the weld and can be considered one of the main causes of the reduced fatigue life of FSW joints in comparison to the base material. The main problem with kissing bond defects in FSW is that they are currently very difficult to detect using existing NDT methods. Moreover, in most cases the defects are not directly accessible from the exposed surface. Therefore, new techniques capable of detecting small kissing bond flaws need to be introduced. In the present paper, a novel and practical approach is introduced based on a nonlinear, single-sided, ultrasonic technique. The proposed inspection technique uses two single-element transducers: the first transducer transmits an ultrasonic signal that focuses the ultrasonic waves at the bottom side of the sample, where cracks are most likely to occur. The large amount of energy at the focus activates the kissing bond, generating nonlinear features in the wave propagation. These nonlinear features are then captured by the second transducer operating in pitch-catch mode and are analyzed, using pulse inversion, to reveal the presence of a defect. The performance of the proposed nonlinear pitch-catch technique is first illustrated using a numerical study of an aluminum sample containing simple, vertically oriented, incipient cracks. Later, the proposed technique is also applied experimentally to a real-life friction stir welded butt joint containing a kissing bond flaw. PMID:26921559
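The pulse-inversion analysis mentioned above can be illustrated with a synthetic one-dimensional signal: summing the responses to a pulse and to its inverted copy cancels the linear part, so any residual flags even-order nonlinearity such as that produced by a clapping kissing bond. The quadratic response model below is a toy assumption, not the paper's simulation:

```python
import numpy as np

def respond(pulse, alpha):
    """Toy medium response: linear propagation plus quadratic distortion
    of strength alpha (alpha > 0 mimics a contact-type defect)."""
    return pulse + alpha * pulse ** 2

t = np.linspace(0.0, 1e-5, 1000)
pulse = np.sin(2 * np.pi * 1e6 * t) * np.hanning(t.size)  # windowed 1 MHz burst

def pulse_inversion_residual(alpha):
    """Sum the responses to the pulse and its inverted copy:
    linear terms cancel, even-order nonlinearity survives."""
    return float(np.max(np.abs(respond(pulse, alpha) + respond(-pulse, alpha))))

print(pulse_inversion_residual(0.0), pulse_inversion_residual(0.3))
```

For the intact case (alpha = 0) the residual is numerically zero; with the quadratic defect term it is clearly nonzero.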

  3. Multivariate class modeling techniques applied to multielement analysis for the verification of the geographical origin of chili pepper.

    Naccarato, Attilio; Furia, Emilia; Sindona, Giovanni; Tagarelli, Antonio

    2016-09-01

    Four class-modeling techniques (soft independent modeling of class analogy (SIMCA), unequal dispersed classes (UNEQ), potential functions (PF), and multivariate range modeling (MRM)) were applied to multielement distributions to build chemometric models able to authenticate chili pepper samples grown in Calabria against those grown outside Calabria. The multivariate techniques were applied by considering both all the variables (32 elements: Al, As, Ba, Ca, Cd, Ce, Co, Cr, Cs, Cu, Dy, Fe, Ga, La, Li, Mg, Mn, Na, Nd, Ni, Pb, Pr, Rb, Sc, Se, Sr, Tl, Tm, V, Y, Yb, Zn) and variables selected by means of stepwise linear discriminant analysis (S-LDA). In the first case, satisfactory and comparable results in terms of CV efficiency were obtained with SIMCA and MRM (82.3 and 83.2%, respectively), whereas MRM performed better than SIMCA in terms of forced model efficiency (96.5%). The selection of variables by S-LDA permitted the building of models characterized, in general, by higher efficiency. MRM again provided the best results for CV efficiency (87.7%, with an effective balance of sensitivity and specificity) as well as forced model efficiency (96.5%). PMID:27041319
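Of the four techniques, multivariate range modeling (MRM) is the simplest to sketch: the class model is a per-variable interval learned from the training class, and a sample is accepted only if every element concentration falls inside its interval. The tolerance and synthetic data below are assumptions for illustration:

```python
import numpy as np

def mrm_fit(X, tol=0.05):
    """Fit a multivariate range model: per-element min/max over the
    training class, expanded by a small tolerance (assumed 5%)."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = hi - lo
    return lo - tol * span, hi + tol * span

def mrm_accept(x, lo, hi):
    """A sample is accepted by the class model only if every
    element concentration lies inside the modelled range."""
    return bool(np.all((x >= lo) & (x <= hi)))

rng = np.random.default_rng(0)
calabria = rng.normal(10.0, 1.0, size=(40, 5))   # synthetic 5-element profiles
lo, hi = mrm_fit(calabria)
inside = calabria[0]
outside = inside + 8.0                            # strongly shifted profile
print(mrm_accept(inside, lo, hi), mrm_accept(outside, lo, hi))
```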

  4. 2D and 3D optical diagnostic techniques applied to Madonna dei Fusi by Leonardo da Vinci

    Fontana, R.; Gambino, M. C.; Greco, M.; Marras, L.; Materazzi, M.; Pampaloni, E.; Pelagotti, A.; Pezzati, L.; Poggi, P.; Sanapo, C.

    2005-06-01

    3D measurement and modelling have traditionally been applied to statues, buildings, archaeological sites, and similar large structures, but rarely to paintings. Recently, however, 3D measurements have also been performed successfully on easel paintings, making it possible to detect and document the painting's surface. We used 3D models to integrate the results of various 2D imaging techniques in a common reference frame. These applications show how 3D shape information, complemented with 2D colour maps as well as with other types of sensory data, provides the most interesting information. The 3D data acquisition was carried out by means of two devices: a high-resolution laser micro-profilometer, composed of a commercial distance meter mounted on a scanning device, and a laser-line scanner. The 2D data acquisitions were carried out using a scanning device for simultaneous RGB colour imaging and IR reflectography, and a UV fluorescence multispectral image acquisition system. We present here the results of these techniques applied to the analysis of an important painting of the Italian Renaissance: `Madonna dei Fusi', attributed to Leonardo da Vinci.

  5. Adaptive Scheduling Applied to Non-Deterministic Networks of Heterogeneous Tasks for Peak Throughput in Concurrent Gaudi

    AUTHOR|(CDS)2070032; Clemencic, Marco

    As much as e-Science revolutionizes the scientific method in empirical research and scientific theory, it also poses the ever-growing challenge of an accelerating data deluge. High energy physics (HEP) is a prominent representative of data-intensive science and requires scalable high-throughput software to cope with the associated computational endeavors. One striking example is $\\text G\\rm \\small{AUDI}$ -- an experiment-independent software framework used in several frontier HEP experiments. Among them stand ATLAS and LHCb -- two of the four mainstream experiments at the Large Hadron Collider (LHC) at CERN, the European Laboratory for Particle Physics. The framework is currently undergoing an architectural revolution aiming at massively concurrent and adaptive data processing. In this work I explore new dimensions of performance improvement for the next generation $\\text G\\rm \\small{AUDI}$. I then propose a complex of generic task scheduling solutions for adaptive and non-intrusive throu...

  6. A region growing technique adapted to precise micro calcification characterization in mammography

    Today, mammography is the only breast screening technique capable of detecting breast cancer at a very early stage. The presence of a breast tumor is indicated by certain features on the mammogram. One sign of malignancy is the presence of clusters of fine, granular micro-calcifications. We present here a new three-step method for detecting and characterizing these micro-calcifications. We begin with the detection of potential candidates ('seeds'). The aim of this first step is to detect all the pixels likely to belong to a micro-calcification. We then focus on our specific region growing technique, which provides an accurate extraction of the shape of the region corresponding to each detected seed. This second step is essential because the shape of micro-calcifications is a very important feature for diagnosis. It is then possible to determine precise parameters to characterize these micro-calcifications. (authors)
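The second step can be sketched with a standard intensity-based region-growing pass (a generic version; the paper's specific growth criterion is not reproduced here):

```python
from collections import deque

import numpy as np

def region_grow(img, seed, tol=20):
    """Grow a region from a seed pixel, adding 4-connected neighbours
    whose intensity stays within `tol` of the seed value."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    mask[seed] = True
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(float(img[nr, nc]) - seed_val) <= tol:
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

# bright blob (a mock micro-calcification) on a dark background
img = np.zeros((32, 32), dtype=np.uint8)
img[10:15, 10:15] = 200
mask = region_grow(img, seed=(12, 12))
print(mask.sum())   # area of the extracted region → 25
```

The extracted mask, rather than the raw seed, is what supplies the shape parameters used for diagnosis.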

  7. Adaptive Techniques for Minimizing Middleware Memory Footprint for Distributed, Real-Time, Embedded Systems

    Panahi, Mark; Harmon, Trevor; Klefstad, Raymond

    2003-01-01

    In order for middleware to be widely useful for distributed, real-time, and embedded systems, it should provide a full set of services and be easily customizable to meet the memory footprint limitations of embedded systems. In this paper, we examine a variety of techniques used to reduce memory footprint in middleware. We found that combining aspect-oriented programming with code shrinkers and obfuscators reduces the memory footprint of CORBA middleware to

  8. Adaptive proactive reconfiguration: a technique for process variability and aging aware SRAM cache design

    Pouyan, Peyman; Amat Bertran, Esteve; Rubio Sola, Jose Antonio

    2014-01-01

    Nanoscale circuits are subject to a wide range of new limiting phenomena making essential to investigate new design strategies at the circuit and architecture level to improve its performance and reliability. Proactive reconfiguration is an emerging technique oriented to extend the system lifetime of memories affected by aging. In this brief, we present a new approach for static random access memory (SRAM) design that extends the cache lifetime when considering process variation and aging in ...

  9. An Adaptive Single-Well Stochastic Resonance Algorithm Applied to Trace Analysis of Clenbuterol in Human Urine

    Shaofei Xie; Bingren Xiang; Suyun Xiang; Wei Wang

    2012-01-01

    Based on the theory of stochastic resonance, an adaptive single-well stochastic resonance (ASSR) coupled with a genetic algorithm was developed to enhance the signal-to-noise ratio of weak chromatographic signals. In conventional stochastic resonance algorithms, two or more parameters need to be optimized, and the proper parameter values are obtained by a universal search within a given range. In the developed ASSR, the optimization of the system parameter was simplified and automati...
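As an illustrative toy (not the paper's ASSR, whose details are truncated above), the sketch below integrates a single-well system dx/dt = -b*x^3 + s(t) and searches its one parameter b for the value that maximizes a spectral signal-to-noise ratio, mimicking the idea of adaptively tuning a single system parameter:

```python
import numpy as np

def single_well_output(sig, b, dt):
    """Integrate the overdamped single-well system dx/dt = -b*x**3 + s(t)
    with explicit Euler steps."""
    x, out = 0.0, np.empty_like(sig)
    for k, s in enumerate(sig):
        x += (-b * x ** 3 + s) * dt
        out[k] = x
    return out

def spectral_snr(x, dt, f0):
    """Power at the drive frequency divided by the mean background power."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, dt)
    k0 = int(np.argmin(np.abs(freqs - f0)))
    background = (spec.sum() - spec[k0]) / (spec.size - 1)
    return float(spec[k0] / background)

rng = np.random.default_rng(1)
dt, f0, n = 1e-3, 5.0, 4000
t = np.arange(n) * dt
noisy = 0.3 * np.sin(2 * np.pi * f0 * t) + rng.normal(0.0, 1.0, n)

# adaptive step, reduced to a single parameter: search b for the peak SNR
candidates = np.logspace(0, 4, 30)
snrs = [spectral_snr(single_well_output(noisy, b, dt), dt, f0) for b in candidates]
best_b = float(candidates[int(np.argmax(snrs))])
```

A genetic algorithm, as in the abstract, would simply replace this grid search over b with an evolutionary one.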

  10. A Method to Select an Instrument for Measurement of HR-QOL for Cross-Cultural Adaptation Applied to Dermatology

    Adolfo Ga de Tiedra; Joan Mercadal; Xavier Badia; Jose Ma Mascaro; Rafael Lozano

    1998-01-01

    Objective: The objective of this study was to develop a process to obtain an instrument to measure dermatology specific health-related quality of life (HR-QOL), and to adapt it into another culture, namely the Spanish-speaking community. Design and Setting: By consensus, a multi-disciplinary team determined the qualities of an `ideal' questionnaire as follows: need (absence of any such instrument), utility, multi-dimensionality, psychometric development, simplicity, high degree of standardisa...

  11. Applied research on air pollution using nuclear-related analytical techniques. Report on the second research co-ordination meeting

    A co-ordinated research programme (CRP) on applied research on air pollution using nuclear-related techniques is a global CRP which started in 1992, and is scheduled to run until early 1997. The purpose of this CRP is to promote the use of nuclear analytical techniques in air pollution studies, e.g. NAA, XRF, and PIXE for the analysis of toxic and other trace elements in air particulate matter. The main purposes of the core programme are i) to support the use of nuclear and nuclear-related analytical techniques for research and monitoring studies on air pollution, ii) to identify major sources of air pollution affecting each of the participating countries with particular reference to toxic heavy metals, and iii) to obtain comparative data on pollution levels in areas of high pollution (e.g. a city centre or a populated area downwind of a large pollution source) and low pollution (e.g. rural area). This document reports the discussions held during the second Research Co-ordination Meeting (RCM) for the CRP which took place at ANSTO in Menai, Australia. (author)

  12. Develop techniques for ion implantation of PLZT [lead-lanthanum-zirconate-titanate] for adaptive optics

    Research was conducted at Pacific Northwest Laboratory to develop high-photosensitivity adaptive optical elements utilizing ion-implanted lanthanum-doped lead zirconate titanate (PLZT). One-centimeter-square samples were prepared by implanting ferroelectric and anti-ferroelectric PLZT with a variety of species or combinations of species. These included Ne, O, Ni, Ne/Cr, Ne/Al, Ne/Ni, Ne/O, and Ni/O, at a variety of energies and fluences. An indium-tin oxide (ITO) electrode coating was designed to give a balance of high conductivity and optical transmission at near-UV to near-IR wavelengths. Samples were characterized for photosensitivity; implanted layer thickness, index of refraction, and density; electrode (ITO) conductivity; and in some cases, residual stress curvature. Thin-film anti-ferroelectric PLZT was deposited in a preliminary experiment. The structure was amorphous, with x-ray diffraction showing the beginnings of a structure at substrate temperatures of approximately 550°C. This report summarizes the research and provides a sampling of the data taken during the report period.

  13. A framework for automated contour quality assurance in radiation therapy including adaptive techniques

    Contouring of targets and normal tissues is one of the largest sources of variability in radiation therapy treatment plans. Contours thus require a time intensive and error-prone quality assurance (QA) evaluation, limitations which also impair the facilitation of adaptive radiotherapy (ART). Here, an automated system for contour QA is developed using historical data (the ‘knowledge base’). A pilot study was performed with a knowledge base derived from 9 contours each from 29 head-and-neck treatment plans. Size, shape, relative position, and other clinically-relevant metrics and heuristically derived rules are determined. Metrics are extracted from input patient data and compared against rules determined from the knowledge base; a computer-learning component allows metrics to evolve with more input data, including patient specific data for ART. Nine additional plans containing 42 unique contouring errors were analyzed. 40/42 errors were detected as were 9 false positives. The results of this study imply knowledge-based contour QA could potentially enhance the safety and effectiveness of RT treatment plans as well as increase the efficiency of the treatment planning process, reducing labor and the cost of therapy for patients. (paper)

  14. A framework for automated contour quality assurance in radiation therapy including adaptive techniques

    Altman, M. B.; Kavanaugh, J. A.; Wooten, H. O.; Green, O. L.; DeWees, T. A.; Gay, H.; Thorstad, W. L.; Li, H.; Mutic, S.

    2015-07-01

    Contouring of targets and normal tissues is one of the largest sources of variability in radiation therapy treatment plans. Contours thus require a time intensive and error-prone quality assurance (QA) evaluation, limitations which also impair the facilitation of adaptive radiotherapy (ART). Here, an automated system for contour QA is developed using historical data (the ‘knowledge base’). A pilot study was performed with a knowledge base derived from 9 contours each from 29 head-and-neck treatment plans. Size, shape, relative position, and other clinically-relevant metrics and heuristically derived rules are determined. Metrics are extracted from input patient data and compared against rules determined from the knowledge base; a computer-learning component allows metrics to evolve with more input data, including patient specific data for ART. Nine additional plans containing 42 unique contouring errors were analyzed. 40/42 errors were detected as were 9 false positives. The results of this study imply knowledge-based contour QA could potentially enhance the safety and effectiveness of RT treatment plans as well as increase the efficiency of the treatment planning process, reducing labor and the cost of therapy for patients.
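The compare-against-learned-rules step can be sketched with a simple mean ± 3σ acceptance interval per metric (the interval rule and the toy metrics are assumptions; the paper derives richer, heuristic rules from its knowledge base):

```python
import numpy as np

def build_rules(kb):
    """Derive per-metric acceptance intervals (assumed mean ± 3σ)
    from the historical knowledge base of approved plans."""
    return {m: (np.mean(v) - 3 * np.std(v), np.mean(v) + 3 * np.std(v))
            for m, v in kb.items()}

def qa_check(contour_metrics, rules):
    """Return the list of metrics that violate the learned rules."""
    return [m for m, val in contour_metrics.items()
            if not rules[m][0] <= val <= rules[m][1]]

rng = np.random.default_rng(42)
# toy knowledge base: volume (cc) and centroid offset (mm) for one structure
kb = {"volume_cc": rng.normal(30, 2, 29), "centroid_mm": rng.normal(5, 1, 29)}
rules = build_rules(kb)

ok = {"volume_cc": 31.0, "centroid_mm": 5.5}
erroneous = {"volume_cc": 55.0, "centroid_mm": 5.5}  # grossly oversized contour
print(qa_check(ok, rules), qa_check(erroneous, rules))
```

The computer-learning component in the paper corresponds to re-deriving the rules as new approved plans, including ART-specific patient data, enter the knowledge base.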

  15. An improved adaptive kriging-based importance technique for sampling multiple failure regions of low probability

    The estimation of system failure probabilities may be a difficult task when the values involved are very small, so that sampling-based Monte Carlo methods may become computationally impractical, especially if the computer codes used to model the system response require large computational efforts, both in terms of time and memory. This paper proposes a modification of an algorithm proposed in literature for the efficient estimation of small failure probabilities, which combines FORM to an adaptive kriging-based importance sampling strategy (AK-IS). The modification allows overcoming an important limitation of the original AK-IS in that it provides the algorithm with the flexibility to deal with multiple failure regions characterized by complex, non-linear limit states. The modified algorithm is shown to offer satisfactory results with reference to four case studies of literature, outperforming in general several other alternative methods of literature. - Highlights: • We tackle low failure probability estimation within reliability analysis context. • We improve a kriging-based importance sampling for estimating failure probabilities. • The new algorithm is capable of dealing with multiple-disconnected failure regions. • The performances are better than other methods of literature on 4 test case-studies
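Setting the kriging surrogate aside, the importance-sampling core can be illustrated on a scalar toy problem: estimating P(Z > 4) for a standard normal, where crude Monte Carlo almost never hits the failure region but a proposal centred at the design point does (values below are assumptions for illustration, not from the paper's case studies):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 4.0                     # failure when a standard normal exceeds 4
p_true = 3.167e-5              # 1 - Phi(4), for reference

n = 10_000
# crude Monte Carlo: with p ~ 3e-5 it rarely sees the failure region at all
crude = rng.standard_normal(n)
p_mc = float(np.mean(crude > beta))

# importance sampling: draw from a normal centred at the design point beta,
# and reweight each sample by the density ratio f(z)/q(z)
z = rng.normal(beta, 1.0, n)
weights = np.exp(-0.5 * z ** 2) / np.exp(-0.5 * (z - beta) ** 2)
p_is = float(np.mean((z > beta) * weights))
print(p_mc, p_is)
```

The AK-IS idea is to let an adaptive kriging surrogate locate such design points automatically, including in multiple disconnected failure regions, before the reweighted sampling is performed.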

  16. A Novel Grid Impedance Estimation Technique based on Adaptive Virtual Resistance Control Loop Applied to Distributed Generation Inverters

    Ghzaiel, Walid; Jebali-Ben Ghorbal, Manel; Slama-Belkhodja, Ilhem; Guerrero, Josep M.

    control and to take the decision of either keep the DG connected, or disconnect it from the utility grid. The proposed method is based on a fast and easy grid fault detection method. A virtual damping resistance is used to drive the system to the resonance in order to extract the grid impedance parameters...

  17. Adapting content-based image retrieval techniques for the semantic annotation of medical images.

    Kumar, Ashnil; Dyer, Shane; Kim, Jinman; Li, Changyang; Leong, Philip H W; Fulham, Michael; Feng, Dagan

    2016-04-01

    The automatic annotation of medical images is a prerequisite for building comprehensive semantic archives that can be used to enhance evidence-based diagnosis, physician education, and biomedical research. Annotation also has important applications in the automatic generation of structured radiology reports. Much of the prior research work has focused on annotating images with properties such as the modality of the image, or the biological system or body region being imaged. However, many challenges remain for the annotation of high-level semantic content in medical images (e.g., presence of calcification, vessel obstruction, etc.) due to the difficulty of discovering relationships and associations between low-level image features and high-level semantic concepts. This difficulty is further compounded by the lack of labelled training data. In this paper, we present a method for the automatic semantic annotation of medical images that leverages techniques from content-based image retrieval (CBIR). CBIR is a well-established image search technology that uses quantifiable low-level image features to represent the high-level semantic content depicted in those images. Our method extends CBIR techniques to identify or retrieve a collection of labelled images that have similar low-level features and then uses this collection to determine the best high-level semantic annotations. We demonstrate our annotation method using weighted nearest-neighbour retrieval and multi-class classification to show that our approach is viable regardless of the underlying retrieval strategy. We experimentally compared our method with several well-established baseline techniques (classification and regression) and showed that our method achieved the highest accuracy in the annotation of liver computed tomography (CT) images. PMID:26890880
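The retrieval-then-annotate idea can be sketched as weighted nearest-neighbour label voting over low-level feature vectors (synthetic features and an assumed 1/distance weighting, for illustration only, not the paper's feature set):

```python
from collections import Counter

import numpy as np

def annotate_knn(query, feat_db, label_db, k=5):
    """Annotate a query image by retrieving the k images with the
    nearest low-level feature vectors and voting on their labels."""
    dists = np.linalg.norm(feat_db - query, axis=1)
    nearest = np.argsort(dists)[:k]
    # weighted vote: closer matches count more (assumed 1/(d+eps) weighting)
    votes = Counter()
    for i in nearest:
        votes[label_db[i]] += 1.0 / (dists[i] + 1e-9)
    return votes.most_common(1)[0][0]

rng = np.random.default_rng(3)
# synthetic "texture features": two semantic classes in a 2-D feature space
calc = rng.normal([0.8, 0.2], 0.05, size=(20, 2))
clear = rng.normal([0.2, 0.7], 0.05, size=(20, 2))
feat_db = np.vstack([calc, clear])
label_db = ["calcification"] * 20 + ["no calcification"] * 20

query = np.array([0.78, 0.22])   # resembles the calcification cluster
print(annotate_knn(query, feat_db, label_db))   # → calcification
```

Swapping the voting rule for a trained multi-class classifier over the retrieved set gives the second strategy evaluated in the paper.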

  18. Nuclear power plant status diagnostics using simulated condensation: An auto-adaptive computer learning technique

    The application of artificial neural network concepts to engineering analysis involves training networks, and therefore computers, to perform pattern classification or function mapping tasks. This training process requires the near optimization of network inter-neural connections. A new method for the stochastic optimization of these interconnections is presented in this dissertation. The new approach, called simulated condensation, is applied to networks of generalized, fully interconnected, continuous perceptrons. Simulated condensation optimizes the nodal bias, gain, and output activation constants as well as the usual interconnection weights. In this work, the simulated condensation network paradigm is applied to nuclear power plant operating status recognition. A set of standard problems, such as the exclusive-or problem and others, is also analyzed as a benchmark for the new methodology. The objective of the nuclear power plant accident condition diagnosis effort is to train a network to identify both safe and potentially unsafe power plant conditions based on real-time plant data. The data are obtained from computer-generated accident scenarios. A simulated condensation network is trained to recognize seven nuclear power plant accident conditions as well as the normal full-power operating condition. These accidents include hot- and cold-leg loss of coolant, control rod ejection, and steam generator tube leak accidents. Twenty-seven plant process variables are used as input to the neural network. Results show the feasibility of using simulated condensation as a method for diagnosing nuclear power plant conditions. The method is general and can easily be applied to other types of plants and plant processes
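Simulated condensation itself is specific to this dissertation, so as a stand-in the sketch below trains a tiny network on the XOR benchmark mentioned in the abstract using a generic annealing-style stochastic weight search (all constants, the network size, and the cooling schedule are assumptions):

```python
import math
import random

def net(params, x1, x2):
    """Tiny 2-2-1 network with tanh units; params holds 9 weights/biases."""
    w = params
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]

def loss(params):
    return sum((net(params, x1, x2) - y) ** 2 for (x1, x2), y in XOR)

rng = random.Random(7)
params = [rng.uniform(-1, 1) for _ in range(9)]
init_loss = loss(params)
best, best_loss = params[:], init_loss
temp = 1.0
for step in range(5000):
    trial = [p + rng.gauss(0, 0.3) for p in params]   # perturb all parameters
    d = loss(trial) - loss(params)
    # anneal: always accept improvements, sometimes accept worse moves
    if d < 0 or rng.random() < math.exp(-d / temp):
        params = trial
        if loss(params) < best_loss:
            best, best_loss = params[:], loss(params)
    temp *= 0.999                                      # cooling schedule
print(init_loss, best_loss)
```

The dissertation's method additionally folds the nodal bias, gain, and activation constants into the same stochastic search.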

  19. Analysis of Adaptive Fuzzy Technique for Multiple Crack Diagnosis of Faulty Beam Using Vibration Signatures

    Amiya Kumar Dash

    2013-01-01

    Full Text Available This paper discusses multi-crack detection in structures using a fuzzy Gaussian technique. The vibration parameters derived from numerical analysis of the cracked cantilever beam are used to set several fuzzy rules for designing the fuzzy controller used to predict crack location and depth. Relative crack locations and relative crack depths are the output parameters of the fuzzy inference system. The method proposed in the current analysis is used to evaluate the dynamic response of the cracked cantilever beam. The results of the proposed method are in good agreement with the results obtained from the developed experimental setup.
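A minimal Sugeno-style stand-in for such a fuzzy inference system, with Gaussian memberships over relative natural frequencies and invented rule centres (not the paper's rule base), might look like:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with centre c and spread s."""
    return float(np.exp(-0.5 * ((x - c) / s) ** 2))

def predict_depth(fnf1, fnf2):
    """Toy Sugeno-style fuzzy estimate of relative crack depth from two
    relative natural frequencies (1.0 = uncracked beam); the rules and
    constants are illustrative, not taken from the paper."""
    rules = [
        # (centre of fnf1, centre of fnf2, consequent relative depth)
        (1.00, 1.00, 0.0),   # frequencies unchanged -> no crack
        (0.95, 0.97, 0.2),   # small frequency drop -> shallow crack
        (0.85, 0.90, 0.5),   # large frequency drop -> deep crack
    ]
    s = 0.03
    w = [gauss(fnf1, c1, s) * gauss(fnf2, c2, s) for c1, c2, _ in rules]
    # weighted average of rule consequents (Sugeno defuzzification)
    return sum(wi * d for wi, (_, _, d) in zip(w, rules)) / (sum(w) + 1e-12)

print(predict_depth(1.0, 1.0), predict_depth(0.85, 0.90))
```

A real system would use one such rule base per output (relative crack location and relative crack depth), with memberships tuned from the numerical vibration analysis.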

  20. Design and adaptation of technical services for ubiquitous and nomadic computing

    Lecomte, Sylvain

    2005-01-01

    Since the end of the 1990s, the development of mobile terminals and wireless networks has accelerated considerably. This has led to the emergence of new, widely distributed applications offering new services both to users (e-commerce applications, interactive television, proximity applications) and to businesses (growth of B2B commerce). With the emergence of these new applications, the technical services, which take...