WorldWideScience

Sample records for adaptive techniques applied

  1. Applying perceptual and adaptive learning techniques for teaching introductory histopathology

    Directory of Open Access Journals (Sweden)

    Sally Krasne

    2013-01-01

    Full Text Available Background: Medical students are expected to master the ability to interpret histopathologic images, a difficult and time-consuming process. A major problem is the issue of transferring information learned from one example of a particular pathology to a new example. Recent advances in cognitive science have identified new approaches to address this problem. Methods: We adapted a new approach for enhancing pattern recognition of basic pathologic processes in skin histopathology images that utilizes perceptual learning techniques, allowing learners to see relevant structure in novel cases, along with adaptive learning algorithms that space and sequence the different categories (e.g., diagnoses) that appear during a learning session based on each learner's accuracy and response time (RT). We developed a perceptual and adaptive learning module (PALM) that utilized 261 unique images of cell injury, inflammation, neoplasia, or normal histology at low and high magnification. Accuracy and RT were tracked and integrated into a "Score" that reflected students' rapid recognition of the pathologies, and pre- and post-tests were given to assess the effectiveness. Results: Accuracy, RT and Scores significantly improved from the pre- to post-test, with Scores showing much greater improvement than accuracy alone. Delayed post-tests with previously unseen cases, given after 6-7 weeks, showed a decline in accuracy relative to the post-test for 1st-year students, but not significantly so for 2nd-year students. However, the delayed post-test scores maintained a significant and large improvement relative to those of the pre-test for both 1st- and 2nd-year students, suggesting good retention of pattern recognition. Student evaluations were very favorable. Conclusion: A web-based learning module based on the principles of cognitive science showed evidence of improved recognition of histopathology patterns by medical students.
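
    The spacing-and-sequencing idea can be illustrated with a toy scheduler. The sketch below is a minimal stand-in, not PALM's published algorithm: it tracks accuracy and a smoothed response time per diagnostic category and preferentially serves the least mastered category (the class, parameters, and scoring heuristic are all hypothetical).

```python
class AdaptiveSequencer:
    """Toy adaptive sequencer: categories answered slowly or inaccurately
    are scheduled sooner. A simplified stand-in for PALM's logic."""

    def __init__(self, categories, target_rt=3.0):
        self.target_rt = target_rt            # seconds considered "fluent"
        self.stats = {c: {"correct": 0, "seen": 0, "rt": target_rt}
                      for c in categories}

    def record(self, category, correct, rt):
        s = self.stats[category]
        s["seen"] += 1
        s["correct"] += int(correct)
        s["rt"] = 0.7 * s["rt"] + 0.3 * rt    # smoothed response time

    def priority(self, category):
        s = self.stats[category]
        acc = s["correct"] / s["seen"] if s["seen"] else 0.0
        speed = min(self.target_rt / s["rt"], 1.0)
        return 1.0 - acc * speed              # low accuracy or slow RT -> high

    def next_category(self):
        return max(self.stats, key=self.priority)

seq = AdaptiveSequencer(["injury", "inflammation", "neoplasia", "normal"])
seq.record("injury", correct=True, rt=2.0)
seq.record("inflammation", correct=True, rt=3.0)
seq.record("normal", correct=True, rt=2.5)
seq.record("neoplasia", correct=False, rt=8.0)
print(seq.next_category())   # the least mastered category is served next
```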

  2. Wavelet-based Adaptive Techniques Applied to Turbulent Hypersonic Scramjet Intake Flows

    CERN Document Server

    Frauholz, Sarah; Reinartz, Birgit U; Müller, Siegfried; Behr, Marek

    2013-01-01

    The simulation of hypersonic flows is computationally demanding due to the large gradients of the flow variables caused by strong shock waves and thick boundary or shear layers. The resolution of those gradients imposes the use of extremely small cells in the respective regions. Taking turbulence into account intensifies the variation in scales even more. Furthermore, hypersonic flows have been shown to be extremely grid sensitive. For the simulation of three-dimensional configurations of engineering applications, this results in a huge number of cells and prohibitive computational time. Therefore, modern adaptive techniques can provide a gain with respect to computational costs and accuracy, allowing the generation of locally highly resolved flow regions where they are needed and retaining an otherwise smooth distribution. An h-adaptive technique based on wavelets is employed for the solution of hypersonic flows. The compressible Reynolds-averaged Navier-Stokes equations are solved using a differential Reynolds s...

  3. Acceptance and Mindfulness Techniques as Applied to Refugee and Ethnic Minority Populations with PTSD: Examples from "Culturally Adapted CBT"

    Science.gov (United States)

    Hinton, Devon E.; Pich, Vuth; Hofmann, Stefan G.; Otto, Michael W.

    2013-01-01

    In this article we illustrate how we utilize acceptance and mindfulness techniques in our treatment (Culturally Adapted CBT, or CA-CBT) for traumatized refugees and ethnic minority populations. We present a Nodal Network Model (NNM) of Affect to explain the treatment's emphasis on body-centered mindfulness techniques and its focus on psychological…

  4. Applying contemporary statistical techniques

    CERN Document Server

    Wilcox, Rand R

    2003-01-01

    Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible.
    * Assumes no previous training in statistics
    * Explains how and why modern statistical methods provide more accurate results than conventional methods
    * Covers the latest developments on multiple comparisons
    * Includes recent advanc

  5. Applied ALARA techniques

    Energy Technology Data Exchange (ETDEWEB)

    Waggoner, L.O.

    1998-02-05

    The presentation focuses on some of the time-proven and new technologies being used to accomplish radiological work. These techniques can be applied at nuclear facilities to reduce radiation doses and protect the environment. The last reactor plants and processing facilities were shut down and Hanford was given a new mission to put the facilities in a safe condition, decontaminate them, and prepare them for decommissioning. The skills that were necessary to operate these facilities were different from the skills needed today to clean up Hanford. Workers were not familiar with many of the tools, equipment, and materials needed to accomplish the new mission, which includes cleanup of contaminated areas in and around all the facilities, recovery of reactor fuel from spent fuel pools, and the removal of millions of gallons of highly radioactive waste from 177 underground tanks. In addition, this work has to be done with a reduced number of workers and a smaller budget. At Hanford, facilities contain a myriad of radioactive isotopes that are located inside plant systems, underground tanks, and the soil. As cleanup work at Hanford began, it became obvious early on that in order to get workers to apply ALARA and use new tools and equipment to accomplish the radiological work, it was necessary to plan the work in advance and get radiological control and/or ALARA committee personnel involved early in the planning process. Emphasis was placed on applying ALARA techniques to reduce dose, limit contamination spread, and minimize the amount of radioactive waste generated. Progress on the cleanup has been steady, and Hanford workers have learned to use different types of engineered controls and ALARA techniques to perform radiological work. The purpose of this presentation is to share the lessons learned on how Hanford is accomplishing radiological work.

  6. Techniques of English Textbooks Adaptation

    Institute of Scientific and Technical Information of China (English)

    张婧雯; 杨竞欧

    2014-01-01

    This essay aims to help English teachers evaluate and adapt current English textbooks. According to the different levels and majors of their students, English teachers can enhance the teaching materials and their teaching skills. This paper provides several useful techniques for teachers to use in evaluating and adapting teaching materials.

  7. Adaptive Control Using Residual Mode Filters Applied to Wind Turbines

    Science.gov (United States)

    Frost, Susan A.; Balas, Mark J.

    2011-01-01

    Many dynamic systems containing a large number of modes can benefit from adaptive control techniques, which are well suited to applications that have unknown parameters and poorly known operating conditions. In this paper, we focus on a model reference direct adaptive control approach that has been extended to handle adaptive rejection of persistent disturbances. We extend this adaptive control theory to accommodate problematic modal subsystems of a plant that inhibit the adaptive controller by causing the open-loop plant to be non-minimum phase. We will augment the adaptive controller using a Residual Mode Filter (RMF) to compensate for problematic modal subsystems, thereby allowing the system to satisfy the requirements for the adaptive controller to have guaranteed convergence and bounded gains. We apply these theoretical results to design an adaptive collective pitch controller for a high-fidelity simulation of a utility-scale, variable-speed wind turbine that has minimum phase zeros.
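
    The core mechanism of model reference direct adaptive control can be sketched for a scalar first-order plant. This is the textbook Lyapunov-based update law, not the wind-turbine pitch controller itself; the plant parameter, gains, and reference signal are illustrative.

```python
import math

# Plant:      dx/dt  = a*x + u       (a is unknown to the controller)
# Reference:  dxm/dt = -2*xm + 2*r   (desired closed-loop behaviour)
# Control:    u = -k*x + kr*r, with Lyapunov-based adaptation laws
#             dk/dt = g*e*x and dkr/dt = -g*e*r, where e = x - xm.
a, g, dt = 1.0, 10.0, 1e-3
x = xm = e = k = kr = 0.0
for i in range(100000):                 # 100 s of simulated time
    t = i * dt
    r = math.sin(0.5 * t)               # persistently exciting reference
    u = -k * x + kr * r
    e = x - xm
    x += dt * (a * x + u)               # forward-Euler integration
    xm += dt * (-2.0 * xm + 2.0 * r)
    k += dt * (g * e * x)
    kr += dt * (-g * e * r)
print(abs(e))   # tracking error is small once adaptation has converged
```

    With a persistently exciting reference, the gains also drift toward their ideal values (k toward a+2, kr toward 2), even though the plant parameter a is never measured directly.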

  8. Adaptive Educational Software by Applying Reinforcement Learning

    Science.gov (United States)

    Bennane, Abdellah

    2013-01-01

    The introduction of intelligence into teaching software is the object of this paper. In the software elaboration process, one uses learning techniques in order to adapt the teaching software to the characteristics of the student. Generally, one uses artificial intelligence techniques such as reinforcement learning or Bayesian networks in order to adapt…

  9. Adaptive multiresolution computations applied to detonations

    CERN Document Server

    Roussel, Olivier

    2015-01-01

    A space-time adaptive method is presented for the reactive Euler equations describing chemically reacting gas flow, where a two-species model is used for the chemistry. The governing equations are discretized with a finite volume method, and dynamic space adaptivity is introduced using multiresolution analysis. A Strang time-splitting method is applied to be able to consider stiff problems while keeping the method explicit. For time adaptivity, an improved Runge-Kutta-Fehlberg scheme is used. Applications deal with detonation problems in one and two space dimensions. A comparison of the adaptive scheme with reference computations on a regular grid allows an assessment of the accuracy and the computational efficiency, in terms of CPU time and memory requirements.
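
    The Strang splitting idea can be sketched on a scalar toy problem in which both substeps are integrable in closed form, so only the splitting error remains. The reactive Euler system itself is far beyond this sketch; the equation, stiffness parameter, and step sizes below are illustrative.

```python
import math

# Strang splitting on the toy stiff ODE  du/dt = -lam*u - u**2:
# a half step of the stiff linear part, a full step of the nonlinear
# part, then another linear half step. Each substep is solved exactly,
# so the remaining error is the (second-order) splitting error.
lam, u0, T = 10.0, 1.0, 0.5

def strang(dt):
    u, n = u0, round(T / dt)
    for _ in range(n):
        u *= math.exp(-lam * dt / 2)   # half step of u' = -lam*u
        u = u / (1 + u * dt)           # full step of u' = -u**2 (exact)
        u *= math.exp(-lam * dt / 2)   # half step of u' = -lam*u
    return u

# exact solution of the full (Bernoulli) equation for comparison
exact = 1.0 / ((1.0 / u0 + 1.0 / lam) * math.exp(lam * T) - 1.0 / lam)
e1 = abs(strang(0.01) - exact)
e2 = abs(strang(0.005) - exact)
print(e1 / e2)   # an error ratio near 4 confirms second-order accuracy
```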

  10. A REVIEW OF ADAPTIVE AUTORECLOSURE TECHNIQUES

    Directory of Open Access Journals (Sweden)

    PHILIP YAW OKYERE

    2010-10-01

    Full Text Available Adaptive autoreclosing is a fast-emerging technology for improving power system marginal stability during faults. It avoids reclosing onto permanent faults and recloses onto transient faults only after the secondary arc has extinguished. This paper presents a comprehensive review of various adaptive autoreclosure techniques. It aims at providing a broad perspective on adaptive autoreclosing techniques to researchers and application engineers.

  11. Development of applied optical techniques

    International Nuclear Information System (INIS)

    This report presents the status of research on the applications of lasers at KAERI. A compact portable laser fluorometer detecting uranium dissolved in aqueous solution was built. The laser-induced fluorescence of uranium was detected with a photomultiplier tube. A delayed gate circuit and an integrating circuit were used to process the electrical signal. A small nitrogen laser was used to excite the uranium. The detection limit is about 0.1 ppb. The effect of various acidic solutions was investigated. A standard addition technique was incorporated to improve the measuring accuracy. This instrument can be used for safety inspection of workers in nuclear fuel cycle facilities. (Author)

  12. Adaptive Educational Software by Applying Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Abdellah BENNANE

    2013-04-01

    Full Text Available The introduction of intelligence into teaching software is the object of this paper. In the software elaboration process, one uses learning techniques in order to adapt the teaching software to the characteristics of the student. Generally, one uses artificial intelligence techniques such as reinforcement learning or Bayesian networks in order to adapt the system to internal and external environmental conditions, and to allow the system to interact efficiently with its potential users. The intention is to automate and manage the pedagogical process of a tutoring system, in particular the selection of the content and manner of pedagogic situations. Researchers create a pedagogic learning agent that simplifies the manual logic and supports progress and the management of the teaching process (tutor-learner) through natural interactions.
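
    A minimal reinforcement-learning sketch of content selection is shown below. It is a toy illustration of the idea only (tabular Q-learning on a four-level mastery chain with hypothetical states, actions, and rewards), not the system described in the paper.

```python
import random

# Toy tutoring policy learned by Q-learning: states are mastery levels
# 0..3, actions are 0 = "review" (stay, small reward) and 1 = "advance"
# (move up one level). Reaching level 3 ends the episode with reward 10.
random.seed(0)
N_STATES, ACTIONS, GOAL = 4, (0, 1), 3
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s, a):
    if a == 1:                   # advance to the next pedagogic situation
        s2 = s + 1
        return s2, (10.0 if s2 == GOAL else 0.0), s2 == GOAL
    return s, 0.1, False         # review: small immediate reward

for _ in range(2000):            # training episodes
    s, done = 0, False
    while not done:
        if random.random() < eps:
            a = random.choice(ACTIONS)               # explore
        else:
            a = max(ACTIONS, key=lambda a: Q[s][a])  # exploit
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy)   # the learned greedy policy advances at every level
```

    Although "review" pays a small immediate reward, the discounted return of advancing toward mastery is larger, and the agent discovers this through exploration.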

  13. Development of applied optical techniques

    International Nuclear Information System (INIS)

    The objective of this project is to improve laser application techniques in the nuclear industry. A small, light, and portable laser-induced fluorometer was developed. It was designed to compensate for inner filter and quenching effects by on-line data processing during the analysis of uranium in aqueous solution. A computer interface improves the accuracy and data processing capabilities of the instrument. Its detection limit is as low as 0.1 ppb of uranium. It is ready to use in routine chemical analysis. Feasible applications, such as uranium level monitoring in discards from a reconversion plant or fuel fabrication plant, were seriously considered with minor modification of the instrument. It will be used to study trace analysis of rare-earth elements. The IRMPD of CHF3 was carried out and the effects of buffer gases such as Ar, N2 and SF6 were investigated. The IRMPD rate increased with increasing pressure of the reactant and buffer gases. The pressure effect of the reactant CHF3 below 0.1 Torr showed the opposite result. It was considered that competition between the quenching effect and the rotational hole-filling effect during intermolecular collisions plays a great role in this low-pressure region. The applications of holography in nuclear fuel cycle facilities were surveyed and analyzed. Also, experimental apparatuses such as an Ar ion laser, various kinds of holographic films and several optical components were prepared. (Author)

  14. A novel online adaptive time delay identification technique

    Science.gov (United States)

    Bayrak, Alper; Tatlicioglu, Enver

    2016-05-01

    Time delay is a phenomenon common in signal processing, communication, and control applications; what makes its identification attractive is that delays are encountered in so many systems. A literature search on time-delay identification highlights the fact that most studies have focused on numerical solutions. In this study, a novel online adaptive time-delay identification technique is proposed. The technique is based on an adaptive update law derived through a minimum-maximum strategy, applied here to time-delay identification for the first time. In the design of the adaptive identification law, Lyapunov-based stability analysis techniques are utilised. Several numerical simulations were conducted with Matlab/Simulink to evaluate the performance of the proposed technique. It is numerically demonstrated that the proposed technique works efficiently in identifying both constant and disturbed time delays, and is also robust to measurement noise.

  15. Computational optimization techniques applied to microgrids planning

    DEFF Research Database (Denmark)

    Gamarra, Carlos; Guerrero, Josep M.

    2015-01-01

    ), their planning process must be addressed to economic feasibility, as a long-term stability guarantee. Planning a microgrid is a complex process due to existing alternatives, goals, constraints and uncertainties. Usually planning goals conflict with each other and, as a consequence, different optimization problems...... appear along the planning process. In this context, technical literature about optimization techniques applied to microgrid planning has been reviewed and the guidelines for innovative planning methodologies focused on economic feasibility can be defined. Finally, some trending techniques and new...... microgrid planning approaches are pointed out....

  16. Model Driven Mutation Applied to Adaptative Systems Testing

    CERN Document Server

    Bartel, Alexandre; Munoz, Freddy; Klein, Jacques; Mouelhi, Tejeddine; Traon, Yves Le

    2012-01-01

    Dynamically Adaptive Systems modify their behavior and structure in response to changes in their surrounding environment and according to an adaptation logic. Critical systems increasingly incorporate dynamic adaptation capabilities; examples include disaster relief and space exploration systems. In this paper, we focus on mutation testing of the adaptation logic. We propose a fault model for adaptation logics that classifies faults into environmental completeness and adaptation correctness. Since there are several adaptation logic languages relying on the same underlying concepts, the fault model is expressed independently from specific adaptation languages. Benefiting from model-driven engineering technology, we express these common concepts in a metamodel and define the operational semantics of mutation operators at this level. Mutation is applied on model elements and model transformations are used to propagate these changes to a given adaptation policy in the chosen formalism. Preliminary resul...

  17. Adaptive interference techniques for mobile antennas

    Science.gov (United States)

    Griffiths, Lloyd J.; Satorius, E.

    1988-05-01

    The results of a study performed to investigate effective, low cost adaptive signal processing techniques for suppressing mutual satellite interference that can arise in a mobile satellite (MSAT) communication system are discussed. The study focused on the use of adaptive sidelobe cancelling as a method to overcome undesired interference caused by a multiplicity of satellite transmissions within the field of view of the ground station. Results are presented which show that the conventional sidelobe canceller produces undesired reduction of the useful signal. This effect is due to the presence of the useful component in the reference antenna element. An alternative structure, the generalized sidelobe canceller (GSC), has been proposed to overcome this difficulty. A preliminary investigation of possible implementations of the GSC was conducted. It was found that at most 8 bits would be required to implement the GSC processor under conditions in which the desired signal-to-interference ratio is 25 dB.
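
    The generalized sidelobe canceller can be sketched for a two-element array in a narrowband complex-baseband model. The sketch below uses illustrative parameter values, not the study's MSAT configuration, and shows the key structural point: the blocking branch x1 - x2 cancels the broadside desired signal, so the LMS loop can only remove interference, avoiding the desired-signal reduction of the conventional canceller.

```python
import cmath
import random

# Desired signal arrives from broadside (equal phase at both elements);
# the interferer arrives off-axis with inter-element phase shift phi.
random.seed(1)
phi = 0.6 * cmath.pi
rot = cmath.exp(1j * phi)
mu, w = 0.002, 0j
res = []
for n in range(20000):
    s = random.choice((1, -1)) + 0j          # desired unit-power symbol
    j = 2.0 * random.choice((1, -1))         # strong interferer
    x1, x2 = s + j, s + j * rot              # element signals
    d = 0.5 * (x1 + x2)                      # fixed beamformer (passes s)
    b = x1 - x2                              # blocking branch (s cancels)
    y = d - w * b                            # GSC output
    w += mu * y * b.conjugate()              # complex LMS update
    res.append(abs(y - s) ** 2)              # residual interference power
print(sum(res[-1000:]) / 1000)               # small after convergence
```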

  18. Adaptive Response Surface Techniques in Reliability Estimation

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard

    1993-01-01

    Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first order discontinuities are considered. A gradient-free adaptive response surface algorithm is developed. The algorithm applies second order polynomial surfaces...... determined from central composite designs. In a two-phase algorithm the second order surface is adjusted to the domain of the most likely failure point and both FORM and SORM estimates are obtained. The algorithm is implemented as a safeguard algorithm so non-converged solutions are avoided. Furthermore, a...

  19. Applied techniques for mining natural proteasome inhibitors.

    Science.gov (United States)

    Stein, Martin L; Groll, Michael

    2014-01-01

    In eukaryotic cells, the ubiquitin-proteasome-system (UPS) is responsible for the non-lysosomal degradation of proteins and plays a pivotal role in such vital processes as protein homeostasis, antigen processing or cell proliferation. Therefore, it is an attractive drug target with various applications in cancer and immunosuppressive therapies. Because the pathway is evolutionarily well conserved, many pathogenic bacteria have developed small molecules which modulate the activity of their hosts' UPS components. Such natural products are, due to their stepwise optimization over the millennia, highly potent in terms of their binding mechanisms, their bioavailability and selectivity. Generally, this makes bioactive natural products an ideal starting point for the development of novel drugs. Since four out of the ten best-selling drugs are natural product derivatives, research in this field is still of unfathomable value for the pharmaceutical industry. The currently most prominent example of the successful exploitation of a natural compound in the UPS field is carfilzomib (Kyprolis®), which represents the second FDA-approved drug targeting the proteasome after the admission of the blockbuster bortezomib (Velcade®) in 2003. At the other end of the spectrum, ONX 0914, which is derived from the same natural product as carfilzomib, has been shown to selectively inhibit the immune-response-related branch of the pathway. To date, there exists huge potential for UPS inhibitors with regard to many diseases. Both approved drugs against the proteasome show severe side effects, adaptive resistances and limited applicability, thus the development of novel compounds with enhanced properties is a main objective of active research. In this review, we describe the techniques which can be utilized for the discovery of novel natural inhibitors, which in particular block the 20S proteasomal activity. In addition, we will illustrate the successful implementation of a recently

  20. Applying Mixed Methods Techniques in Strategic Planning

    Science.gov (United States)

    Voorhees, Richard A.

    2008-01-01

    In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed methods, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…

  1. Digital Speckle Technique Applied to Flow Visualization

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Digital speckle technique uses a laser, a CCD camera, and digital processing to generate interference fringes at the television framing rate. Its most obvious advantage is that neither darkroom facilities nor photographic wet chemical processing is required. In addition, it can be used in harsh engineering environments. This paper discusses the strengths and weaknesses of three digital speckle methodologies. (1) Digital speckle pattern interferometry (DSPI) uses an optical polarization phase shifter for visualization and measurement of the density field in a flow field. (2) Digital shearing speckle interferometry (DSSI) utilizes speckle-shearing interferometry in addition to optical polarization phase shifting. (3) Digital speckle photography (DSP) with computer reconstruction. The discussion describes the concepts, the principles and the experimental arrangements with some experimental results. The investigation shows that these three digital speckle techniques provide an excellent method for visualizing flow fields and for measuring density distributions in fluid mechanics and thermal flows.

  2. Applying Cooperative Techniques in Teaching Problem Solving

    Directory of Open Access Journals (Sweden)

    Krisztina Barczi

    2013-12-01

    Full Text Available Teaching how to solve problems – from solving simple equations to solving difficult competition tasks – has been one of the greatest challenges for mathematics education for many years. Trying to find an effective method is an important educational task. Among others, the question arises as to whether a method in which students help each other might be useful. The present article describes part of an experiment that was designed to determine the effects of cooperative teaching techniques on the development of problem-solving skills.

  3. Basic principles of applied nuclear techniques

    International Nuclear Information System (INIS)

    The technological applications of radioactive isotopes and radiation in South Africa have grown steadily since the first consignment of man-made radioisotopes reached this country in 1948. By the end of 1975 there were 412 authorised non-medical organisations (327 industries) using hundreds of sealed sources as well as their fair share of the thousands of radioisotope consignments, annually either imported or produced locally (mainly for medical purposes). Consequently, it is necessary for South African technologists to understand the principles of radioactivity in order to appreciate the industrial applications of nuclear techniques

  4. Nuclear analytical techniques applied to forensic chemistry

    International Nuclear Information System (INIS)

    Gun shot residues produced by firing guns are mainly composed of visible particles. The individual characterization of these particles allows distinguishing those containing heavy metals, originating from gun shot residues, from those having a different origin or history. In this work, the results obtained from the study of gun shot residue particles collected from hands are presented. The aim of the analysis is to establish whether a person has fired a gun or has been in contact with one after a shot was produced. As reference samples, particles collected from the hands of persons engaged in different activities were studied for comparison. The complete study was based on the application of nuclear analytical techniques such as Scanning Electron Microscopy, Energy Dispersive X-Ray Electron Probe Microanalysis and Graphite Furnace Atomic Absorption Spectrometry. The assays can be completed within a time compatible with forensic requirements. (author)

  5. Applying Adapted Big Five Teamwork Theory to Agile Software Development

    OpenAIRE

    Strode, Diane

    2016-01-01

    Teamwork is a central tenet of agile software development and various teamwork theories partially explain teamwork in that context. Big Five teamwork theory is one of the most influential teamwork theories, but prior research shows that the team leadership concept in this theory is not applicable to agile software development. This paper applies an adapted form of Big Five teamwork theory to cases of agile software development. Three independent cases were drawn from a single organisation....

  6. Study on reverse-quadtree adaptive grid technique

    Directory of Open Access Journals (Sweden)

    Huang Haiming

    2012-01-01

    Full Text Available The fast multipole method is universally adopted for solving the convection equation in the vortex method. In this paper, a reverse-quadtree adaptive grid technique is proposed in order to improve the quadtree adaptive grid technique in the fast multipole method. Taking flow past a cylinder as an example, the results indicate the reverse-quadtree scheme can save more calculation time than the quadtree scheme when the particle population is large enough.
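
    A minimal forward-quadtree refinement sketch shows the data structure both schemes build on: cells cut by a flow feature, here a circle standing in for the cylinder surface, are recursively subdivided. The refinement criterion, domain, and depth are illustrative; the reverse-quadtree traversal itself is not reproduced.

```python
import math

def crosses_circle(x, y, size, r=1.0):
    """True if the circle of radius r about the origin cuts the cell."""
    nx = max(x, min(0.0, x + size))      # cell point nearest the origin
    ny = max(y, min(0.0, y + size))
    dmin = math.hypot(nx, ny)
    dmax = max(math.hypot(cx, cy)
               for cx in (x, x + size) for cy in (y, y + size))
    return dmin <= r <= dmax

def refine(x, y, size, depth, max_depth, leaves):
    if depth < max_depth and crosses_circle(x, y, size):
        h = size / 2                     # split the cell into 4 children
        for dx in (0.0, h):
            for dy in (0.0, h):
                refine(x + dx, y + dy, h, depth + 1, max_depth, leaves)
    else:
        leaves.append((x, y, size))

leaves = []
refine(-2.0, -2.0, 4.0, 0, 5, leaves)    # domain [-2, 2]^2, 5 levels
print(len(leaves))   # far fewer cells than a uniform 32x32 (1024-cell) grid
```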

  7. Adaptive Robotic Systems Design in University of Applied Sciences

    Directory of Open Access Journals (Sweden)

    Gunsing Jos

    2016-01-01

    Full Text Available In the industry for highly specialized machine building (small series with high variety and high complexity) and in healthcare, a demand for adaptive robotics is rapidly coming up. Technically skilled people are not always available in sufficient numbers. A lot of know-how with respect to the required technologies is available, but successful adaptive robotic system designs are still rare. In our research at the university of applied sciences we incorporate newly available technologies in our education courses by way of research projects; in these projects students investigate the application possibilities of new technologies together with companies and teachers. Thus we are able to transfer knowledge to the students, including an innovation-oriented attitude and skills. In recent years we developed several industrial bin-picking applications for logistics and machining factories with different types of 3D vision. Force-feedback gripping has also been developed, including slip sensing. Especially for healthcare robotics we developed a so-called twisted-wire actuator, which is very compact, in combination with an underactuated gripper manufactured in one piece from polyurethane. We work both on modeling and testing the functions of these designs, and we also work on complete demonstrator systems. Since the number of disciplines involved in complex product and machine design increases rapidly, we pay a lot of attention to systems engineering methods. Apart from the classical engineering disciplines like mechanical, electrical, software and mechatronics engineering, especially for adaptive robotics more and more disciplines like industrial product design, communication … multimedia design and of course physics and even art are to be involved, depending on the specific application to be designed.
Design tools like V-model, agile/scrum and design-approaches to obtain the best set of requirements are being implemented in the engineering studies from

  8. Radar Range Sidelobe Reduction Using Adaptive Pulse Compression Technique

    Science.gov (United States)

    Li, Lihua; Coon, Michael; McLinden, Matthew

    2013-01-01

    Pulse compression has been widely used in radars so that low-power, long RF pulses can be transmitted, rather than a high-power short pulse. Pulse compression radars offer a number of advantages over high-power short-pulse radars, such as no need for high-power RF circuitry, no need for high-voltage electronics, compact size and light weight, better range resolution, and better reliability. However, the range sidelobes associated with pulse compression have prevented the use of this technique on spaceborne radars, since surface returns detected by range sidelobes may mask the returns from nearby weak cloud or precipitation particles. Research on adaptive pulse compression was carried out utilizing a field-programmable gate array (FPGA) waveform generation board and a radar transceiver simulator. The results have shown significant improvements in pulse compression sidelobe performance. Microwave and millimeter-wave radars present many technological challenges for Earth and planetary science applications. Traditional tube-based radars use high-voltage power supplies/modulators and high-power RF transmitters; therefore, these radars usually have large size, heavy weight, and reliability issues for space and airborne platforms. Pulse compression technology has provided a path toward meeting many of these radar challenges. Recent advances in digital waveform generation, digital receivers, and solid-state power amplifiers have opened a new era for applying pulse compression to the development of compact and high-performance airborne and spaceborne remote sensing radars. The primary objective of this innovative effort is to develop and test a new pulse compression technique to achieve ultra-low range sidelobes so that it can be applied to spaceborne, airborne, and ground-based remote sensing radars to meet future science requirements. By using digital waveform generation, digital receiver, and solid-state power amplifier technologies, this improved pulse compression
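
    The sidelobe problem itself can be reproduced in a few lines: compressing a linear-FM chirp with its matched filter yields roughly -13 dB peak range sidelobes, while a Hamming-weighted (mismatched) reference lowers them at the cost of a wider main lobe. This is classical window weighting shown for comparison only, not the adaptive technique developed in this work; all waveform parameters are illustrative.

```python
import numpy as np

fs, T, B = 20e6, 10e-6, 2e6                  # sample rate, pulse width, bandwidth
t = np.arange(int(fs * T)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t**2)  # linear-FM pulse
echo = np.concatenate([np.zeros(300), chirp, np.zeros(300)])

def compress(reference):
    # np.correlate conjugates its second argument: a matched filter
    out = np.abs(np.correlate(echo, reference, mode="same"))
    return out / out.max()

matched = compress(chirp)                    # matched-filter compression
weighted = compress(chirp * np.hamming(len(chirp)))  # mismatched, weighted

def peak_sidelobe_db(y, guard=25):
    k = int(np.argmax(y))                    # exclude the main lobe
    side = np.concatenate([y[:k - guard], y[k + guard:]])
    return 20 * np.log10(side.max())

print(peak_sidelobe_db(matched), peak_sidelobe_db(weighted))
```

    The weighted reference trades a slight main-lobe broadening and signal-to-noise loss for substantially lower sidelobes, which is exactly the trade-off adaptive pulse compression techniques aim to improve upon.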

  9. Volcanic Monitoring Techniques Applied to Controlled Fragmentation Experiments

    Science.gov (United States)

    Kueppers, U.; Alatorre-Ibarguengoitia, M. A.; Hort, M. K.; Kremers, S.; Meier, K.; Scharff, L.; Scheu, B.; Taddeucci, J.; Dingwell, D. B.

    2010-12-01

    ejection and that the evaluated results were mostly in good agreement. We will discuss the technical difficulties encountered, e.g. the temporal synchronisation of the different techniques. Furthermore, the internal data management of the DR at present prevents continuous recording, and only a limited number of snapshots is stored. Nonetheless, in at least three experiments the onset of particle ejection was measured by all of the different techniques and gave coherent results of up to 100 m/s. This is a very encouraging result and of paramount importance, as it proves the applicability of these independent methods to volcano monitoring. Each method by itself may enhance our understanding of the pressurisation state of a volcano, an essential factor in ballistic hazard evaluation and eruption energy estimation. Technical adaptations of the DR will overcome the encountered problems and allow a more refined data analysis during the next campaign.

  10. Applying DEA Technique to Library Evaluation in Academic Research Libraries.

    Science.gov (United States)

    Shim, Wonsik

    2003-01-01

    This study applied an analytical technique called Data Envelopment Analysis (DEA) to calculate the relative technical efficiency of 95 academic research libraries, all members of the Association of Research Libraries. DEA, with the proper model of library inputs and outputs, can reveal best practices in the peer groups, as well as the technical…

  11. An adaptive envelope spectrum technique for bearing fault detection

    International Nuclear Information System (INIS)

    In this work, an adaptive envelope spectrum (AES) technique is proposed for bearing fault detection, especially for analyzing signals with transient events. The proposed AES technique first decomposes the signal using empirical mode decomposition (EMD) to formulate the representative intrinsic mode functions (IMFs), and then a novel IMF reconstruction method is proposed based on a correlation analysis of the envelope spectra. The reconstructed signal is post-processed using an adaptive filter to enhance impulsive signatures, where the filter length is optimized by the proposed sparsity analysis technique. Bearing health conditions are diagnosed by examining bearing characteristic frequency information in the envelope power spectrum. The effectiveness of the proposed fault detection technique is verified by a series of experimental tests corresponding to different bearing conditions. (paper)
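
    The final diagnostic stage described above, examining the envelope power spectrum, can be sketched as follows. This is a minimal illustration of envelope analysis only (the EMD decomposition, IMF reconstruction, and adaptive filtering stages are omitted), assuming SciPy is available; the synthetic signal and its 100 Hz "fault" rate are invented for the demonstration.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_power_spectrum(x, fs):
    """Envelope power spectrum of a vibration signal.

    The magnitude of the analytic signal gives the envelope; the FFT of
    the envelope reveals periodic impact rates such as bearing
    characteristic frequencies.
    """
    env = np.abs(hilbert(x))           # demodulate: extract the envelope
    env -= env.mean()                  # remove DC so only modulation remains
    spec = np.abs(np.fft.rfft(env)) ** 2
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec

# Synthetic check: a 100 Hz impact train amplitude-modulating a 2 kHz carrier
fs = 20000
t = np.arange(fs) / fs                 # 1 second of samples
x = (1 + np.cos(2 * np.pi * 100 * t)) * np.sin(2 * np.pi * 2000 * t)
freqs, spec = envelope_power_spectrum(x, fs)
peak = freqs[np.argmax(spec)]          # dominant envelope frequency, ~100 Hz
```

    A real bearing signature would show this peak at the characteristic defect frequency and its harmonics rather than at a clean single tone.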

  12. OFFLINE HANDWRITTEN SIGNATURE IDENTIFICATION USING ADAPTIVE WINDOW POSITIONING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Ghazali Sulong

    2014-10-01

    Full Text Available To address this challenge, we propose the use of an Adaptive Window Positioning technique which focuses not just on the meaning of the handwritten signature but also on the individuality of the writer. This innovative technique divides the handwritten signature into 13 small windows of size nxn (13x13). This size should be large enough to contain ample information about the style of the author and small enough to ensure good identification performance. The process was tested with a GPDS dataset containing 4870 signature samples from 90 different writers by comparing the robust features of the test signature with those of the user's signature using an appropriate classifier. Experimental results reveal that the adaptive window positioning technique is an efficient and reliable method for accurate signature feature extraction and the identification of offline handwritten signatures. The contribution of this technique can be used to detect signatures signed under emotional duress.

  13. The resilience approach to climate adaptation applied for flood risk

    NARCIS (Netherlands)

    Gersonius, B.

    2012-01-01

    This dissertation presents a potential way forward for adaptation to climate change, termed the resilience approach. This approach takes a dynamic perspective on adaptive processes and the effects of these processes at/across different spatio-temporal scales. Experience is provided with four methods

  14. Sensor Data Qualification Technique Applied to Gas Turbine Engines

    Science.gov (United States)

    Csank, Jeffrey T.; Simon, Donald L.

    2013-01-01

    This paper applies a previously developed sensor data qualification technique to a commercial aircraft engine simulation known as the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k). The sensor data qualification technique is designed to detect, isolate, and accommodate faulty sensor measurements. It features sensor networks, which group various sensors together and relies on an empirically derived analytical model to relate the sensor measurements. Relationships between all member sensors of the network are analyzed to detect and isolate any faulty sensor within the network.
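
    The detect-and-isolate step described above can be sketched with a leave-one-out consistency check. The shared-state gain model below is a stand-in assumption, not the paper's empirically derived network model, and the sensor values are invented for the illustration.

```python
import numpy as np

def isolate_faulty_sensor(meas, gains, tol=3.0):
    """Leave-one-out consistency check across a sensor network.

    Stand-in model (an assumption, not the paper's empirical model):
    every sensor i observes a shared state x as meas[i] = gains[i] * x.
    For each sensor, x is estimated from the *other* sensors by least
    squares; a prediction residual above the absolute tolerance tol
    flags that sensor as faulty.
    """
    meas = np.asarray(meas, dtype=float)
    gains = np.asarray(gains, dtype=float)
    n = len(meas)
    resid = np.empty(n)
    for i in range(n):
        others = np.arange(n) != i
        x_hat = np.dot(gains[others], meas[others]) / np.dot(gains[others], gains[others])
        resid[i] = abs(meas[i] - gains[i] * x_hat)  # self-consistency error
    worst = int(np.argmax(resid))
    return worst if resid[worst] > tol else None

# Three sensors observing the same state x = 10 with gains 1, 2 and 0.5;
# sensor 1 is biased by +5 and should be isolated.
faulty = isolate_faulty_sensor([10.0, 25.0, 5.0], [1.0, 2.0, 0.5])
```

    Accommodation would then replace the isolated sensor's reading with the model prediction rather than simply discarding it.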

  15. Application of Adaptive Threading Technique to Hot Strip Mill

    Institute of Scientific and Technical Information of China (English)

    DING Jing-guo; HU Xian-lei; JIAO Jing-min; SHE Guang-fu; LIU Xiang-hua

    2008-01-01

    Thickness deviation in hot strip rolling needs to be strictly controlled by the computer system. An adaptive threading technique was researched, in which the measured data from threaded stands were used to predict thickness and material hardness errors, to modify the setup for the remaining unthreaded stands. After the adaptive threading model was used online on the hot strip mill of the Panzhihua Iron and Steel Group Co Ltd, the thickness deviation decreased markedly, and the hit rate of thickness control increased for different steel grades.

  16. An Adaptive Channel Estimation Technique in MIMO OFDM Systems

    Institute of Scientific and Technical Information of China (English)

    Pei-Sheng Pan; Bao-Yu Zheng

    2008-01-01

    In this paper, an adaptive channel estimation technique for MIMO OFDM is proposed. A set of pilot tones is first placed in each OFDM block; the channel frequency response at these pilot tones is then adaptively estimated by recursive least squares (RLS) directly in the frequency domain rather than the time domain. After the channel frequency response at the pilot tones has been estimated, a new DFT-based interpolation method, different from the traditional linear interpolation between adjacent pilot tones, is proposed to obtain the channel frequency response at the data tones. Simulation results show good performance of the technique.
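
    The idea of DFT-based interpolation from pilot tones to data tones can be sketched as below. This is a textbook-style illustration, not the paper's exact method: it assumes the pilots are equispaced with the first pilot at tone 0, and that the channel impulse response is shorter than the number of pilots, so the interpolation is exact in the noiseless case.

```python
import numpy as np

def dft_interpolate(H_pilot, n_fft):
    """Interpolate pilot-tone channel estimates to all tones via the DFT.

    With Np equispaced pilots (spacing n_fft/Np) and a channel impulse
    response shorter than Np taps, the IFFT of the pilot estimates
    recovers the impulse response; zero-padding and an n_fft-point FFT
    then yield the frequency response on every data tone.
    """
    h = np.fft.ifft(H_pilot)            # time-domain CIR estimate (Np taps)
    return np.fft.fft(h, n_fft)         # zero-padded FFT -> full response

# Check against a known 3-tap channel on a 64-tone OFDM symbol with 8 pilots
n_fft, n_pilot = 64, 8
h_true = np.array([1.0, 0.5 - 0.2j, 0.25])
H_true = np.fft.fft(h_true, n_fft)
pilots = H_true[:: n_fft // n_pilot]    # noiseless pilot estimates
H_hat = dft_interpolate(pilots, n_fft)
err = np.max(np.abs(H_hat - H_true))    # ~0 up to floating-point error
```

    In a full receiver, `pilots` would come from the RLS estimator rather than being read off the true channel.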

  17. Technique applied in electrical power distribution for Satellite Launch Vehicle

    Directory of Open Access Journals (Sweden)

    João Maurício Rosário

    2010-09-01

    Full Text Available The Satellite Launch Vehicle electrical network, which is currently being developed in Brazil, is sub-divided for analysis into the following parts: Service Electrical Network, Controlling Electrical Network, Safety Electrical Network and Telemetry Electrical Network. During the pre-launching and launching phases, these electrical networks are associated electrically and mechanically with the structure of the vehicle. In order to succeed in the integration of these electrical networks it is necessary to employ techniques of electrical power distribution that are proper to Launch Vehicle systems. This work presents the most important techniques to be considered in the characterization of the electrical power supply applied to Launch Vehicle systems. Such techniques are primarily designed to allow the electrical networks, when subjected to a single-phase fault to ground, to maintain the power supply to the loads.

  18. An Adaptive Hybrid Multiprocessor technique for bioinformatics sequence alignment

    KAUST Repository

    Bonny, Talal

    2012-07-28

    Sequence alignment algorithms such as the Smith-Waterman algorithm are among the most important applications in the development of bioinformatics. Sequence alignment algorithms must process large amounts of data, which may take a long time. Here, we introduce our Adaptive Hybrid Multiprocessor technique to accelerate the implementation of the Smith-Waterman algorithm. Our technique utilizes both the graphics processing unit (GPU) and the central processing unit (CPU). It adapts the implementation according to the number of CPUs given as input by efficiently distributing the workload between the processing units. Using existing resources (GPU and CPU) in an efficient way is a novel approach. The peak performance achieved for the platforms GPU + CPU, GPU + 2CPUs, and GPU + 3CPUs is 10.4 GCUPS, 13.7 GCUPS, and 18.6 GCUPS, respectively (with a query length of 511 amino acids). © 2010 IEEE.
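
    As a reference for what the accelerated implementation computes, a plain CPU sketch of the Smith-Waterman local-alignment score is shown below; the scoring values (match +2, mismatch -1, linear gap -1) are illustrative choices, and the hybrid technique's GPU/CPU workload split is not modeled here.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score (reference CPU version).

    Fills the DP matrix H where H[i][j] is the best local-alignment
    score ending at a[i-1], b[j-1]; the overall score is the maximum
    entry of H.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                     # local: score never goes negative
                          H[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best

# "ACGT" occurs exactly inside the longer string: 4 matches x 2 = 8
score = smith_waterman("TTTACGTTT", "ACGT")
```

    The GCUPS figures quoted above count how many of these H-cell updates per second the hybrid GPU+CPU implementation sustains.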

  19. Adaptive spectral identification techniques in presence of undetected non linearities

    CERN Document Server

    Cella, G; Guidi, G M

    2002-01-01

    The standard procedure for the detection of gravitational-wave signals from coalescing binaries is based on Wiener filtering with an appropriate bank of template filters. This is the optimal procedure under the hypothesis of additive Gaussian and stationary noise. We study the possibility of improving the detection efficiency with a class of adaptive spectral identification techniques, analyzing their effect in the presence of non-stationarities and undetected non-linearities in the noise

  20. Applying Stylometric Analysis Techniques to Counter Anonymity in Cyberspace

    Directory of Open Access Journals (Sweden)

    Jianwen Sun

    2012-02-01

    Full Text Available Due to the ubiquitous nature of cyberspace and the anonymity abuses it enables, criminal identity tracing is difficult in cybercrime investigation. Writeprint identification offers a valuable tool to counter anonymity by applying stylometric analysis techniques to help identify individuals based on textual traces. In this study, a framework for online writeprint identification is proposed. Variable-length character n-grams are used to represent the author's writing style. The technique of IG-seeded GA-based feature selection for Ensemble (IGAE) is also developed to build an identification model based on individual author-level features. Several specific components for dealing with the individual feature set are integrated to improve the performance. The proposed feature and technique are evaluated on a real-world data set encompassing reviews posted by 50 Amazon customers. The experimental results show the effectiveness of the proposed framework, with accuracy over 94% for 20 authors and over 80% for 50. Compared with the baseline technique (Support Vector Machine), higher performance is achieved by using IGAE, resulting in a 2% and 8% improvement over SVM for 20 and 50 authors respectively. Moreover, IGAE has been shown to be more scalable, in terms of the number of authors, than author-group-level methods.
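
    The variable-length character n-gram representation can be sketched as follows. This is a minimal feature-extraction illustration only: the plain frequency cut-off stands in for the paper's IG-seeded GA feature selection, and the sample text is invented.

```python
from collections import Counter

def char_ngram_profile(text, n_min=2, n_max=4, top_k=50):
    """Variable-length character n-gram writeprint features.

    Counts all n-grams with n_min <= n <= n_max and keeps the top_k
    most frequent as a crude style profile; a feature-selection stage
    such as IGAE would replace this frequency cut-off.
    """
    counts = Counter()
    for n in range(n_min, n_max + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return dict(counts.most_common(top_k))

profile = char_ngram_profile("the theme of the thesis", top_k=5)
```

    Profiles of unknown texts would then be compared against per-author profiles by a classifier such as the SVM baseline or the IGAE ensemble.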

  1. Three-dimensional integrated CAE system applying computer graphic technique

    International Nuclear Information System (INIS)

    A three-dimensional CAE system for nuclear power plant design is presented. This system utilizes high-speed computer graphic techniques for the plant design review, and an integrated engineering database for handling the large amount of nuclear power plant engineering data in a unified data format. Applying this system makes it possible to construct a nuclear power plant using only computer data from the basic design phase to the manufacturing phase, and it increases the productivity and reliability of the nuclear power plants. (author)

  2. Solution adaptive grids applied to low Reynolds number flow

    Science.gov (United States)

    de With, G.; Holdø, A. E.; Huld, T. A.

    2003-08-01

    A numerical study has been undertaken to investigate the use of a solution adaptive grid for flow around a cylinder in the laminar flow regime. The main purpose of this work is twofold. The first aim is to investigate the suitability of a grid adaptation algorithm and the reduction in mesh size that can be obtained. Secondly, the uniform asymmetric flow structures are ideal for validating the mesh structures resulting from mesh refinement and, consequently, the selected refinement criteria. The refinement variable used in this work is a product of the rate of strain and the mesh cell size, and contains two variables Cm and Cstr which determine the order of each term. By altering the order of either one of these terms the refinement behaviour can be modified.
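
    The refinement criterion described above can be sketched as below. The exponent roles (Cstr weighting the strain rate, Cm weighting the cell size) and the threshold rule are assumptions for illustration; the abstract only states the product form.

```python
def refinement_indicator(strain_rate, cell_size, c_str=1.0, c_m=1.0):
    """Refinement variable of the form (strain rate)^Cstr * (cell size)^Cm.

    Raising c_m biases refinement toward large cells; raising c_str
    biases it toward strongly sheared regions.
    """
    return strain_rate ** c_str * cell_size ** c_m

def cells_to_refine(strains, sizes, threshold, **kw):
    """Indices of cells whose indicator exceeds the refinement threshold."""
    return [i for i, (s, h) in enumerate(zip(strains, sizes))
            if refinement_indicator(s, h, **kw) > threshold]

# Hypothetical cells: high strain / tiny cell, low strain / big cell,
# and moderate strain on a moderate cell (only the last exceeds 0.2)
flagged = cells_to_refine([10.0, 0.1, 5.0], [0.01, 0.5, 0.1], threshold=0.2)
```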

  3. An Adaptive Watermarking Technique for Copyright Protection of Digital Images

    Energy Technology Data Exchange (ETDEWEB)

    Park, K.S.; Lee, B.Y.; Park, S.H. [Yonsei University, Seoul (Korea); Chung, T.Y. [Kangnung National University, Kangnung (Korea)

    2002-03-01

    This paper proposes a new watermark embedding and extraction technique which extends the direct-sequence spread spectrum technique. The proposed technique approximates the complexity of the image and of each block in the spatial domain using Laplacian filtering, and the watermark is adaptively embedded in the mid-frequency DCT components. Local parity bits are attached to higher-frequency DCT components and are used to detect extraction errors and correct them. In the extraction process, the proposed method boosts the higher-frequency components of the image and extracts the watermark by demodulation; this information is verified and adjusted by the parity bits. Experimental results show that the watermark is invisible and robust to several external attacks. (author). 7 refs., 5 figs., 2 tabs.

  4. Hyperspectral-imaging-based techniques applied to wheat kernels characterization

    Science.gov (United States)

    Serranti, Silvia; Cesare, Daniela; Bonifazi, Giuseppe

    2012-05-01

    Single kernels of durum wheat have been analyzed by hyperspectral imaging (HSI). Such an approach is based on the utilization of an integrated hardware and software architecture able to digitally capture and handle spectra as an image sequence, as they result along a pre-defined alignment on a properly energized sample surface. The study was addressed to investigate the possibility of applying HSI techniques for the classification of different types of wheat kernels: vitreous, yellow berry and fusarium-damaged. Reflectance spectra of selected wheat kernels of the three typologies were acquired by a laboratory device equipped with an HSI system working in the near-infrared range (1000-1700 nm). The hypercubes were analyzed applying principal component analysis (PCA) to reduce the high dimensionality of the data and to select some effective wavelengths. Partial least squares discriminant analysis (PLS-DA) was applied for classification of the three wheat typologies. The study demonstrated that good classification results were obtained not only considering the entire investigated wavelength range, but also selecting only four optimal wavelengths (1104, 1384, 1454 and 1650 nm) out of 121. The developed procedures based on HSI can be utilized for quality control purposes or for the definition of innovative sorting logics for wheat.

  5. Active Learning Techniques Applied to an Interdisciplinary Mineral Resources Course.

    Science.gov (United States)

    Aird, H. M.

    2015-12-01

    An interdisciplinary active learning course was introduced at the University of Puget Sound entitled 'Mineral Resources and the Environment'. Various formative assessment and active learning techniques that have been effective in other courses were adapted and implemented to improve student learning, increase retention and broaden knowledge and understanding of course material. This was an elective course targeted towards upper-level undergraduate geology and environmental majors. The course provided an introduction to the mineral resources industry, discussing geological, environmental, societal and economic aspects, legislation and the processes involved in exploration, extraction, processing, reclamation/remediation and recycling of products. Lectures and associated weekly labs were linked in subject matter; relevant readings from the recent scientific literature were assigned and discussed in the second lecture of the week. Peer-based learning was facilitated through weekly reading assignments with peer-led discussions and through group research projects, in addition to in-class exercises such as debates. Writing and research skills were developed through student groups designing, carrying out and reporting on their own semester-long research projects around the lasting effects of the historical Ruston Smelter on the biology and water systems of Tacoma. The writing of their mini grant proposals and final project reports was carried out in stages to allow for feedback before the deadline. Speakers from industry were invited to share their specialist knowledge as guest lecturers, and students were encouraged to interact with them, with a view to employment opportunities. Formative assessment techniques included jigsaw exercises, gallery walks, placemat surveys, think pair share and take-home point summaries. 
Summative assessment included discussion leadership, exams, homework, group projects, in-class exercises, field trips, and pre-discussion reading exercises.

  6. Computer Science Techniques Applied to Parallel Atomistic Simulation

    Science.gov (United States)

    Nakano, Aiichiro

    1998-03-01

    Recent developments in parallel processing technology and multiresolution numerical algorithms have established large-scale molecular dynamics (MD) simulations as a new research mode for studying materials phenomena such as fracture. However, this requires large system sizes and long simulated times. We have developed: i) Space-time multiresolution schemes; ii) fuzzy-clustering approach to hierarchical dynamics; iii) wavelet-based adaptive curvilinear-coordinate load balancing; iv) multilevel preconditioned conjugate gradient method; and v) spacefilling-curve-based data compression for parallel I/O. Using these techniques, million-atom parallel MD simulations are performed for the oxidation dynamics of nanocrystalline Al. The simulations take into account the effect of dynamic charge transfer between Al and O using the electronegativity equalization scheme. The resulting long-range Coulomb interaction is calculated efficiently with the fast multipole method. Results for temperature and charge distributions, residual stresses, bond lengths and bond angles, and diffusivities of Al and O will be presented. The oxidation of nanocrystalline Al is elucidated through immersive visualization in virtual environments. A unique dual-degree education program at Louisiana State University will also be discussed in which students can obtain a Ph.D. in Physics & Astronomy and a M.S. from the Department of Computer Science in five years. This program fosters interdisciplinary research activities for interfacing High Performance Computing and Communications with large-scale atomistic simulations of advanced materials. This work was supported by NSF (CAREER Program), ARO, PRF, and Louisiana LEQSF.

  7. Self-Adaptive Stepsize Search Applied to Optimal Structural Design

    Science.gov (United States)

    Nolle, L.; Bland, J. A.

    Structural engineering often involves the design of space frames that are required to resist predefined external forces without exhibiting plastic deformation. The weight of the structure, and hence the weight of its constituent members, has to be as low as possible for economic reasons without violating any of the load constraints. Design spaces are usually vast and the computational costs for analyzing a single design are usually high. Therefore, not every possible design can be evaluated for real-world problems. In this work, a standard structural design problem, the 25-bar problem, has been solved using self-adaptive stepsize search (SASS), a relatively new search heuristic. This algorithm has only one control parameter and therefore overcomes a drawback of modern search heuristics, i.e. the need to first find a set of optimum control parameter settings for the problem at hand. In this work, SASS outperforms simulated annealing, genetic algorithms, tabu search and ant colony optimization.
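
    The single-parameter idea behind SASS can be sketched on a toy continuous problem. This is a hedged illustration, not the published algorithm: the growth/shrink factors and the sphere objective (standing in for the 25-bar weight objective with its load constraints) are assumptions.

```python
import random

def sass_minimize(f, x0, iters=2000, seed=1):
    """Self-adaptive stepsize search sketch.

    The single control parameter (the stepsize) adapts on-line: it
    grows after a successful move and shrinks after a failure, so no
    problem-specific tuning schedule is needed. The factors 1.5 and
    0.9 are illustrative choices.
    """
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    step = 1.0
    for _ in range(iters):
        cand = [xi + step * rng.uniform(-1.0, 1.0) for xi in x]
        fc = f(cand)
        if fc < fx:          # success: accept the move, search more boldly
            x, fx = cand, fc
            step *= 1.5
        else:                # failure: search more cautiously
            step *= 0.9
    return x, fx

# Toy stand-in for weight minimization: a simple sphere function
sphere = lambda v: sum(vi * vi for vi in v)
x_best, f_best = sass_minimize(sphere, [5.0, -3.0, 2.0])
```

    The grow-on-success/shrink-on-failure rule makes the stepsize track the distance to the optimum, much like the classical 1/5-success rule in evolution strategies.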

  8. Image analysis technique applied to lock-exchange gravity currents

    Science.gov (United States)

    Nogueira, Helena I. S.; Adduce, Claudia; Alves, Elsa; Franca, Mário J.

    2013-04-01

    An image analysis technique is used to estimate the two-dimensional instantaneous density field of unsteady gravity currents produced by full-depth lock-release of saline water. An experiment reproducing a gravity current was performed in a 3.0 m long, 0.20 m wide and 0.30 m deep Perspex flume with horizontal smooth bed and recorded with a 25 Hz CCD video camera under controlled light conditions. Using dye concentration as a tracer, a calibration procedure was established for each pixel in the image relating the amount of dye uniformly distributed in the tank and the greyscale values in the corresponding images. The results are evaluated and corrected by applying the mass conservation principle within the experimental tank. The procedure is a simple way to assess the time-varying density distribution within the gravity current, allowing the investigation of gravity current dynamics and mixing processes.

  9. Adaptive Techniques for Clustered N-Body Cosmological Simulations

    CERN Document Server

    Menon, Harshitha; Zheng, Gengbin; Jetley, Pritish; Kale, Laxmikant; Quinn, Thomas; Governato, Fabio

    2014-01-01

    ChaNGa is an N-body cosmology simulation application implemented using Charm++. In this paper, we present the parallel design of ChaNGa and address many challenges arising due to the high dynamic ranges of clustered datasets. We focus on optimizations based on adaptive techniques for scaling to more than 128K cores. We demonstrate strong scaling on up to 512K cores of Blue Waters evolving 12 and 24 billion particles. We also show strong scaling of highly clustered datasets on up to 128K cores.

  10. A New Local Adaptive Thresholding Technique in Binarization

    CERN Document Server

    Singh, T Romen; Singh, O Imocha; Sinam, Tejmani; Singh, Kh Manglem

    2012-01-01

    Image binarization is the process of separating pixel values into two groups, white as background and black as foreground. Thresholding plays a major role in the binarization of images. Thresholding can be categorized into global thresholding and local thresholding. In images with a uniform contrast distribution of background and foreground, such as document images, global thresholding is more appropriate. In degraded document images, where considerable background noise or variation in contrast and illumination exists, there exist many pixels that cannot be easily classified as foreground or background. In such cases, binarization with local thresholding is more appropriate. This paper describes a locally adaptive thresholding technique that removes background by using the local mean and mean deviation. Normally the local mean computation time depends on the window size. Our technique uses the integral sum image as a prior processing step to calculate the local mean. It does not involve calculations of standard deviations as in other ...
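
    The integral-image trick for O(1) local means can be sketched as below. The simple decision rule used here (foreground where the pixel falls below k times the local mean) is an illustrative stand-in for the paper's mean/mean-deviation formula, and the synthetic image is invented.

```python
import numpy as np

def local_mean_threshold(img, w=15, k=0.95):
    """Local-mean binarization using an integral (summed-area) image.

    The integral image makes each window sum a four-lookup operation,
    so the cost per pixel is independent of the window size w.
    """
    img = np.asarray(img, dtype=np.float64)
    h, wd = img.shape
    # integral image with a zero top row/left column for clean indexing
    ii = np.zeros((h + 1, wd + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    r = w // 2
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(h):
        for x in range(wd):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(wd, x + r + 1)
            area = (y1 - y0) * (x1 - x0)
            s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            mean = s / area
            out[y, x] = 255 if img[y, x] >= k * mean else 0  # white = background
    return out

# Dark "text" stroke on a bright background with an illumination gradient
img = np.fromfunction(lambda y, x: 150.0 + x, (40, 40))
img[18:22, 5:35] -= 80                     # darker stroke
binary = local_mean_threshold(img, w=15)
```

    Because the threshold follows the local mean, the gradient in illumination does not flip background pixels to foreground, which is exactly where a single global threshold fails.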

  11. Adaptive Communication Techniques for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Peng Du

    2013-03-01

    Full Text Available The vision for the Internet of Things (IoT) demands that material objects acquire communication and computation capabilities and become able to automatically identify themselves through standard protocols and open systems, using the Internet as their foundation. Yet, several challenges still must be addressed for this vision to become a reality. A core ingredient in such development is the ability of heterogeneous devices to communicate adaptively so as to make the best of limited spectrum availability and cope with competition, which is inevitable as more and more objects connect to the system. This survey provides an overview of current developments in this area, placing emphasis on wireless sensor networks that can provide IoT capabilities for material objects and techniques that can be used in the context of systems employing low-power versions of the Internet Protocol (IP) stack. The survey introduces a conceptual model that facilitates the identification of opportunities for adaptation in each layer of the network stack. After a detailed discussion of specific approaches applicable to particular layers, we consider how sharing information across layers can facilitate further adaptation. We conclude with a discussion of future research directions.

  12. Applying Utility Functions to Adaptation Planning for Home Automation Applications

    Science.gov (United States)

    Bratskas, Pyrros; Paspallis, Nearchos; Kakousis, Konstantinos; Papadopoulos, George A.

    A pervasive computing environment typically comprises multiple embedded devices that may interact together and with mobile users. These users are part of the environment, and they experience it through a variety of devices embedded in the environment. This perception involves technologies which may be heterogeneous, pervasive, and dynamic. Due to the highly dynamic properties of such environments, the software systems running on them have to face problems such as user mobility, service failures, or resource and goal changes which may happen in an unpredictable manner. To cope with these problems, such systems must be autonomous and self-managed. In this chapter we deal with a special kind of a ubiquitous environment, a smart home environment, and introduce a user-preference-based model for adaptation planning. The model, which dynamically forms a set of configuration plans for resources, reasons automatically and autonomously, based on utility functions, on which plan is likely to best achieve the user's goals with respect to resource availability and user needs.
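
    The utility-function-based plan selection described above can be sketched as a weighted sum over user-valued dimensions. The plan names, score dimensions, and weights below are hypothetical examples, not from the chapter.

```python
def plan_utility(plan, weights):
    """Weighted-sum utility sketch: each plan scores its expected quality
    on user-valued dimensions (e.g. battery life, responsiveness) in
    [0, 1]; the weights encode the user's preferences."""
    return sum(weights[dim] * score for dim, score in plan["scores"].items())

def best_plan(plans, weights):
    """Pick the configuration plan that maximizes the user's utility."""
    return max(plans, key=lambda p: plan_utility(p, weights))

# Hypothetical smart-home configuration plans for a user who values
# battery life twice as much as responsiveness.
plans = [
    {"name": "all-sensors-on", "scores": {"battery": 0.2, "latency": 0.9}},
    {"name": "duty-cycled",    "scores": {"battery": 0.8, "latency": 0.6}},
]
weights = {"battery": 2.0, "latency": 1.0}
chosen = best_plan(plans, weights)
```

    In a self-managed system, the scores would be re-evaluated as resource availability changes, triggering autonomous re-planning when a different plan's utility overtakes the current one.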

  13. Analytical techniques applied to study cultural heritage objects

    Energy Technology Data Exchange (ETDEWEB)

    Rizzutto, M.A.; Curado, J.F.; Bernardes, S.; Campos, P.H.O.V.; Kajiya, E.A.M.; Silva, T.F.; Rodrigues, C.L.; Moro, M.; Tabacniks, M.; Added, N., E-mail: rizzutto@if.usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Instituto de Fisica

    2015-07-01

    The scientific study of artistic and cultural heritage objects has been routinely performed in Europe and the United States for decades. In Brazil this research area is growing, mainly through the use of physical and chemical characterization methods. Since 2003 the Group of Applied Physics with Particle Accelerators of the Physics Institute of the University of Sao Paulo (GFAA-IF) has been working with various methodologies for material characterization and the analysis of cultural objects. Initially, ion beam analysis was used, performed with Particle Induced X-Ray Emission (PIXE), Rutherford Backscattering (RBS) and, recently, Ion Beam Induced Luminescence (IBIL), for the determination of the elements and chemical compounds in the surface layers. These techniques are widely used in the Laboratory of Materials Analysis with Ion Beams (LAMFI-USP). Recently, the GFAA expanded the studies to other possibilities of analysis enabled by imaging techniques that, coupled with elemental and compositional characterization, provide a better understanding of the materials and techniques used in the creative process in the manufacture of objects. The imaging analyses, mainly used to examine and document artistic and cultural heritage objects, are performed through images with visible light, infrared reflectography (IR), fluorescence with ultraviolet radiation (UV), tangential light and digital radiography. Further expanding the possibilities of analysis, new capabilities were added using portable equipment such as Energy Dispersive X-Ray Fluorescence (ED-XRF) and Raman Spectroscopy that can be used for in situ analysis at museums. The results of these analyses are providing valuable information on the manufacturing process and have provided new information on objects in different University of Sao Paulo museums. To improve the arsenal of cultural heritage analysis, a 3D robotic stage was recently constructed for the precise positioning of samples in the external beam setup

  14. Dust tracking techniques applied to the STARDUST facility: First results

    Energy Technology Data Exchange (ETDEWEB)

    Malizia, A., E-mail: malizia@ing.uniroma2.it [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Camplani, M. [Grupo de Tratamiento de Imágenes, E.T.S.I de Telecomunicación, Universidad Politécnica de Madrid (Spain); Gelfusa, M. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Lupelli, I. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); EURATOM/CCFE Association, Culham Science Centre, Abingdon (United Kingdom); Richetta, M.; Antonelli, L.; Conetta, F.; Scarpellini, D.; Carestia, M.; Peluso, E.; Bellecci, C. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Salgado, L. [Grupo de Tratamiento de Imágenes, E.T.S.I de Telecomunicación, Universidad Politécnica de Madrid (Spain); Video Processing and Understanding Laboratory, Universidad Autónoma de Madrid (Spain); Gaudio, P. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy)

    2014-10-15

    Highlights: •Use of an experimental facility, STARDUST, to analyze the dust resuspension problem inside the tokamak in case of a loss of vacuum accident. •PIV technique implementation to track the dust during a LOVA reproduction inside STARDUST. •Data imaging techniques to analyze the dust velocity field: first results and data discussion. -- Abstract: An important issue related to future nuclear fusion reactors fueled with deuterium and tritium is the creation of large amounts of dust due to several mechanisms (disruptions, ELMs and VDEs). The dust size expected in nuclear fusion experiments (such as ITER) is on the order of microns (between 0.1 and 1000 μm). Almost the total amount of this dust remains in the vacuum vessel (VV). This radiological dust can re-suspend in case of LOVA (loss of vacuum accident), and these phenomena can cause explosions and serious damage to the health of the operators and to the integrity of the device. The authors have developed a facility, STARDUST, in order to reproduce thermo-fluid-dynamic conditions comparable to those expected inside the VV of the next generation of experiments such as ITER in case of LOVA. The dust used inside the STARDUST facility has particle sizes and physical characteristics comparable with those created inside the VV of nuclear fusion experiments. In this facility an experimental campaign has been conducted with the purpose of tracking the dust re-suspended at low pressurization rates (comparable to those expected in case of LOVA in ITER and suggested by the General Safety and Security Report ITER-GSSR) using a fast camera with a frame rate from 1000 to 10,000 images per second. The velocity fields of the mobilized dust are derived from the imaging of a two-dimensional slice of the flow illuminated by an optically adapted laser beam. The aim of this work is to demonstrate the possibility of dust tracking by means of image processing, with the objective of determining the velocity field values.
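
    The core of a PIV velocity estimate, finding the displacement of the dust pattern between two interrogation windows by cross-correlation, can be sketched as follows. This is a generic illustration (not the STARDUST processing chain), and the synthetic frames are invented; dividing the pixel displacement by the frame interval and pixel scale would give velocity.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Displacement (dy, dx) of the pattern between two interrogation
    windows, via FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # cross-correlation peak location encodes the shift
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = corr.shape
    # map the peak location to a signed shift
    if dy > n // 2:
        dy -= n
    if dx > m // 2:
        dx -= m
    return int(dy), int(dx)

# Synthetic check: shift a random dust pattern by (3, -2) pixels
rng = np.random.default_rng(0)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, (3, -2), axis=(0, 1))
dy, dx = piv_displacement(frame0, frame1)
```

    Real PIV tiles the image into many such windows, producing one velocity vector per window and hence a velocity field.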

  15. Conceptualizing urban adaptation to climate change: Findings from an applied adaptation assessment framework

    OpenAIRE

    Johnson, Katie; BREIL, MARGARETHA

    2012-01-01

    Urban areas have particular sensitivities to climate change, and therefore adaptation to a warming planet represents a challenging new issue for urban policy makers in both the developed and developing world. Further to climate mitigation strategies implemented in various cities over the past 20 years, more recent efforts of urban management have also included actions taken to adapt to increasing temperatures, sea level and extreme events. Through the examination and comparison of seven citie...

  16. Novel Adaptive Decision Threshold Modulation Technique for UWB Direct Chaotic Communications

    Directory of Open Access Journals (Sweden)

    Said Sadoudi

    2014-11-01

    Full Text Available A new non-coherent chaotic modulation technique based on an adaptive decision threshold is proposed for the UltraWideBand (UWB) Direct Chaotic Communication (DCC) technology. The principal advantages of the proposed technique are: (1) removing the threshold problem of the classical Chaotic On-Off Keying modulation technique, which uses a nonzero decision threshold; (2) providing a high throughput compared to the other techniques, since it does not use any delay at the modulation; (3) reducing the transmitted power, thanks to a transmitted bit energy divided by two. The obtained simulation results show the high bit-error-rate performance of the proposed technique applied in a UWB DCC system. In addition, the new chaotic modulation is more suitable for all DCC-based communication schemes.

  17. ESR dating technique applied to Pleistocene Corals (Barbados Island)

    International Nuclear Information System (INIS)

    In this work we applied the ESR (Electron Spin Resonance) dating technique to a coral from Barbados island. After a preliminary purification treatment, coral samples were milled and separated into different granulometry groups. Powder samples with granulometries between 125 μm-250 μm and 250 μm-500 μm were irradiated at the Calliope 60Co radioisotope source (R.C. ENEA-Casaccia) at doses between 10-3300 Gy, and their radiation-induced ESR signals were measured by a Bruker EMS104 spectrometer. The signal/noise ratio turned out to be highest for the granulometry between 250 μm-500 μm, and consequently the paleo-curve was constructed using the ESR signals related to this granulometry. The paleo-curve was fitted with the exponential growth function y = a - b·e^(-cx), which describes the behaviour of the curve well, including in the saturation region. Extrapolating the paleo-dose and knowing the annual dose (999±79 μGy/y), we calculated a coral age of 156±12 ky, which is in good agreement with results obtained on corals from the same region by other authors
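The age computation described above can be sketched as follows; the fitted parameters of the saturating growth curve and the natural signal value below are illustrative assumptions, with only the annual dose taken from the abstract:

```python
import math

# Hypothetical fitted parameters of the dose-response curve
# I(D) = a - b * exp(-c * D); the values are illustrative, not the paper's fit.
a, b, c = 100.0, 95.0, 0.0012       # signal units, signal units, 1/Gy

def paleo_dose(natural_signal):
    """Invert I(D) = a - b*exp(-c*D) to get the equivalent dose D in Gy."""
    return -math.log((a - natural_signal) / b) / c

annual_dose_gy = 999e-6             # 999 uGy/y, from the abstract
D = paleo_dose(16.0)                # natural ESR intensity (illustrative)
age_years = D / annual_dose_gy
print(round(D, 1), round(age_years))
```

The paleo-dose is the dose at which the fitted curve reproduces the natural ESR intensity of the unirradiated sample; dividing by the annual dose rate gives the age.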

  18. An adaptive dual-optimal path-planning technique for unmanned air vehicles

    Directory of Open Access Journals (Sweden)

    Whitfield Clifford A.

    2016-01-01

    Full Text Available A multi-objective technique for unmanned air vehicle path-planning generation through task allocation has been developed. The dual-optimal path-planning technique generates real-time adaptive flight paths based on available flight windows and environmentally influenced objectives. The environmentally influenced flight condition determines the aircraft's optimal orientation within a downstream virtual window of possible vehicle destinations, which is based on the vehicle's kinematics. The intermediate results are then pursued by a dynamic optimization technique to determine the flight path. This path-planning technique is a multi-objective optimization procedure consisting of two goals that does not require additional information to combine the conflicting objectives into a single objective. The technique was applied to solar-regenerative high-altitude long-endurance flight, which can benefit significantly from an adaptive real-time path-planning technique. The objectives were to determine the minimum-power-required flight paths while maintaining maximum solar power for continual surveillance over an area of interest (AOI). The simulated path generation technique prolonged the flight duration over a sustained-turn loiter flight path by approximately 2 months for a year of flight. The potential for prolonged solar-powered flight was consistent for all latitude locations, including 2 months of available flight at 60° latitude, where sustained-turn flight was no longer possible.

  19. Robust adaptive synchronization of chaotic neural networks by slide technique

    Institute of Scientific and Technical Information of China (English)

    Lou Xu-Yang; Cui Bao-Tong

    2008-01-01

    In this paper, we focus on the robust adaptive synchronization between two coupled chaotic neural networks with all parameters unknown and a time-varying delay. In order to increase the robustness of the two coupled neural networks, the key idea is that a sliding-mode-type controller is employed. Moreover, instead of taking the estimates of the unknown network parameters as the updating object, a new updating object is introduced in the construction of the controller. Using the proposed controller, and without any requirements on the boundedness, monotonicity and differentiability of the activation functions, or on the symmetry of the connections, the two coupled chaotic neural networks can achieve global robust synchronization no matter what their initial states are. Finally, numerical simulation validates the effectiveness and feasibility of the proposed technique.

  20. Adaptive Landmark-Based Navigation System Using Learning Techniques

    DEFF Research Database (Denmark)

    Zeidan, Bassel; Dasgupta, Sakyasingha; Wörgötter, Florentin;

    2014-01-01

    The goal-directed navigational ability of animals is an essential prerequisite for them to survive. They can learn to navigate to a distal goal in a complex environment. During this long-distance navigation, they exploit environmental features, like landmarks, to guide them towards their goal. Inspired by this, we develop an adaptive landmark-based navigation system based on sequential reinforcement learning. In addition, correlation-based learning is also integrated into the system to improve learning performance. The proposed system has been applied to simulated simple wheeled and more complex hexapod robots. As a result, it allows the robots to successfully learn to navigate to distal goals in complex environments.

  1. Indirect techniques for adaptive input-output linearization of non-linear systems

    Science.gov (United States)

    Teel, Andrew; Kadiyala, Raja; Kokotovic, Peter; Sastry, Shankar

    1991-01-01

    A technique of indirect adaptive control based on certainty equivalence for input-output linearization of nonlinear systems is proven convergent. It does not suffer from the overparameterization drawbacks of direct adaptive control techniques on the same plant. This paper also contains a semi-indirect adaptive controller which has several attractive features of both the direct and indirect schemes.

  2. Multicriterial Evaluation of Applying Japanese Management Concepts, Methods and Techniques

    OpenAIRE

    Podobiński, Mateusz

    2014-01-01

    Japanese management concepts, methods and techniques refer to work organization and improvements to companies’ functioning. They appear in numerous Polish companies, especially in the manufacturing ones. Cultural differences are a major impediment in their implementation. Nevertheless, the advantages of using Japanese management concepts, methods and techniques motivate the management to implement them in the company. The author shows research results, which refer to advanta...

  3. Photoacoustic technique applied to the study of skin and leather

    Science.gov (United States)

    Vargas, M.; Varela, J.; Hernández, L.; González, A.

    1998-08-01

    In this paper the photoacoustic technique is used in bull skin for the determination of thermal and optical properties as a function of the tanning process steps. Our results show that the photoacoustic technique is sensitive to the study of physical changes in this kind of material due to the tanning process.

  4. APPLYING ARTIFICIAL INTELLIGENCE TECHNIQUES TO HUMAN-COMPUTER INTERFACES

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.

    1988-01-01

    A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone and data networks. Three artificial intelligence (AI) techniques used in UIMS are discussed, namely, frame representation, object-oriented programming languages, and rule-based systems. The UIMS architecture is presented, and the structure of the UIMS is explained in terms of the AI techniques.

  5. Manifold learning techniques and model reduction applied to dissipative PDEs

    OpenAIRE

    Sonday, Benjamin E.; Singer, Amit; Gear, C. William; Kevrekidis, Ioannis G.

    2010-01-01

    We link nonlinear manifold learning techniques for data analysis/compression with model reduction techniques for evolution equations with time scale separation. In particular, we demonstrate a "nonlinear extension" of the POD-Galerkin approach to obtaining reduced dynamic models of dissipative evolution equations. The approach is illustrated through a reaction-diffusion PDE, and the performance of different simulators on the full and the reduced models is compared. We also discuss the relati...

  6. GPS-based ionospheric tomography with a constrained adaptive simultaneous algebraic reconstruction technique

    Indian Academy of Sciences (India)

    Wen Debao; Zhang Xiao; Tong Yangjin; Zhang Guangsheng; Zhang Min; Leng Rusong

    2015-03-01

    In this paper, a constrained adaptive simultaneous algebraic reconstruction technique (CASART) is presented to obtain high-quality reconstructions from insufficient projections. According to the continuous smoothness of the variations of ionospheric electron density (IED) among neighbouring voxels, a Gaussian weighting function is introduced to constrain the tomography system in the new method. It can resolve the dependence on the initial values for those voxels without any GPS rays traversing them. A numerical simulation scheme is devised to validate the feasibility of the new algorithm, and some comparisons are made to demonstrate its superiority. Finally, actual GPS observations are applied to further validate the feasibility and superiority of the new algorithm.
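A toy one-dimensional sketch of the underlying idea, assuming an illustrative two-ray, three-voxel system rather than the paper's GPS geometry: each SART sweep back-projects normalized residuals, and a simple neighbour-smoothing step stands in for the Gaussian-weighted constraint, which also fills voxels no ray traverses:

```python
def sart(A, b, x, iters=50, lam=0.5, smooth=0.2):
    """Toy SART iteration for A x = b with a neighbour-smoothing constraint."""
    m, n = len(A), len(A[0])
    col_sum = [sum(A[i][j] for i in range(m)) or 1.0 for j in range(n)]
    for _ in range(iters):
        # SART update: back-project row residuals, normalised by row/column sums
        corr = [0.0] * n
        for i in range(m):
            row_sum = sum(A[i]) or 1.0
            resid = (b[i] - sum(A[i][j] * x[j] for j in range(n))) / row_sum
            for j in range(n):
                corr[j] += A[i][j] * resid
        x = [x[j] + lam * corr[j] / col_sum[j] for j in range(n)]
        # smoothing constraint: pull each voxel towards its neighbours' mean
        x = [(1 - smooth) * x[j] + smooth *
             (x[max(j - 1, 0)] + x[min(j + 1, n - 1)]) / 2.0 for j in range(n)]
    return x

A = [[1, 1, 0], [0, 1, 1]]         # two "rays" through three voxels
b = [2.0, 3.0]                     # measured slant TEC per ray (arbitrary units)
x = sart(A, b, [0.0, 0.0, 0.0])
print([round(v, 2) for v in x])
```

The smoothing weight trades data fit against spatial continuity, which is the role the Gaussian weighting plays in the constrained method above.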

  7. Cognitive task analysis: Techniques applied to airborne weapons training

    Energy Technology Data Exchange (ETDEWEB)

    Terranova, M.; Seamster, T.L.; Snyder, C.E.; Treitler, I.E. (Oak Ridge National Lab., TN (USA); Carlow Associates, Inc., Fairfax, VA (USA); Martin Marietta Energy Systems, Inc., Oak Ridge, TN (USA); Tennessee Univ., Knoxville, TN (USA))

    1989-01-01

    This is an introduction to cognitive task analysis as it may be used in Naval Air Systems Command (NAVAIR) training development. The focus of a cognitive task analysis is human knowledge, and its methods of analysis are those developed by cognitive psychologists. This paper explains the role that cognitive task analysis plays in training development and presents the findings from a preliminary cognitive task analysis of airborne weapons operators. Cognitive task analysis is a collection of powerful techniques that are quantitative, computational, and rigorous. The techniques are currently not in wide use in the training community, so examples of this methodology are presented along with the results. 6 refs., 2 figs., 4 tabs.

  8. Metamodeling Techniques Applied to the Design of Reconfigurable Control Applications

    Directory of Open Access Journals (Sweden)

    Fogliazza Giuseppe

    2008-01-01

    Full Text Available Abstract In order to realize autonomous manufacturing systems in environments characterized by high dynamics and highly complex tasks, it is necessary to improve the control system modelling and performance. This requires the use of better and reusable abstractions. In this paper, we explore metamodel techniques as a foundation for the solution of this problem. The increasing popularity of model-driven approaches and a new generation of tools to support metamodel techniques are changing the software engineering landscape, boosting the adoption of new methodologies for control application development.

  9. Metamodeling Techniques Applied to the Design of Reconfigurable Control Applications

    Directory of Open Access Journals (Sweden)

    Luca Ferrarini

    2008-02-01

    Full Text Available In order to realize autonomous manufacturing systems in environments characterized by high dynamics and highly complex tasks, it is necessary to improve the control system modelling and performance. This requires the use of better and reusable abstractions. In this paper, we explore metamodel techniques as a foundation for the solution of this problem. The increasing popularity of model-driven approaches and a new generation of tools to support metamodel techniques are changing the software engineering landscape, boosting the adoption of new methodologies for control application development.

  10. Flash radiographic technique applied to fuel injector sprays

    International Nuclear Information System (INIS)

    A flash radiographic technique, using 50 ns exposure times, was used to study the pattern and density distribution of a fuel injector spray. The experimental apparatus and method are described. An 85 kVp flash x-ray generator, designed and fabricated at the Lawrence Livermore Laboratory, is utilized. Radiographic images, recorded on standard x-ray films, are digitized and computer processed

  11. Software factory techniques applied to Process Control at CERN

    CERN Multimedia

    Dutour, MD

    2007-01-01

    The CERN Large Hadron Collider (LHC) requires constant monitoring and control of a large number of parameters to guarantee operational conditions. For this purpose, a methodology called UNICOS (UNIfied Industrial COntrols Systems) has been implemented to standardize the design of process control applications. To further accelerate the development of these applications, we migrated our existing UNICOS tooling suite toward a software factory in charge of assembling project, domain and technical information seamlessly into deployable PLC (Programmable Logic Controller) – SCADA (Supervisory Control And Data Acquisition) systems. This software factory delivers consistently high quality by reducing human error and repetitive tasks, and adapts to user specifications in a cost-efficient way. Hence, this production tool is designed to encapsulate and hide the PLC and SCADA target platforms, enabling the experts to focus on the business model rather than specific syntaxes and grammars. Based on industry standard software...

  12. Applying Website Usability Testing Techniques to Promote E-services

    OpenAIRE

    Abdel Nasser H. Zaied; Hassan, Mohamed M.; Islam S. Mohamed

    2015-01-01

    In this competitive world, websites are considered to be a key aspect of any organization’s competitiveness. In addition to visual esthetics, usability of a website is a strong determinant for user’s satisfaction and pleasure. However, lack of appropriate techniques and attributes for measuring usability may constrain the usefulness of a website. To address this issue, we conduct a statistical study to evaluate the usability levels of e-learning and e-training websites based on human (user) p...

  13. Signal Processing Techniques Applied in RFI Mitigation of Radio Astronomy

    OpenAIRE

    Sixiu Wang; Zhengwen Sun; Weixia Wang; Liangquan Jia

    2012-01-01

    Radio broadcast and telecommunications are present at different power levels everywhere on Earth. Radio Frequency Interference (RFI) substantially limits the sensitivity of existing radio telescopes in several frequency bands and may prove to be an even greater obstacle for the next generation of telescopes (or arrays) to overcome. A variety of RFI detection and mitigation techniques have been developed in recent years. This study describes various signal processing methods of RFI mitigation in radi...

  14. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1993-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analysing the thousands of filter papers a year that may originate from a large-scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large-area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 μm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday, using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  15. Applying a Splitting Technique to Estimate Electrical Grid Reliability

    OpenAIRE

    Wadman, Wander; Crommelin, Daan; Frank, Jason; Pasupathy, R.; Kim, S.-H.; Tolk, A.; Hill, R; Kuhl, M.E.

    2013-01-01

    As intermittent renewable energy penetrates electrical power grids more and more, assessing grid reliability is of increasing concern for grid operators. Monte Carlo simulation is a robust and popular technique to estimate indices for grid reliability, but the involved computational intensity may be too high for typical reliability analyses. We show that various reliability indices can be expressed as expectations depending on the rare event probability of a so-called power curtailment, and e...

  16. Technology Assessment of Dust Suppression Techniques Applied During Structural Demolition

    Energy Technology Data Exchange (ETDEWEB)

    Boudreaux, J.F.; Ebadian, M.A.; Williams, P.T.; Dua, S.K.

    1998-10-20

    Hanford, Fernald, Savannah River, and other sites are currently reviewing technologies that can be implemented to demolish buildings in a cost-effective manner. In order to demolish a structure properly and, at the same time, minimize the amount of dust generated by a given technology, an evaluation must be conducted to choose the most appropriate dust suppression technology given site-specific conditions. Thus, the purpose of this research, which was carried out at the Hemispheric Center for Environmental Technology (HCET) at Florida International University, was to conduct an experimental study of dust aerosol abatement (dust suppression) methods as applied to nuclear decontamination and decommissioning (D&D). This experimental study targeted the problem of dust suppression during the demolition of nuclear facilities. The resulting data were employed to assist in the development of mathematical correlations that can be applied to predict dust generation during structural demolition.

  17. Detecting discontinuities in time series of upper air data: Demonstration of an adaptive filter technique

    Energy Technology Data Exchange (ETDEWEB)

    Zurbenko, I.; Chen, J.; Rao, S.T. [State Univ. of New York, Albany, NY (United States)] [and others

    1997-11-01

    The issue of global climate change due to increased anthropogenic emissions of greenhouse gases in the atmosphere has gained considerable attention and importance. Climate change studies require the interpretation of weather data collected in numerous locations and/or over the span of several decades. Unfortunately, these data contain biases caused by changes in instruments and data acquisition procedures. It is essential that biases are identified and/or removed before these data can be used confidently in the context of climate change research. The purpose of this paper is to illustrate the use of an adaptive moving average filter and compare it with traditional parametric methods. The advantage of the adaptive filter over traditional parametric methods is that it is less affected by seasonal patterns and trends. The filter has been applied to upper air relative humidity and temperature data. Applied to generated data, the filter has a root mean squared error accuracy of about 600 days when locating changes of 0.1 standard deviations and about 20 days for changes of 0.5 standard deviations. In some circumstances, the accuracy of location estimation can be improved through parametric techniques used in conjunction with the adaptive filter.
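A minimal sketch of moving-average discontinuity detection in this spirit; the window length, threshold and synthetic step data are illustrative assumptions, not the authors' filter:

```python
import random
import statistics

def detect_shift(series, window=50, z=4.0):
    """Flag indices where the mean over the next `window` points differs from
    the mean over the previous `window` points by more than z standard errors."""
    hits = []
    for t in range(window, len(series) - window):
        before = series[t - window:t]
        after = series[t:t + window]
        pooled = statistics.pstdev(before + after) or 1e-12
        diff = abs(statistics.fmean(after) - statistics.fmean(before))
        # standard error of a difference of two window means
        if diff / (pooled * (2.0 / window) ** 0.5) > z:
            hits.append(t)
    return hits

random.seed(1)
data = ([random.gauss(0.0, 1.0) for _ in range(200)] +
        [random.gauss(3.0, 1.0) for _ in range(200)])   # level shift at t = 200
hits = detect_shift(data)
```

An adaptive version would additionally deseasonalize and detrend within the moving windows, which is what makes the authors' filter robust to seasonal patterns.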

  18. Unconventional Coding Technique Applied to Multi-Level Polarization Modulation

    Science.gov (United States)

    Rutigliano, G. G.; Betti, S.; Perrone, P.

    2016-05-01

    A new technique is proposed to improve information confidentiality in optical-fiber communications without bandwidth consumption. A pseudorandom vectorial sequence was generated by a dynamic system algorithm and used to codify a multi-level polarization modulation based on the Stokes vector. Optical-fiber birefringence, usually considered as a disturbance, was exploited to obfuscate the signal transmission. At the receiver end, the same pseudorandom sequence was generated and used to decode the multi-level polarization modulated signal. The proposed scheme, working at the physical layer, provides strong information security without introducing complex processing and thus latency.

  19. Image analysis technique applied to lock-exchange gravity currents

    OpenAIRE

    Nogueira, Helena; Adduce, Claudia; Alves, Elsa; Franca, Rodrigues Pereira Da; Jorge, Mario

    2013-01-01

    An image analysis technique is used to estimate the two-dimensional instantaneous density field of unsteady gravity currents produced by full-depth lock-release of saline water. An experiment reproducing a gravity current was performed in a 3.0 m long, 0.20 m wide and 0.30 m deep Perspex flume with horizontal smooth bed and recorded with a 25 Hz CCD video camera under controlled light conditions. Using dye concentration as a tracer, a calibration procedure was established for each pixel in th...

  20. Applying Supervised Opinion Mining Techniques on Online User Reviews

    Directory of Open Access Journals (Sweden)

    Ion SMEUREANU

    2012-01-01

    Full Text Available In recent years, the spectacular development of web technologies has led to an enormous quantity of user-generated information in online systems. This large amount of information on web platforms makes them viable for use as data sources in applications based on opinion mining and sentiment analysis. The paper proposes an algorithm for detecting sentiments in movie user reviews, based on a naive Bayes classifier. We analyse the opinion mining domain, the techniques used in sentiment analysis and its applicability. We implemented the proposed algorithm, tested its performance, and suggest directions of development.
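A minimal multinomial naive Bayes sentiment classifier of the kind described, with Laplace smoothing; the four-review training set is an illustrative assumption, not the paper's corpus:

```python
import math
from collections import Counter

# Toy labelled training data (illustrative, not the paper's review corpus).
train = [("great movie loved the acting", "pos"),
         ("wonderful plot great fun", "pos"),
         ("boring plot terrible acting", "neg"),
         ("awful movie hated it", "neg")]

counts = {"pos": Counter(), "neg": Counter()}   # per-class word counts
docs = Counter()                                # per-class document counts
for text, label in train:
    docs[label] += 1
    counts[label].update(text.split())
vocab = set(w for c in counts.values() for w in c)

def classify(text):
    def log_score(label):
        total = sum(counts[label].values())
        s = math.log(docs[label] / sum(docs.values()))   # log prior
        for w in text.split():
            # add-one (Laplace) smoothed log likelihood
            s += math.log((counts[label][w] + 1) / (total + len(vocab)))
        return s
    return max(("pos", "neg"), key=log_score)

print(classify("loved the plot"), classify("terrible boring movie"))  # → pos neg
```

Scores are accumulated in log space to avoid underflow on longer reviews.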

  1. Neutrongraphy technique applied to the narcotics and terrorism enforcement

    International Nuclear Information System (INIS)

    Among the several methods of non-destructive assay that may be used for the detection of both drugs and explosives, the ones that utilize nuclear techniques have demonstrated essential qualities for an efficient detection system. These techniques allow the inspection of a large quantity of samples quickly, sensitively, specifically and with automatic decision, since they utilize radiation with great penetrating power. This work aims to show the potential of neutron radiography and computed tomography for the detection of drugs and explosives even when they are concealed by heavy materials. In the radiographic assays with thermal neutrons, samples of powdered cocaine and explosives were inspected, both concealed by several materials and not. The samples were irradiated for 30 minutes in the J-9 channel of the Argonauta research reactor of the IEN/CNEN in a neutron flux of 2.5×10⁵ n/cm²·s. We used two sheets of gadolinium converter, each 25 μm thick, and a Kodak Industrex A5 photographic plate. A comparative analysis of the experimental and simulated tomographic images obtained with X-rays, fast neutrons and thermal neutrons is presented; thermal neutron tomography proved to be the best. (author)

  2. Surgical treatment of scoliosis: a review of techniques currently applied

    Directory of Open Access Journals (Sweden)

    Maruyama Toru

    2008-04-01

    Full Text Available Abstract In this review, basic knowledge and recent innovations in the surgical treatment of scoliosis will be described. Surgical treatment for scoliosis is indicated, in general, for curves exceeding 45 or 50 degrees by the Cobb's method on the grounds that: (1) curves larger than 50 degrees progress even after skeletal maturity; (2) curves of greater magnitude cause loss of pulmonary function, and much larger curves cause respiratory failure; (3) the larger the curve progresses, the more difficult it is to treat with surgery. Posterior fusion with instrumentation has been the standard surgical treatment for scoliosis. In modern instrumentation systems, more anchors are used to connect the rod and the spine, resulting in better correction and less frequent implant failures. Segmental pedicle screw constructs or hybrid constructs using pedicle screws, hooks, and wires are the trend of today. Anterior instrumentation surgery had been a choice of treatment for thoracolumbar and lumbar scoliosis because better correction can be obtained with shorter fusion levels. Recently, the superiority of anterior surgery for thoracolumbar and lumbar scoliosis has been lost. Initial enthusiasm for anterior instrumentation for the thoracic curve using the video-assisted thoracoscopic surgery technique has faded out. Various attempts are being made with the use of fusionless surgery. To control growth, epiphysiodesis on the convex side of the deformity, with or without instrumentation, is a technique to provide gradual progressive correction and to arrest the deterioration of the curves. To avoid fusion for skeletally immature children with spinal cord injury or myelodysplasia, vertebral wedge osteotomies are performed for the treatment of progressive paralytic scoliosis. For right thoracic curves with idiopathic scoliosis, multiple vertebral wedge osteotomies without fusion are performed. To provide correction and maintain it during the growing years while allowing spinal growth for

  3. Applying Web Usage Mining for Personalizing Hyperlinks in Web-Based Adaptive Educational Systems

    Science.gov (United States)

    Romero, Cristobal; Ventura, Sebastian; Zafra, Amelia; de Bra, Paul

    2009-01-01

    Nowadays, the application of Web mining techniques in e-learning and Web-based adaptive educational systems is increasing exponentially. In this paper, we propose an advanced architecture for a personalization system to facilitate Web mining. A specific Web mining tool is developed and a recommender engine is integrated into the AHA! system in…

  4. Signal Processing Techniques Applied in RFI Mitigation of Radio Astronomy

    Directory of Open Access Journals (Sweden)

    Sixiu Wang

    2012-08-01

    Full Text Available Radio broadcast and telecommunications are present at different power levels everywhere on Earth. Radio Frequency Interference (RFI) substantially limits the sensitivity of existing radio telescopes in several frequency bands and may prove to be an even greater obstacle for the next generation of telescopes (or arrays) to overcome. A variety of RFI detection and mitigation techniques have been developed in recent years. This study describes various signal processing methods for RFI mitigation in radio astronomy, and chooses time-frequency domain cancellation to eliminate certain interference and effectively improve the signal-to-noise ratio in pulsar observations. Finally, RFI mitigation research and implementation in Chinese radio astronomy is also presented.

  5. Discrete filtering techniques applied to sequential GPS range measurements

    Science.gov (United States)

    Vangraas, Frank

    1987-01-01

    The basic navigation solution is described for position and velocity based on range and delta range (Doppler) measurements from NAVSTAR Global Positioning System satellites. The application of discrete filtering techniques is examined to reduce the white noise distortions on the sequential range measurements. A second order (position and velocity states) Kalman filter is implemented to obtain smoothed estimates of range by filtering the dynamics of the signal from each satellite separately. Test results using a simulated GPS receiver show a steady-state noise reduction, the input noise variance divided by the output noise variance, of a factor of four. Recommendations for further noise reduction based on higher order Kalman filters or additional delta range measurements are included.
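The per-satellite second-order filter can be sketched as follows; the constant-velocity model, noise settings and simulated pseudoranges are illustrative assumptions, not the GPS receiver simulation used in the paper:

```python
import random

def kalman(zs, dt=1.0, r=4.0, q=0.01):
    """Second-order (range, range-rate) Kalman filter over one satellite's
    range measurements zs; r is the measurement variance, q process noise."""
    x = [zs[0], 0.0]                        # state estimate [range, range-rate]
    P = [[r, 0.0], [0.0, 100.0]]            # initial covariance (rate unknown)
    out = []
    for z in zs[1:]:
        # predict with a constant-velocity model, F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # update with the range measurement z, H = [1, 0]
        s = P[0][0] + r
        k = [P[0][0] / s, P[1][0] / s]
        x = [x[0] + k[0] * (z - x[0]), x[1] + k[1] * (z - x[0])]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(x[0])
    return out

random.seed(0)
truth = [20000.0 + 100.0 * t for t in range(100)]    # range opening at 100 m/s
meas = [v + random.gauss(0.0, 2.0) for v in truth]   # 2 m white measurement noise
est = kalman(meas)
```

Filtering each satellite's range track separately, as above, is what allows the smoothed ranges to be fed into the ordinary position/velocity solution afterwards.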

  6. Applying Business Process Modeling Techniques: Case Study

    Directory of Open Access Journals (Sweden)

    Bartosz Marcinkowski

    2010-12-01

    Full Text Available Selection and proper application of business process modeling methods and techniques have a significant impact on organizational improvement capabilities as well as on proper understanding of the functionality of information systems that shall support the activity of the organization. A number of business process modeling notations were implemented in practice in recent decades. The most significant of these notations include ARIS, Business Process Modeling Notation (OMG BPMN) and several Unified Modeling Language (OMG UML) extensions. In this paper, we assess whether one of the most flexible and strictly standardized contemporary business process modeling notations, i.e. the Rational UML Profile for Business Modeling, enables business analysts to prepare business models that are all-embracing and understandable by all the stakeholders. After the introduction, the methodology of research is discussed. The following section presents selected case study results. The paper is concluded with a summary.

  7. Quantitative Portfolio Optimization Techniques Applied to the Brazilian Stock Market

    Directory of Open Access Journals (Sweden)

    André Alves Portela Santos

    2012-09-01

    Full Text Available In this paper we assess the out-of-sample performance of two alternative quantitative portfolio optimization techniques - mean-variance and minimum-variance optimization - and compare their performance with respect to a naive 1/N (or equally weighted) portfolio and also to the market portfolio given by the Ibovespa. We focus on short-selling-constrained portfolios and consider alternative estimators for the covariance matrices: the sample covariance matrix, RiskMetrics, and three covariance estimators proposed by Ledoit and Wolf (2003), Ledoit and Wolf (2004a) and Ledoit and Wolf (2004b). Taking into account alternative portfolio rebalancing frequencies, we compute out-of-sample performance statistics which indicate that the quantitative approaches delivered improved results in terms of lower portfolio volatility and better risk-adjusted returns. Moreover, the use of more sophisticated estimators for the covariance matrix generated optimal portfolios with lower turnover over time.
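For the unconstrained case, the minimum-variance weights have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1); the following sketch uses an illustrative 3-asset covariance matrix and omits the paper's short-selling constraint and shrinkage estimators:

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (small dense systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivot
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Illustrative annualized sample covariance matrix of three assets.
cov = [[0.040, 0.006, 0.004],
       [0.006, 0.090, 0.010],
       [0.004, 0.010, 0.160]]
y = solve(cov, [1.0, 1.0, 1.0])          # Σ⁻¹ 1
w = [v / sum(y) for v in y]              # normalise so weights sum to one
print([round(v, 3) for v in w])
```

By construction these weights minimise portfolio variance wᵀΣw among all fully invested portfolios, so they can do no worse (in variance) than the naive 1/N benchmark on the same covariance matrix.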

  8. A Robust Text Processing Technique Applied to Lexical Error Recovery

    CERN Document Server

    Ingels, P

    1999-01-01

    This thesis addresses automatic lexical error recovery and tokenization of corrupt text input. We propose a technique that can automatically correct misspellings, segmentation errors and real-word errors in a unified framework that uses both a model of language production and a model of the typing behavior, and which makes tokenization part of the recovery process. The typing process is modeled as a noisy channel where Hidden Markov Models are used to model the channel characteristics. Weak statistical language models are used to predict what sentences are likely to be transmitted through the channel. These components are held together in the Token Passing framework which provides the desired tight coupling between orthographic pattern matching and linguistic expectation. The system, CTR (Connected Text Recognition), has been tested on two corpora derived from two different applications, a natural language dialogue system and a transcription typing scenario. Experiments show that CTR can automatically correct...
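A much-simplified noisy-channel corrector in this spirit, substituting a unigram language model and an edit-distance-1 channel for the HMM token-passing machinery; the toy corpus is an illustrative assumption:

```python
from collections import Counter

# Unigram language model from a toy corpus (illustrative assumption).
corpus = "the quick brown fox jumps over the lazy dog the fox".split()
freq = Counter(corpus)

def edits1(w):
    """All strings within one edit (delete, replace, insert, transpose) of w."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(w[:i], w[i:]) for i in range(len(w) + 1)]
    return set([L + R[1:] for L, R in splits if R] +
               [L + c + R[1:] for L, R in splits if R for c in letters] +
               [L + c + R for L, R in splits for c in letters] +
               [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1])

def correct(word):
    if word in freq:
        return word                      # known word: keep (no context model here)
    candidates = [w for w in edits1(word) if w in freq]
    # uniform channel model: pick the most frequent in-vocabulary candidate
    return max(candidates, key=freq.get) if candidates else word

print(correct("teh"), correct("foz"))    # → the fox
```

The full system replaces the unigram model with a statistical language model and scores channel edits with HMMs, coupling the two through token passing.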

  9. Applying Data Privacy Techniques on Tabular Data in Uganda

    CERN Document Server

    Mivule, Kato

    2011-01-01

    The growth of Information Technology (IT) in Africa has led to an increase in the utilization of communication networks for data transaction across the continent. A growing number of entities in the private sector, academia, and government have deployed the Internet as a medium to transact in data, routinely posting statistical and non-statistical data online and thereby making many in Africa increasingly dependent on the Internet for data transactions. In the country of Uganda, exponential growth in data transaction has presented a new challenge: what is the most efficient way to implement data privacy? This article discusses data privacy challenges faced by the country of Uganda and the implementation of data privacy techniques for published tabular data. We make the case for data privacy, survey concepts of data privacy, and discuss implementations that could be employed to provide data privacy in Uganda.
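Among the tabular-data privacy techniques such a survey covers, k-anonymity is a standard example: generalize quasi-identifiers, then suppress equivalence classes smaller than k. The sketch below generalizes one quasi-identifier (age into decade bands); the record layout and the value of k are illustrative, not taken from the article:

```python
from collections import Counter

def generalize_age(age):
    """Generalize an exact age to its decade band, e.g. 34 -> '30-39'."""
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

def k_anonymize(rows, k=2):
    """Generalize the age quasi-identifier, then suppress every row whose
    (age band, region) equivalence class has fewer than k members."""
    generalized = [(generalize_age(age), region, diagnosis)
                   for age, region, diagnosis in rows]
    counts = Counter((a, r) for a, r, _ in generalized)
    return [row for row in generalized if counts[(row[0], row[1])] >= k]

records = [(34, "Central", "flu"), (37, "Central", "malaria"),
           (52, "North", "typhoid"), (33, "Central", "flu")]
released = k_anonymize(records, k=2)   # the lone North record is suppressed
```

The trade-off the article's survey revolves around is visible even here: suppression protects the isolated record at the cost of data utility.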

  10. Free Radical Imaging Techniques Applied to Hydrocarbon Flames Diagnosis

    Institute of Scientific and Technical Information of China (English)

    A. Caldeira-Pires

    2001-01-01

    This paper evaluates the utilization of free radical chemiluminescence imaging and tomographic reconstruction techniques to extract advanced information on reacting flows. Two different laboratory flow configurations were analyzed. The first included unconfined non-premixed jet flame measurements to evaluate flame fuel/air mixing patterns at the burner-port of a typical glass-furnace burner. The second case characterized the reaction zone of premixed flames within gas turbine combustion chambers, based on a laboratory-scale model of a lean prevaporized premixed (LPP) combustion chamber. The analysis shows that advanced imaging diagnosis can provide new information on the characterization of flame mixing and reacting phenomena. The utilization of local C2 and CH chemiluminescence can provide useful information on the quality of the combustion process, which can be used to improve the design of practical combustors.

  11. Innovative Visualization Techniques applied to a Flood Scenario

    Science.gov (United States)

    Falcão, António; Ho, Quan; Lopes, Pedro; Malamud, Bruce D.; Ribeiro, Rita; Jern, Mikael

    2013-04-01

    The large and ever-increasing amounts of multi-dimensional, time-varying and geospatial digital information from multiple sources represent a major challenge for today's analysts. We present a set of visualization techniques that can be used for the interactive analysis of geo-referenced and time-sampled data sets, providing an integrated mechanism that aids the user to collaboratively explore, present and communicate visually complex and dynamic data. Here we present these concepts in the context of a 4-hour flood scenario from Lisbon in 2010, with data that includes measures of water column (flood height) every 10 minutes at a 4.5 m x 4.5 m resolution, topography, building damage, building information, and online base maps. Techniques we use include web-based linked views, multiple charts, map layers and storytelling. We explain in more detail two of these that are not currently in common use for visualization of data: storytelling and web-based linked views. Visual storytelling is a method for providing a guided but interactive process of visualizing data, allowing more engaging data exploration through interactive web-enabled visualizations. Within storytelling, a snapshot mechanism helps the author of a story to highlight data views of particular interest and subsequently share or guide others within the data analysis process. This allows a particular person to select relevant attributes for a snapshot, such as highlighted regions for comparisons, time step, class values for colour legend, etc., and provide a snapshot of the current application state, which can then be provided as a hyperlink and recreated by someone else. Since data can be embedded within this snapshot, it is possible to interactively visualize and manipulate it. 
The second technique, web-based linked views, uses multiple windows that respond interactively to the user's selections, so that when an object is selected and changed in one window, it is automatically updated in all the other windows.

  12. Object Detection Techniques Applied on Mobile Robot Semantic Navigation

    Directory of Open Access Journals (Sweden)

    Carlos Astua

    2014-04-01

    Full Text Available The future of robotics predicts that robots will integrate themselves more every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a big need for algorithms to provide robots with these sorts of skills, from the location where objects are needed to accomplish a task up to where these objects are considered as information about the environment. This paper presents a way to provide mobile robots with the ability to detect objects for semantic navigation. This paper aims to use current trends in robotics in a way that can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor-based technique, and both of them are combined to overcome their respective limitations. Finally, the code is tested on a real robot, to prove its accuracy and efficiency.
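The contour-detection stage can be approximated, for illustration, by connected-component labeling of a binary foreground mask; this numpy-only stand-in (the paper's pipeline, which also fuses a descriptor-based method, is not reproduced here) just returns bounding boxes, and the test image is invented:

```python
import numpy as np
from collections import deque

def detect_objects(mask):
    """Label 4-connected foreground regions of a binary image via BFS
    flood fill and return one bounding box (r0, c0, r1, c1) per region -
    a simplified stand-in for the contour-detection stage."""
    labels = np.zeros_like(mask, dtype=int)
    boxes, current = [], 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue                      # pixel already belongs to a region
        current += 1
        labels[i, j] = current
        q = deque([(i, j)])
        r0 = r1 = i
        c0 = c1 = j
        while q:
            r, c = q.popleft()
            r0, r1 = min(r0, r), max(r1, r)
            c0, c1 = min(c0, c), max(c1, c)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    q.append((nr, nc))
        boxes.append((r0, c0, r1, c1))
    return boxes

img = np.zeros((8, 8), dtype=bool)
img[1:3, 1:3] = True          # object 1
img[5:7, 4:8] = True          # object 2
boxes = detect_objects(img)
```

A descriptor-based matcher would then be run inside each box, which is how the two methods complement each other: contours localize candidates cheaply, descriptors identify them.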

  13. Object detection techniques applied on mobile robot semantic navigation.

    Science.gov (United States)

    Astua, Carlos; Barber, Ramon; Crespo, Jonathan; Jardon, Alberto

    2014-04-11

    The future of robotics predicts that robots will integrate themselves more every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a big need for algorithms to provide robots with these sorts of skills, from the location where objects are needed to accomplish a task up to where these objects are considered as information about the environment. This paper presents a way to provide mobile robots with the ability to detect objects for semantic navigation. This paper aims to use current trends in robotics in a way that can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor-based technique, and both of them are combined to overcome their respective limitations. Finally, the code is tested on a real robot, to prove its accuracy and efficiency.

  14. Technology Assessment of Dust Suppression Techniques applied During Structural Demolition

    Energy Technology Data Exchange (ETDEWEB)

    Boudreaux, J.F.; Ebadian, M.A.; Dua, S.K.

    1997-08-06

    Hanford, Fernald, Savannah River, and other sites are currently reviewing technologies that can be implemented to demolish buildings in a cost-effective manner. In order to demolish a structure and, at the same time, minimize the amount of dust generated by a given technology, an evaluation must be conducted to choose the most appropriate dust suppression technology. Thus, the purpose of this research, which was conducted by the Hemispheric Center for Environmental Technology (HCET) at Florida International University (FIU), was to perform an experimental study of dust aerosol abatement (dust suppression) methods as applied to nuclear decontamination and decommissioning (D&D). This experimental study specifically targeted the problem of dust suppression during demolition. The resulting data were used in the development of mathematical correlations that can be applied to structural demolition. In Fiscal Year 1996 (FY96), the effectiveness of different dust-suppressing agents was investigated for different types of concrete blocks. Initial tests were conducted in a broad particle size range. In Fiscal Year 1997 (FY97), additional tests were performed in the size range in which most of the particles were detected. Since particle size distribution is an important parameter for predicting deposition in various compartments of the human respiratory tract, various tests were aimed at determining the particle size distribution of the airborne dust particles. The effectiveness of dust-suppressing agents for particles of various sizes was studied. Instead of conducting experiments on various types of blocks, it was thought prudent to carry out additional tests on blocks of the same type. Several refinements were also incorporated in the test procedures and data acquisition system used in FY96.

  15. Flipped Classroom Adapted to the ARCS Model of Motivation and Applied to a Physics Course

    Science.gov (United States)

    Asiksoy, Gülsüm; Özdamli, Fezile

    2016-01-01

    This study aims to determine the effect on the achievement, motivation and self-sufficiency of students of the flipped classroom approach adapted to Keller's ARCS (Attention, Relevance, Confidence and Satisfaction) motivation model and applied to a physics course. The study involved 66 students divided into two classes of a physics course. The…

  16. A hybrid adaptive large neighborhood search algorithm applied to a lot-sizing problem

    DEFF Research Database (Denmark)

    Muller, Laurent Flindt; Spoorendonk, Simon

    This paper presents a hybrid of a general heuristic framework that has been successfully applied to vehicle routing problems and a general purpose MIP solver. The framework uses local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed integer...
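The destroy/repair loop with adaptive operator weights that the framework describes can be sketched on a toy knapsack instance standing in for the lot-sizing problem; the operators, reward scheme, and instance data are illustrative assumptions, not the paper's implementation:

```python
import random

random.seed(1)
values  = [10, 7, 4, 9, 6, 8]
weights = [5, 4, 2, 6, 3, 5]
CAP = 14

def greedy_repair(sol):
    """Repair: greedily add items by value density while capacity allows."""
    load = sum(weights[i] for i in sol)
    for i in sorted(set(range(len(values))) - sol,
                    key=lambda i: -values[i] / weights[i]):
        if load + weights[i] <= CAP:
            sol.add(i)
            load += weights[i]
    return sol

def destroy_random(sol):
    """Destroy: drop up to two random items from the incumbent."""
    return sol - set(random.sample(sorted(sol), min(2, len(sol))))

def destroy_worst(sol):
    """Destroy: drop the item with the worst value density."""
    return sol - {min(sol, key=lambda i: values[i] / weights[i])} if sol else sol

def alns(iters=200):
    ops = [destroy_random, destroy_worst]
    scores = [1.0, 1.0]                        # adaptive operator weights
    best = cur = greedy_repair(set())
    best_val = cur_val = sum(values[i] for i in cur)
    for _ in range(iters):
        k = random.choices(range(len(ops)), weights=scores)[0]
        cand = greedy_repair(ops[k](set(cur)))
        val = sum(values[i] for i in cand)
        if val >= cur_val:                     # accept non-worsening moves
            cur, cur_val = cand, val
        if val > best_val:
            best, best_val = cand, val
            scores[k] += 1.0                   # reward the successful operator
        scores[k] *= 0.99                      # decay all used scores slowly
    return best, best_val

best, best_val = alns()
```

The roulette-wheel choice over `scores` is the "adaptive" part: operators that recently improved the best solution are drawn more often. The paper's hybrid replaces the greedy repair with a MIP solver searching the destroyed neighborhood.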

  17. Optical Trapping Techniques Applied to the Study of Cell Membranes

    Science.gov (United States)

    Morss, Andrew J.

    Optical tweezers allow for manipulating micron-sized objects using pN-level optical forces. In this work, we use an optical trapping setup to aid in three separate experiments, all related to the physics of the cellular membrane. In the first experiment, in conjunction with Brian Henslee, we use optical tweezers to allow for precise positioning and control of cells in suspension to evaluate the cell size dependence of electroporation. Theory predicts that all cells porate at a transmembrane potential V_TM of roughly 1 V. The Schwann equation predicts that the transmembrane potential depends linearly on the cell radius r, thus predicting that cells should porate at threshold electric fields that go as 1/r. The threshold field required to induce poration is determined by applying a low voltage pulse to the cell and then applying additional pulses of greater and greater magnitude, checking for poration at each step using propidium iodide dye. We find that, contrary to expectations, cells do not porate at a constant value of the transmembrane potential but at a constant value of the electric field, which we find to be 692 V/cm for K562 cells. Delivering precise dosages of nanoparticles into cells is of importance for assessing toxicity of nanoparticles or for genetic research. In the second experiment, we conduct nano-electroporation—a novel method of applying precise doses of transfection agents to cells—by using optical tweezers in conjunction with a confocal microscope to manipulate cells into contact with 100 nm wide nanochannels. This work was done in collaboration with Pouyan Boukany of Dr. Lee's group. The small cross-sectional area of these nanochannels means that the electric field within them is extremely large, 60 MV/m, which allows them to electrophoretically drive transfection agents into the cell. We find that nano-electroporation results in excellent dose control (to within 10% in our experiments) compared to bulk electroporation. We also find that
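The Schwann relation referenced above is commonly written V_TM = 1.5 * E * r * cos(theta) for a spherical cell at steady state (the 1.5 geometric factor is the standard spherical-cell assumption, not stated in the abstract). Rearranging for the threshold field makes the predicted 1/r scaling explicit:

```python
import math

def threshold_field(radius_m, v_tm=1.0, theta=0.0):
    """Solve V_TM = 1.5 * E * r * cos(theta) for E at the cell pole;
    the poration threshold field predicted this way scales as 1/r."""
    return v_tm / (1.5 * radius_m * math.cos(theta))

E_small = threshold_field(5e-6)    # 5 um cell, in V/m
E_large = threshold_field(10e-6)   # 10 um cell
# Halving the radius doubles the predicted threshold field, which is
# exactly the scaling the experiment tests (and finds violated).
```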

  18. Remote sensing techniques applied to seismic vulnerability assessment

    Science.gov (United States)

    Juan Arranz, Jose; Torres, Yolanda; Hahgi, Azade; Gaspar-Escribano, Jorge

    2016-04-01

    Advances in remote sensing and photogrammetry techniques have increased the degree of accuracy and resolution in the record of the earth's surface. This has expanded the range of possible applications of these data. In this research, we have used these data to document the construction characteristics of the urban environment of Lorca, Spain. An exposure database has been created with the gathered information to be used in seismic vulnerability assessment. To this end, we have used data from photogrammetric flights at different periods, using orthorectified images in both the visible and infrared spectrum. Furthermore, the analysis is completed using LiDAR data. From the combination of these data, it has been possible to delineate the building footprints and characterize the constructions with attributes such as the approximate date of construction, area, type of roof and even building materials. To carry out the calculation, we have developed different algorithms to compare images from different times, segment images, classify LiDAR data, and use the infrared data in order to remove vegetation or to compute roof surfaces with height value, tilt and spectral fingerprint. In addition, the accuracy of our results has been validated with ground truth data. Keywords: LiDAR, remote sensing, seismic vulnerability, Lorca

  19. Applying Website Usability Testing Techniques to Promote E-services

    Directory of Open Access Journals (Sweden)

    Abdel Nasser H. Zaied

    2015-09-01

    Full Text Available In this competitive world, websites are considered to be a key aspect of any organization's competitiveness. In addition to visual esthetics, usability of a website is a strong determinant of users' satisfaction and pleasure. However, lack of appropriate techniques and attributes for measuring usability may constrain the usefulness of a website. To address this issue, we conduct a statistical study to evaluate the usability levels of e-learning and e-training websites based on human (user) perception. The questionnaire is implemented as a user-based tool; visitors of a website can use it to evaluate the usability of the websites. The results showed that, according to the students' point of view, personalization is the most important criterion for the use of e-learning websites, while according to the experts' point of view, accessibility is the most important criterion. The results also indicated that experienced respondents demonstrated satisfaction with the usability attributes of the e-learning websites they accessed for their learning purposes, while inexperienced students expressed their perception of the importance of the usability attributes for accessing e-learning websites. When combining and comparing both findings, it is evident that all the attributes yielded satisfaction and were felt to be important.

  20. Digital prototyping technique applied for redesigning plastic products

    Science.gov (United States)

    Pop, A.; Andrei, A.

    2015-11-01

    After products are on the market for some time, they often need to be redesigned to meet new market requirements. New products are generally derived from similar but outdated products. Redesigning a product is an important part of the production and development process. The purpose of this paper is to show that using modern technology, like Digital Prototyping, in industry is an effective way to produce new products. This paper tries to demonstrate and highlight the effectiveness of the concept of Digital Prototyping, both to reduce the design time of a new product and to reduce the costs required for implementing this step. The results of this paper show that using Digital Prototyping techniques to design a new product from an existing mould available on the market offers a significant reduction in manufacturing time and cost. The ability to simulate and test a new product with modern CAD-CAM programs in all aspects of production (design of the 3D model, simulation of the structural resistance, analysis of the injection process and beautification) offers a helpful tool for engineers. The whole process can be carried out by one skilled engineer very quickly and effectively.

  1. Sterile insect technique applied to Queensland fruit fly

    International Nuclear Information System (INIS)

    The Sterile Insect Technique (SIT) aims to suppress or eradicate pest populations by flooding wild populations with sterile males. To control fruit fly, millions of flies of both sexes are mass reared at the Gosford Post-Harvest laboratory near Sydney, mixed with sawdust and fluorescent dye at the pupal stage and transported to Ansto, where they are exposed to a low dose of 70-75 Gy of gamma radiation from a Cobalt-60 source. Following irradiation the pupae are transported to the release site in plastic sleeves, then transferred to large plastic garbage bins for hatching. These bins are held at 30 deg. C to synchronise hatching, and flies are released 48-72 hours after hatching begins. In most cases these bins are placed among fruit trees in the form of an 800 metre grid. This maximises survival of the emerging flies, which are released on an almost daily basis. Progress of the SIT program is monitored by collecting flies from traps dotted all over the infested site. The ratio of sterile to wild flies can be determined because the sterile flies are coated with the fluorescent dust, which can be seen under ultra-violet light. If the SIT program is successful, entomologists will trap a high proportion of sterile flies to wild flies, and this should result in a clear reduction in maggot infestations. Surveillance, quarantine, and trapping activities continue for 8 or 9 months to check for any surviving pockets of infestation. If any are found, the SIT program is reactivated. These programs demonstrated that SIT is an efficient and environmentally friendly non-chemical control method for eradicating outbreaks or suppressing fruit fly populations in important fruit growing areas.

  2. Semantic Data And Visualization Techniques Applied To Geologic Field Mapping

    Science.gov (United States)

    Houser, P. I. Q.; Royo-Leon, M.; Munoz, R.; Estrada, E.; Villanueva-Rosales, N.; Pennington, D. D.

    2015-12-01

    Geologic field mapping involves the use of technology before, during, and after visiting a site. Geologists utilize hardware such as Global Positioning Systems (GPS) connected to mobile computing platforms such as tablets, with software such as ESRI's ArcPad, to produce maps and figures for a final analysis and report. Hand-written field notes contain important information and drawings or sketches of specific areas within the field study. Our goal is to collect and geo-tag final and raw field data into a cyber-infrastructure environment with an ontology that allows for large data processing, visualization, sharing, and searching, aiding in connecting field research with prior research in the same area and/or aiding experiment replication. Online searches of a specific field area return results such as weather data from NOAA and QuakeML seismic data from USGS. These results can then be saved to a field mobile device and searched while in the field where there is no Internet connection. To accomplish this we created the GeoField ontology service using the Web Ontology Language (OWL) and Protégé software. Advanced queries on the dataset can be made using reasoning capabilities that go beyond those of a standard database service. These improvements include the automated discovery of data relevant to a specific field site and visualization techniques aimed at enhancing analysis and collaboration while in the field by draping data over mobile views of the site using augmented reality. A case study is being performed at the University of Texas at El Paso's Indio Mountains Research Station located near Van Horn, Texas, an active multi-disciplinary field study site. The user can interactively move the camera around the study site and view their data digitally. Geologists can check their data against the site in real-time and improve collaboration with another person, as both parties have the same interactive view of the data.

  3. Adaptive Remote-Sensing Techniques Implementing Swarms of Mobile Agents

    Energy Technology Data Exchange (ETDEWEB)

    Asher, R.B.; Cameron, S.M.; Loubriel, G.M.; Robinett, R.D.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-11-25

    In many situations, stand-off remote-sensing and hazard-interdiction techniques over realistic operational areas are often impractical and difficult to characterize. An alternative approach is to implement an adaptively deployable array of sensitive agent-specific devices. Our group has been studying the collective behavior of an autonomous, multi-agent system applied to chem/bio detection and related emerging threat applications. The current physics-based models we are using coordinate a sensor array for multivariate signal optimization and coverage as realized by a swarm of robots or mobile vehicles. These intelligent control systems integrate globally operating decision-making systems and locally cooperative learning neural networks to enhance real-time operational responses to dynamical environments, examples of which include obstacle avoidance, responding to prevailing wind patterns, and overcoming other natural obscurants or interferences. Collectively, sensor neurons with simple properties, interacting according to basic community rules, can accomplish complex interconnecting functions such as generalization, error correction, pattern recognition, sensor fusion, and localization. Neural nets provide a greater degree of robustness and fault tolerance than conventional systems in that minor variations or imperfections do not impair performance. The robotic platforms would be equipped with sensor devices that perform optical detection of biologicals in combination with multivariate chemical analysis tools based on genetic and neural network algorithms, laser-diode LIDAR analysis, ultra-wideband short-pulsed transmitting and receiving antennas, thermal imaging sensors, and optical communication technology providing robust data throughput pathways. Mission scenarios under consideration include ground penetrating radar (GPR) for detection of underground structures, airborne systems, and plume migration and mitigation. We will describe our

  4. Adaptive Remote-Sensing Techniques Implementing Swarms of Mobile Agents

    Energy Technology Data Exchange (ETDEWEB)

    Cameron, S.M.; Loubriel, G.M.; Rbinett, R.D. III; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1999-04-01

    This paper focuses on our recent work at Sandia National Laboratories toward engineering a physics-based swarm of mobile vehicles for distributed sensing applications. Our goal is to coordinate a sensor array that optimizes sensor coverage and multivariate signal analysis by implementing artificial intelligence and evolutionary computational techniques. These intelligent control systems integrate both globally operating decision-making systems and locally cooperative information-sharing modes using genetically-trained neural networks. Once trained, neural networks have the ability to enhance real-time operational responses to dynamical environments, such as obstacle avoidance, responding to prevailing wind patterns, and overcoming other natural obscurants or interferences (jammers). The swarm realizes a collective set of sensor neurons with simple properties, incorporating interactions based on basic community rules (potential fields) and complex interconnecting functions based on various neural network architectures. Therefore, the swarm is capable of redundant heterogeneous measurements, which furnishes an additional degree of robustness and fault tolerance not afforded by conventional systems, while accomplishing such cognitive tasks as generalization, error correction, pattern recognition, and sensor fusion. The robotic platforms could be equipped with specialized sensor devices including transmit/receive dipole antennas, chemical or biological sniffers in combination with recognition analysis tools, communication modulators, and laser diodes. Our group has been studying the collective behavior of an autonomous, multi-agent system applied to emerging threat applications. To accomplish such tasks, research in the fields of robotics, sensor technology, and swarms is being conducted within an integrated program. 
Mission scenarios under consideration include ground penetrating impulse radar (GPR) for detection of under-ground structures, airborne systems, and plume
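A minimal potential-field sketch of the coordination scheme described in these records - global attraction toward a goal plus pairwise repulsion that spreads the sensor array out - with all gains and the scenario invented for illustration:

```python
import numpy as np

def swarm_step(pos, target, dt=0.1, k_att=1.0, k_rep=0.05):
    """One potential-field update: every agent is attracted to the target
    and repelled by its neighbours, so the swarm converges on the goal
    while keeping sensor coverage spread out."""
    vel = k_att * (target - pos)                   # attractive term
    diff = pos[:, None, :] - pos[None, :, :]       # pairwise offsets
    d2 = (diff ** 2).sum(-1) + np.eye(len(pos))    # +I avoids self-division
    vel += k_rep * (diff / d2[..., None]).sum(axis=1)  # repulsive term
    return pos + dt * vel

rng = np.random.default_rng(5)
pos = rng.random((8, 2))            # 8 agents scattered in the unit square
target = np.array([5.0, 5.0])
for _ in range(100):
    pos = swarm_step(pos, target)
# The swarm centroid homes in on the target; repulsion keeps the
# agents from collapsing onto a single point.
```

Because the pairwise repulsion is antisymmetric, it cancels over the whole swarm: the centroid follows the pure attraction dynamics while individual agents settle into a dispersed formation.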

  5. Beaconless adaptive-optics technique for HEL beam control

    Science.gov (United States)

    Khizhnyak, Anatoliy; Markov, Vladimir

    2016-05-01

    Effective performance of forthcoming laser systems capable of power delivery on a distant target requires an adaptive optics system to correct atmospheric perturbations on the laser beam. The turbulence-induced effects are responsible for beam wobbling, wandering, and intensity scintillation, resulting in degradation of the beam quality and power density on the target. Adaptive optics methods are used to compensate for these negative effects. In turn, operation of the adaptive optics system requires a reference wave, which can be generated by a beacon on the target. This report discusses a beaconless approach for wavefront correction whose performance is based on the detection of target-scattered light. Postprocessing of this scattered light field enables retrieval and detailed characterization of the turbulence-perturbed wavefront - data that are essential to control the adaptive optics module of a high-power laser system.

  6. Correction of respiratory motion for IMRT using aperture adaptive technique and visual guidance: A feasibility study

    International Nuclear Information System (INIS)

    Intensity-modulated radiation therapy (IMRT) utilizes a nonuniform beam profile to deliver precise radiation doses to a tumor while minimizing radiation exposure to surrounding normal tissues. However, intrafraction organ motion distorts the dose distribution and leads to significant dosimetric errors. In this research, we applied an aperture adaptive technique with a visual guiding system to tackle the problem of respiratory motion. A homemade computer program showing a cyclic moving pattern was projected onto the ceiling to visually help patients adjust their respiratory patterns. Once the respiratory motion becomes regular, the leaf sequence can be synchronized with the target motion. An oscillator was employed to simulate the patient's breathing pattern. Two simple fields and one IMRT field were measured to verify the accuracy. Preliminary results showed that after appropriate training, the amplitude and duration of the volunteer's breathing can be well controlled by the visual guiding system. The sharp dose gradient at the edge of the radiation fields was successfully restored. The maximum dosimetric error in the IMRT field was significantly decreased from 63% to 3%. We conclude that the aperture adaptive technique with the visual guiding system can be an inexpensive and feasible alternative without compromising delivery efficiency in clinical practice.

  7. A hybrid adaptive large neighborhood search algorithm applied to a lot-sizing problem

    OpenAIRE

    Muller, Laurent Flindt; Spoorendonk, Simon

    2010-01-01

    This paper presents a hybrid of a general heuristic framework that has been successfully applied to vehicle routing problems and a general purpose MIP solver. The framework uses local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed integer programming solver and its built-in feasibility heuristics are used to search a neighborhood for improving solutions. The general reoptimization approach used for repairing solutions is specifically ...

  8. Experimental Investigation on Adaptive Robust Controller Designs Applied to Constrained Manipulators

    Directory of Open Access Journals (Sweden)

    Marco H. Terra

    2013-04-01

    Full Text Available In this paper, two interlaced studies are presented. The first is directed to the design and construction of a dynamic 3D force/moment sensor. The device is applied to provide a feedback signal of forces and moments exerted by the robotic end-effector. This development has become an alternative solution to the existing multi-axis load cell based on static force and moment sensors. The second one shows an experimental investigation on the performance of four different adaptive nonlinear H∞ control methods applied to a constrained manipulator subject to uncertainties in the model and external disturbances. Coordinated position and force control is evaluated. Adaptive procedures are based on neural networks and fuzzy systems applied in two different modeling strategies. The first modeling strategy requires a well-known nominal model for the robot, so that the intelligent systems are applied only to estimate the effects of uncertainties, unmodeled dynamics and external disturbances. The second strategy considers that the robot model is completely unknown and, therefore, intelligent systems are used to estimate these dynamics. A comparative study is conducted based on experimental implementations performed with an actual planar manipulator and with the dynamic force sensor developed for this purpose.

  9. Assessment of Service Protocols Adaptability Using a Novel Path Computation Technique

    Science.gov (United States)

    Zhou, Zhangbing; Bhiri, Sami; Haller, Armin; Zhuge, Hai; Hauswirth, Manfred

    In this paper we propose a new kind of adaptability assessment that determines whether the service protocols of a requestor and a provider are adaptable, computes their adaptation degree, and identifies conditions that determine when they can be adapted. We also propose a technique that implements this adaptability assessment: (1) we construct a complete adaptation graph that captures all service interactions adaptable between these two service protocols; the emptiness or non-emptiness of this graph indicates whether or not they are adaptable; (2) we propose a novel path computation technique to generate all instance sub-protocols, which reflect valid executions of a particular service protocol, and to derive all instance sub-protocol pairs captured by the complete adaptation graph. An adaptation degree is computed as the ratio between the number of instance sub-protocols captured by these instance sub-protocol pairs with respect to a service protocol and that of this service protocol; (3) and finally we identify a set of conditions based on these instance sub-protocol pairs. A condition is the conjunction of all conditions specified on the transitions of a given pair of instance sub-protocols. This assessment is a comprehensive means of selecting the suitable service protocol among functionally-equivalent candidates according to the requestor's business requirements.
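The adaptation-degree ratio can be illustrated on toy protocols modeled as state machines, with "adaptable" simplified to exact matching of complete message sequences (the paper's adaptation graph and transition conditions are richer); the protocols below are invented for the example:

```python
def paths(proto, state, acc=()):
    """Enumerate all instance sub-protocols: complete message sequences
    from the given state down to a final (transition-less) state."""
    if state not in proto or not proto[state]:
        yield acc
        return
    for msg, nxt in proto[state]:
        yield from paths(proto, nxt, acc + (msg,))

# toy requestor / provider protocols: state -> [(message, next_state)]
requestor = {"s0": [("login", "s1")],
             "s1": [("order", "s2"), ("quote", "s3")]}
provider  = {"p0": [("login", "p1")],
             "p1": [("order", "p2")]}

req_paths = set(paths(requestor, "s0"))
prov_paths = set(paths(provider, "p0"))
adaptable = req_paths & prov_paths          # pairs in the adaptation graph
degree_req = len(adaptable) / len(req_paths)
# The requestor's "quote" execution has no provider counterpart, so the
# adaptation graph is non-empty but the degree is only 0.5.
```

A non-empty intersection plays the role of the non-empty adaptation graph; the ratio of covered instance sub-protocols is the adaptation degree with respect to the requestor.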

  10. Learning Rate Updating Methods Applied to Adaptive Fuzzy Equalizers for Broadband Power Line Communications

    Directory of Open Access Journals (Sweden)

    Ribeiro Moisés V

    2004-01-01

    Full Text Available This paper introduces adaptive fuzzy equalizers with variable step size for broadband power line (PL) communications. Based on delta-bar-delta and local Lipschitz estimation updating rules, and on feedforward and decision feedback approaches, we propose singleton and nonsingleton fuzzy equalizers with variable step size to cope with the intersymbol interference (ISI) effects of PL channels and the severity of the impulse noise generated by appliances and nonlinear loads connected to low-voltage power grids. The computed results show that the convergence rates of the proposed equalizers are higher than the ones attained by the traditional adaptive fuzzy equalizers introduced by J. M. Mendel and his students. Additionally, some interesting BER curves reveal that the proposed techniques are efficient for mitigating the above-mentioned impairments.
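The delta-bar-delta rule the paper adapts can be illustrated on a crisp (non-fuzzy) LMS equalizer: each tap keeps its own step size, grown additively while the gradient sign persists and shrunk multiplicatively when it flips. The channel, noise level, and constants below are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)
symbols = rng.choice([-1.0, 1.0], size=2000)       # training symbols
channel = np.array([1.0, 0.4])                     # two-tap ISI channel
received = np.convolve(symbols, channel)[:2000] + 0.01 * rng.normal(size=2000)

M = 4                                              # equalizer taps
w = np.zeros(M)
mu = np.full(M, 0.01)                              # per-tap step sizes
bar = np.zeros(M)                                  # smoothed gradient
kappa, phi, beta = 5e-4, 0.5, 0.7                  # delta-bar-delta constants

for n in range(M - 1, 2000):
    x = received[n - M + 1:n + 1][::-1]            # tapped delay line
    e = symbols[n] - w @ x                         # training error
    g = e * x                                      # per-tap gradient estimate
    # delta-bar-delta: grow mu where the gradient sign persists,
    # shrink it multiplicatively where the sign flips
    s = g * bar
    mu = np.where(s > 0, mu + kappa, np.where(s < 0, mu * phi, mu))
    bar = beta * bar + (1 - beta) * g
    w += mu * g

final_mse = np.mean([(symbols[n] - w @ received[n - M + 1:n + 1][::-1]) ** 2
                     for n in range(1900, 2000)])
```

The per-tap step sizes are what distinguish this from plain LMS; the paper embeds the same rule inside singleton and nonsingleton fuzzy equalizers.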

  11. An adaptive range-query optimization technique with distributed replicas

    Institute of Scientific and Technical Information of China (English)

    Sayar Ahmet; Pierce Marlon; Fox C.Geoffrey

    2014-01-01

    Replication is an approach often used to speed up the execution of queries submitted to a large dataset. A compile-time/run-time approach is presented for minimizing the response time of 2-dimensional range queries when distributed replicas of a dataset exist. The aim is to partition the query payload (and its range) into subsets and distribute those to the replica nodes in a way that minimizes a client's response time. However, since the query size and the distribution characteristics of the data (data-dense/sparse regions) in varying ranges are not known a priori, performing efficient load balancing and parallel processing over the unpredictable workload is difficult. A technique based on the creation and manipulation of dynamic spatial indexes for query payload estimation in distributed queries is proposed. The effectiveness of this technique is demonstrated on queries for the analysis of archived earthquake-generated seismic data records.
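    The core load-balancing idea, splitting a query range so that each replica node receives roughly equal estimated payload, can be sketched with a greedy cut over a density histogram. This is illustrative only; the paper's estimator uses dynamic spatial indexes, not a precomputed histogram:

```python
def balanced_strips(cell_counts, n_nodes):
    """Split a 1-D projection of the query range into n_nodes strips
    of roughly equal estimated payload.

    cell_counts : estimated number of points per unit-width column
    returns     : column indices where the range is cut
    """
    total = sum(cell_counts)
    target = total / n_nodes          # ideal payload per replica node
    cuts, acc = [], 0.0
    for i, c in enumerate(cell_counts):
        acc += c
        if acc >= target and len(cuts) < n_nodes - 1:
            cuts.append(i + 1)        # cut after this column
            acc = 0.0
    return cuts

# a skewed density: dense and sparse columns alternate
cuts = balanced_strips([1, 9, 1, 9, 1, 9], n_nodes=3)
# → [2, 4], giving three strips of payload 10 each
```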

  12. Adapted strategic planning model applied to small business: a case study in the fitness area

    Directory of Open Access Journals (Sweden)

    Eduarda Tirelli Hennig

    2012-06-01

    Full Text Available Strategic planning is an important management tool in the corporate scenario and should not be restricted to big companies. However, this kind of planning process in small businesses may need special adaptations due to their own characteristics. This paper aims to identify and adapt existing models of strategic planning to the scenario of a small business in the fitness area. Initially, a comparative study among models from different authors is carried out to identify their phases and activities. Then, it is defined which of these phases and activities should be present in a model to be used in a small business. That model was applied to a Pilates studio; it involves the establishment of an organizational identity, an environmental analysis, as well as the definition of strategic goals, strategies and actions to reach them. Finally, benefits to the organization could be identified, as well as hurdles in the implementation of the tool.

  13. A neuro-evolutive technique applied for predicting the liquid crystalline property of some organic compounds

    Science.gov (United States)

    Drăgoi, Elena-Niculina; Curteanu, Silvia; Lisa, Cătălin

    2012-10-01

    A simple self-adaptive version of the differential evolution algorithm was applied for simultaneous architectural and parametric optimization of feed-forward neural networks, used to classify the liquid crystalline property of a series of organic compounds. The developed optimization methodology was called self-adaptive differential evolution neural network (SADE-NN) and has the following characteristics: the base vector used is chosen as the best individual in the current population, two differential terms participate in the mutation process, the crossover type is binomial, a simple self-adaptive mechanism is employed to determine the near-optimal control parameters of the algorithm, and the integration of the neural network into the differential evolution algorithm is performed using a direct encoding scheme. It was found that a network with one hidden layer is able to make accurate predictions, indicating that the proposed methodology is efficient and, owing to its flexibility, it can be applied to a large range of problems.
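    The differential evolution variant the abstract enumerates (best base vector, two difference terms, binomial crossover, self-adapting control parameters) corresponds to a DE/best/2/bin loop. A minimal sketch on a toy objective; the resampling probabilities and ranges below are assumptions in the style of jDE, not the paper's exact mechanism, and the paper evolves neural-network weights rather than a test function:

```python
import numpy as np

def de_best_2_bin(f, bounds, pop_size=20, gens=150, seed=0):
    """DE/best/2/bin with simple self-adaptive F and CR: the base
    vector is the best individual, two difference terms enter the
    mutation, and crossover is binomial."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    F = np.full(pop_size, 0.5)           # per-individual mutation factor
    CR = np.full(pop_size, 0.9)          # per-individual crossover rate
    for _ in range(gens):
        best = pop[fit.argmin()]
        for i in range(pop_size):
            # self-adaptation: occasionally resample control parameters
            if rng.random() < 0.1:
                F[i] = 0.1 + 0.9 * rng.random()
            if rng.random() < 0.1:
                CR[i] = rng.random()
            r = rng.choice(pop_size, 4, replace=False)
            # DE/best/2 mutation: best vector plus two difference terms
            v = best + F[i] * (pop[r[0]] - pop[r[1]]) \
                     + F[i] * (pop[r[2]] - pop[r[3]])
            v = np.clip(v, lo, hi)
            # binomial crossover with at least one component from v
            mask = rng.random(dim) < CR[i]
            mask[rng.integers(dim)] = True
            u = np.where(mask, v, pop[i])
            fu = f(u)
            if fu <= fit[i]:             # greedy selection
                pop[i], fit[i] = u, fu
    return pop[fit.argmin()], fit.min()

x_best, f_best = de_best_2_bin(lambda x: float(np.sum(x**2)), [(-5, 5)] * 3)
```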

  14. Adaptive Ant Colony Clustering Method Applied to Finding Closely Communicating Community

    Directory of Open Access Journals (Sweden)

    Yan Liu

    2012-02-01

    Full Text Available The investigation of community structures in networks is an important issue in many domains and disciplines. A closely communicating community differs from the traditional community, which places particular emphasis on structure or context. Our previous method emphasized the feasibility of applying an ant colony algorithm to community detection; however, the essence of the closely communicating community was not described clearly. In this paper, the definition of the closely communicating community is first put forward, four features are described, and corresponding methods are introduced to compute the feature values between each pair. Meanwhile, pair propinquity and local propinquity are put forward and used to guide the ants' decisions. Based on the previous work, the closely communicating community detection method is improved in four aspects of adaptive adjusting: entropy-based weight modulation, combining historical paths and random wandering to select the next coordinate, a strategy of forced unloading, and adaptive change of an ant's eyesight. The selection of parameter values is discussed in the experiments section, and the results also reveal the improvement of our algorithm in adaptive adjusting.

  15. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques

    Science.gov (United States)

    Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E.; Lo, Yeh-Chi

    2016-04-01

    We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to those with the AS based signals. The average errors for the enrolled patients between the estimated breath per minute (bpm) and the reference waveform bpm can be as low as  -0.07 with the standard deviation 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering markerless breathing signal using the CBCT projections for thoracic and abdominal patients.
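    The "robust z-normalization" step described above can be illustrated with a generic robust z-score: center each column of the AS image by its median and scale by the median absolute deviation (MAD), so weak periodic (breathing) structure is not swamped by outliers. This is a generic sketch of robust z-scoring, not the paper's exact adaptive filter:

```python
import numpy as np

def robust_z(col):
    """Robust z-normalization of one Amsterdam Shroud image column:
    center by the median, scale by the MAD (falling back to 1 when
    the column is constant)."""
    med = np.median(col)
    mad = np.median(np.abs(col - med)) or 1.0
    return (col - med) / mad

col = np.array([10.0, 11.0, 9.0, 10.5, 50.0])   # one bright outlier
z = robust_z(col)
# the outlier stands out strongly while typical samples stay near 0
```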

  16. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques.

    Science.gov (United States)

    Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi

    2016-04-21

    We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to those with the AS based signals. The average errors for the enrolled patients between the estimated breath per minute (bpm) and the reference waveform bpm can be as low as -0.07 with the standard deviation 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering markerless breathing signal using the CBCT projections for thoracic and abdominal patients. PMID:27008349

  17. My Solar System: A Developmentally Adapted Eco-Mapping Technique for Children

    Science.gov (United States)

    Curry, Jennifer R.; Fazio-Griffith, Laura J.; Rohr, Shannon N.

    2008-01-01

    Counseling children requires specific skills and techniques, such as play therapy and expressive arts, to address developmental manifestations and to facilitate the understanding of presenting problems. This article outlines an adapted eco-mapping activity that can be used as a creative counseling technique with children in order to promote…

  18. An adaptive laser beam shaping technique based on a genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    Ping Yang; Yuan Liu; Wei Yang; Minwu Ao; Shijie Hu; Bing Xu; Wenhan Jiang

    2007-01-01

    A new adaptive beam intensity shaping technique based on the combination of a 19-element piezoelectric deformable mirror (DM) and a global genetic algorithm is presented. This technique can adaptively adjust the voltages of the 19 actuators on the DM to reduce the difference between the target beam shape and the actual beam shape. Numerical simulations and experimental results show that, within the stroke range of the DM, this technique can be used to create given beam intensity profiles on the focal plane.
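    The optimization loop above can be sketched as a toy genetic algorithm over the 19 actuator voltages. The linear "influence matrix" model of the DM below is a stand-in assumption (real mirror response and far-field propagation are nonlinear), and tournament selection, uniform crossover and Gaussian mutation are generic GA operators, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
n_act = 19
influence = rng.normal(size=(64, n_act))   # hypothetical linear DM response
true_v = rng.uniform(-1, 1, n_act)
target = influence @ true_v                # a profile known to be reachable

def fitness(v):
    # negative squared difference between produced and target profiles
    return -np.sum((influence @ v - target) ** 2)

def select(pop, fit):
    i, j = rng.integers(len(pop), size=2)  # binary tournament
    return pop[i] if fit[i] > fit[j] else pop[j]

def ga(pop_size=40, gens=300, sigma=0.05):
    pop = rng.uniform(-1, 1, (pop_size, n_act))
    for _ in range(gens):
        fit = np.array([fitness(p) for p in pop])
        new = [pop[fit.argmax()].copy()]   # elitism: keep the best
        while len(new) < pop_size:
            child = np.where(rng.random(n_act) < 0.5,
                             select(pop, fit), select(pop, fit))  # uniform crossover
            new.append(child + rng.normal(0, sigma, n_act))       # Gaussian mutation
        pop = np.array(new)
    fit = np.array([fitness(p) for p in pop])
    return pop[fit.argmax()], fit.max()

v_best, f_best = ga()
# the evolved voltages reproduce the target far better than a flat mirror
```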

  19. Applying Analytical Derivative and Sparse Matrix Techniques to Large-Scale Process Optimization Problems

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The performance of analytical derivative and sparse matrix techniques applied to a traditional dense sequential quadratic programming (SQP) is studied, and the strategy utilizing those techniques is also presented. Computational results on two typical chemical optimization problems demonstrate significant enhancement in efficiency, which shows this strategy is promising and suitable for large-scale process optimization problems.

  20. Applying Analytical Derivative and Sparse Matrix Techniques to Large-Scale Process Optimization Problems

    Institute of Scientific and Technical Information of China (English)

    钟卫涛; 邵之江; 张余岳; 钱积新

    2000-01-01

    The performance of analytical derivative and sparse matrix techniques applied to a traditional dense sequential quadratic programming (SQP) is studied, and the strategy utilizing those techniques is also presented.Computational results on two typical chemical optimization problems demonstrate significant enhancement in efficiency, which shows this strategy is promising and suitable for large-scale process optimization problems.

  1. Applying BI Techniques To Improve Decision Making And Provide Knowledge Based Management

    Directory of Open Access Journals (Sweden)

    Alexandra Maria Ioana FLOREA

    2015-07-01

    Full Text Available The paper focuses on BI techniques, and especially data mining algorithms, that can support and improve the decision making process, with applications within the financial sector. We consider data mining techniques to be the more efficient, and thus we applied several supervised and unsupervised learning algorithms. The case study in which these algorithms have been implemented regards the activity of a banking institution, with a focus on the management of lending activities.

  2. (Costing) The adaptation of product cost estimation techniques to estimate the cost of service.

    OpenAIRE

    Huang, Estelle; Newnes, Linda B; Parry, Glenn

    2011-01-01

    Abstract This paper presents an approach to ascertain whether product cost estimating techniques can be adapted for use in estimating the costs of providing a service. The research methodology adopted consists of a critique and analysis of the literature to ascertain how current cost estimation techniques are used. The analysis of the cost estimation techniques provides knowledge of cost estimation, in particular for products and services, with advantages and drawbacks defined. Th...

  3. Research on key techniques of virtual reality applied in mining industry

    Institute of Scientific and Technical Information of China (English)

    LIAO Jun; LU Guo-bin

    2009-01-01

    Based on the applications of virtual reality technology in many fields, this paper introduces the basic concepts, structure types and related technical developments of virtual reality, and summarizes the applications of virtual reality techniques in the present mining industry. It inquires into the related core software and hardware techniques, especially the optimization of the setup of various 3D model techniques, and realizes a real-time virtual scene walk-through using stereoscopic display techniques. It then puts forward a software and hardware virtual reality solution for the mining industry that can satisfy demands of different aspects and levels. Finally, it shows a fine prospect for virtual reality techniques applied in the mining industry.

  4. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    Science.gov (United States)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  5. Comparison of different automatic adaptive threshold selection techniques for estimating discharge from river width

    Science.gov (United States)

    Elmi, Omid; Javad Tourian, Mohammad; Sneeuw, Nico

    2015-04-01

    The importance of river discharge monitoring is critical for, e.g., water resource planning, climate change and hazard monitoring. River discharge has been measured at in situ gauges for more than a century. Despite various attempts, some basins are still ungauged. Moreover, a reduction in the number of worldwide gauging stations increases the interest in employing remote sensing data for river discharge monitoring. Finding an empirical relationship between simultaneous in situ measurements of discharge and river widths derived from satellite imagery has been introduced as a straightforward remote sensing alternative. Classifying water and land in an image is the primary task for defining the river width. Water appears dark in the near-infrared and infrared bands of satellite images; as a result, low values in the histogram usually represent the water content. Applying a threshold to the image histogram and separating it into two different classes is therefore one of the most efficient techniques for building a water mask. Despite its simple definition, finding the appropriate threshold value for each image is the most critical issue. The threshold varies due to changes in the water level, river extent, atmosphere, sunlight radiation and onboard calibration of the satellite over time. These complexities in water body classification are the main source of error in river width estimation. In this study, we look for the most efficient adaptive threshold algorithm to estimate river discharge. To do this, all cloud-free MODIS images coincident with the in situ measurements are collected. Next, a number of automatic threshold selection techniques are employed to generate different dynamic water masks. Then, for each of them, a separate empirical relationship between river widths and discharge measurements is determined. Through these empirical relationships, we estimate river discharge at the gauge and then validate our results against in situ measurements and also
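    The record does not name the threshold selection techniques compared, but Otsu's method is the standard example of automatic histogram thresholding and illustrates the idea: pick the gray level that maximizes the between-class variance, separating the dark (water) mode from the bright (land) mode. A minimal sketch on a toy bimodal histogram:

```python
import numpy as np

def otsu_threshold(hist):
    """Otsu's method: choose the threshold maximizing between-class
    variance of a gray-level histogram. Levels <= t form class 0."""
    hist = np.asarray(hist, dtype=float)
    p = hist / hist.sum()
    levels = np.arange(len(p))
    w0 = np.cumsum(p)                  # probability of class 0 at each t
    mu = np.cumsum(p * levels)         # cumulative first moment
    mu_t = mu[-1]                      # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(p)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return int(np.argmax(between))

# toy histogram: dark water pixels near level 2, land pixels near level 11
hist = [0, 5, 40, 5, 0, 0, 0, 0, 0, 0, 5, 40, 5, 0, 0, 0]
t = otsu_threshold(hist)
# t falls in the valley between the two modes
```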

  6. Adaptive Pointing Design and Evaluation of a Precision Enhancing Technique for Absolute Pointing Devices

    OpenAIRE

    König, Werner A.; Gerken, Jens; Dierdorf, Stefan; Reiterer, Harald

    2009-01-01

    We present Adaptive Pointing, a novel approach to addressing the common problem of accuracy when using absolute pointing devices for distant interaction. First, we discuss extensively some related work concerning the problem-domain of pointing accuracy when using absolute or relative pointing devices. As a result, we introduce a novel classification scheme to more clearly discriminate between different approaches. Second, the Adaptive Pointing technique is presented and described in detail. ...

  7. Investigation about the efficiency of the bioaugmentation technique when applied to diesel oil contaminated soils

    Directory of Open Access Journals (Sweden)

    Adriano Pinto Mariano

    2009-10-01

    Full Text Available This work investigated the efficiency of the bioaugmentation technique when applied to diesel oil contaminated soils collected at three service stations. Batch biodegradation experiments were carried out in Bartha biometer flasks (250 mL) used to measure microbial CO2 production. Biodegradation efficiency was also measured by quantifying the concentration of hydrocarbons. In addition to the biodegradation experiments, the capability of the studied cultures and the native microorganisms to biodegrade diesel oil purchased from a local service station was verified using a technique based on the redox indicator 2,6-dichlorophenol indophenol (DCPIP). Results obtained with this test showed that the inocula used in the biodegradation experiments were able to degrade the diesel oil, and the tests carried out with the native microorganisms indicated that these soils had a microbiota adapted to degrade the hydrocarbons. In general, no gain was obtained with the addition of microorganisms, and in some cases negative effects were observed in the biodegradation experiments.

  8. Adaptive and model-based control theory applied to convectively unstable flows

    CERN Document Server

    Fabbiane, N; Bagheri, S; Henningson, D S

    2014-01-01

    Research on active control for the delay of laminar-turbulent transition in boundary layers has made significant progress in the last two decades, but the employed strategies have been many and dispersed. Using one framework, we review model-based techniques, such as linear-quadratic regulators, and model-free adaptive methods, such as least-mean-square filters. The former are supported by an elegant and powerful theoretical basis, whereas the latter may provide a more practical approach in the presence of complex disturbance environments that are difficult to model. We compare the methods with a particular focus on efficiency, practicability and robustness to uncertainties. Each step is exemplified on the one-dimensional linearized Kuramoto-Sivashinsky equation, which shows many similarities with the initial linear stages of the transition process of the flow over a flat plate. Also, the source code for the examples is provided.
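    The least-mean-square (LMS) filter named above is the canonical model-free adaptive method: the weights are nudged along the negative instantaneous gradient of the squared error. A minimal sketch identifying an unknown FIR system from noiseless input/output data (the flow-control application itself is not reproduced here):

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.05):
    """LMS adaptive filter: w tracks the unknown system producing d
    from x by the stochastic-gradient update w += mu * e * x_n."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]   # newest sample first
        e = d[n] - w @ xn                    # a-priori error
        w += mu * e * xn                     # stochastic-gradient step
    return w

rng = np.random.default_rng(0)
h = np.array([0.8, -0.4, 0.2, 0.1])          # unknown FIR system to identify
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]               # noiseless desired signal
w = lms(x, d)
# w converges to h (exactly, since the data are noiseless)
```

The step size mu must satisfy roughly mu < 2 / (tap-input power) for stability; here the input is unit-variance white noise, so mu = 0.05 is well inside the stable range.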

  9. Constrained Optimization Based on Hybrid Evolutionary Algorithm and Adaptive Constraint-Handling Technique

    DEFF Research Database (Denmark)

    Wang, Yong; Cai, Zixing; Zhou, Yuren;

    2009-01-01

    mutation operators to generate the offspring population. Additionally, the adaptive constraint-handling technique consists of three main situations. In detail, at each situation, one constraint-handling mechanism is designed based on current population state. Experiments on 13 benchmark test functions...... and four well-known constrained design problems verify the effectiveness and efficiency of the proposed method. The experimental results show that integrating the hybrid evolutionary algorithm with the adaptive constraint-handling technique is beneficial, and the proposed method achieves competitive...

  10. An efficient Video Segmentation Algorithm with Real time Adaptive Threshold Technique

    Directory of Open Access Journals (Sweden)

    Yasira Beevi C P

    2009-12-01

    Full Text Available Automatic video segmentation plays an important role in real-time MPEG-4 encoding systems. This paper presents a video segmentation algorithm for MPEG-4 camera systems based on change detection, background registration techniques and real-time adaptive threshold techniques. The algorithm gives satisfying segmentation results with a low computation load. Besides, it has a shadow cancellation mode, which can deal with light-changing and shadow effects. Furthermore, the algorithm implements real-time adaptive threshold techniques by which the parameters can be decided automatically.
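    The change-detection plus background-registration pipeline can be sketched in a few lines. The threshold rule below (mean plus a multiple of the standard deviation of the difference image, recomputed per frame) is one common way to make the threshold adaptive; the paper's exact rule and shadow cancellation mode are not reproduced:

```python
import numpy as np

def segment(frame, background, k=2.5):
    """Change detection with an adaptive threshold that tracks the
    statistics of the current difference image."""
    diff = np.abs(frame.astype(float) - background)
    t = diff.mean() + k * diff.std()   # adaptive threshold, per frame
    return diff > t                    # True = foreground (moving object)

def update_background(background, frame, mask, alpha=0.05):
    """Background registration: blend stationary pixels into the
    background model; leave detected foreground pixels untouched."""
    bg = background.astype(float)
    bg[~mask] = (1 - alpha) * bg[~mask] + alpha * frame[~mask]
    return bg

bg = np.zeros((8, 8))
frame = bg.copy()
frame[2:4, 2:4] = 200.0                # a small bright moving object
mask = segment(frame, bg)              # flags exactly the object pixels
bg2 = update_background(bg, frame, mask)
```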

  11. Frequency and Spatial Domains Adaptive-based Enhancement Technique for Thermal Infrared Images

    Directory of Open Access Journals (Sweden)

    Debasis Chaudhuri

    2014-09-01

    Full Text Available A low-contrast and noisy image limits the amount of information conveyed to the user. With the proliferation of digital imagery and computer interfaces between man and machine, it is now viable to consider digitally enhancing the image before presenting it to the user, thus increasing the information throughput. With better contrast, target detection and discrimination can be improved. The paper presents a sequence of filtering operations in the frequency and spatial domains to improve the quality of thermal infrared (IR) images. Basically, two filters, a homomorphic filter followed by an adaptive Gaussian filter, are applied to improve the quality of the thermal IR images. We have systematically evaluated the algorithm on a variety of images and carefully compared it with the techniques presented in the literature. We performed an evaluation of three filter banks, homomorphic, Gaussian 5×5 and the proposed method, and found that the proposed method yields the optimal PSNR for all the thermal images. The results demonstrate that the proposed algorithm is efficient for the enhancement of thermal IR images. Defence Science Journal, Vol. 64, No. 5, September 2014, pp. 451-457, DOI: http://dx.doi.org/10.14429/dsj.64.6873
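    A homomorphic filter works in the log domain, where illumination (low frequency) and reflectance (high frequency) become additive: attenuating the former while boosting the latter compresses dynamic range and sharpens detail. A generic sketch with a Gaussian-shaped high-frequency emphasis filter; the gain and cutoff parameters are illustrative, not the paper's:

```python
import numpy as np

def homomorphic(img, gamma_l=0.5, gamma_h=2.0, c=1.0, d0=10.0):
    """Homomorphic filtering: log -> frequency-domain high-frequency
    emphasis (gain gamma_l at DC rising to gamma_h) -> exp."""
    z = np.log1p(img.astype(float))
    Z = np.fft.fft2(z)
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None] * rows
    v = np.fft.fftfreq(cols)[None, :] * cols
    d2 = u ** 2 + v ** 2                              # squared distance from DC
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * d2 / d0 ** 2)) + gamma_l
    out = np.expm1(np.real(np.fft.ifft2(H * Z)))
    return np.clip(out, 0, None)

img = np.outer(np.linspace(1, 50, 32), np.ones(32))  # smooth illumination ramp
flat = homomorphic(img)
# the slowly varying illumination gradient is compressed
```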

  12. Data Mining E-protokol - Applying data mining techniques on student absence

    OpenAIRE

    Shrestha, Amardip; Bro Lilleås, Lauge; Hansen, Asbjørn

    2014-01-01

    The scope of this project is to explore the possibilities of applying data mining techniques to discover new knowledge about student absenteeism in primary school. The research consists in analyzing a large dataset collected through the digital protocol system E-protokol. The data mining techniques used for the analysis involve clustering, classification and association rule mining, which are applied using the machine learning toolset WEKA. The findings include a number of suggestions ...

  13. Development of Promising Insulating Oil and Applied Techniques of EHD, ER·MR

    Science.gov (United States)

    Hanaoka, Ryoichi

    The development of an environment-friendly insulating liquid has attracted attention for the new design of oil-filled power apparatus such as transformers, from the viewpoint of environmental protection. Dielectric liquids can also be widely applied to various fields concerned with electromagnetic fields. This article introduces the recent trend in promising new vegetable-based oils as electrical insulation, and EHD pumping, ER fluids and MR fluids as applied techniques of dielectric liquids.

  14. Streamline upwind finite element method using 6-node triangular element with adaptive remeshing technique for convective-diffusion problems

    Institute of Scientific and Technical Information of China (English)

    Niphon Wansophark; Pramote Dechaumphai

    2008-01-01

    A streamline upwind finite element method using a 6-node triangular element is presented. The method is applied to the convection term of the governing transport equation directly along local streamlines. Several convective-diffusion examples are used to evaluate the efficiency of the method. Results show that the method is monotonic and does not produce any oscillation. In addition, an adaptive meshing technique is combined with the method to further increase the accuracy of the solution and, at the same time, to minimize computational time and computer memory requirements.

  15. Raviart–Thomas-type sources adapted to applied EEG and MEG: implementation and results

    International Nuclear Information System (INIS)

    This paper studies numerically electroencephalography and magnetoencephalography (EEG and MEG), two non-invasive imaging modalities in which external measurements of the electric potential and the magnetic field are, respectively, utilized to reconstruct the primary current density (neuronal activity) of the human brain. The focus is on adapting a Raviart–Thomas-type source model to meet the needs of EEG and MEG applications. The goal is to construct a model that provides an accurate approximation of dipole source currents and can be flexibly applied to different reconstruction strategies as well as to realistic computation geometries. The finite element method is applied in the simulation of the data. Least-squares fit interpolation is used to establish Cartesian source directions, which guarantee that the recovered current field is minimally dependent on the underlying finite element mesh. Implementation is explained in detail and made accessible, e.g., by using quadrature-free formulae and the Gaussian one-point rule in numerical integration. Numerical results are presented concerning, for example, the iterative alternating sequential inverse algorithm as well as resolution, smoothness and local refinement of the finite element mesh. Both spherical and pseudo-realistic head models, as well as real MEG data, are utilized in the numerical experiments. (paper)

  16. A SELF-ADAPTIVE TECHNIQUE FOR A KIND OF NONLINEAR CONJUGATE GRADIENT METHODS

    Institute of Scientific and Technical Information of China (English)

    王丽平

    2004-01-01

    Conjugate gradient methods are an important class of methods for unconstrained optimization, especially when the dimension is large. In 2001, Dai and Liao proposed a new conjugacy condition, and based on it two nonlinear conjugate gradient methods were constructed. Using the trust region idea, this paper gives a self-adaptive technique for the two methods. The numerical results show that this technique works well for the given nonlinear optimization test problems.
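    For orientation, the Dai–Liao conjugacy condition referred to here is commonly stated as follows, with $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$ and a parameter $t \ge 0$ (the self-adaptive technique concerns the choice of such parameters; whether the paper uses exactly this form is not shown in the record):

```latex
% Dai--Liao (2001) conjugacy condition and the induced update
d_{k+1}^{\top} y_k = -t\, g_{k+1}^{\top} s_k, \qquad t \ge 0,
\qquad
\beta_k^{DL} = \frac{g_{k+1}^{\top}\left(y_k - t\, s_k\right)}{d_k^{\top} y_k},
\qquad
d_{k+1} = -g_{k+1} + \beta_k^{DL}\, d_k .
```

Setting $t = 0$ recovers the classical Hestenes–Stiefel formula, so the condition generalizes standard conjugacy.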

  17. Strategies and techniques of communication and public relations applied to non-profit sector

    Directory of Open Access Journals (Sweden)

    Ioana – Julieta Josan

    2010-05-01

    Full Text Available The aim of this paper is to summarize the strategies and techniques of communication and public relations applied to the non-profit sector. The approach of the paper is to identify the most appropriate strategies and techniques that the non-profit sector can use to accomplish its objectives, to highlight specific differences between the strategies and techniques of the profit and non-profit sectors, and to identify potential communication and public relations actions in order to increase visibility among the target audience, create brand awareness and turn the target perception of the non-profit sector into positive brand sentiment.

  18. Improving skill development: an exploratory study comparing a philosophical and an applied ethical analysis technique

    Science.gov (United States)

    Al-Saggaf, Yeslam; Burmeister, Oliver K.

    2012-09-01

    This exploratory study compares and contrasts two types of critical thinking techniques: one philosophical, the other an applied ethical analysis technique. The two techniques are used to analyse an ethically challenging situation involving ICT raised in a recent media article, to demonstrate their ability to develop the ethical analysis skills of ICT students and professionals. In particular, the skill development focuses on: being able to recognise ethical challenges and formulate coherent responses; distancing oneself from subjective judgements; developing ethical literacy; identifying stakeholders; and communicating the ethical decisions made, to name a few.

  19. Goal-based angular adaptivity applied to the spherical harmonics discretisation of the neutral particle transport equation

    International Nuclear Information System (INIS)

    Highlights:
    • A variable order spherical harmonics scheme is presented.
    • An adaptive process is proposed to automatically refine the angular resolution.
    • A regular error estimator and a goal-based error estimator are presented.
    • The adaptive methods are applied to fixed source and eigenvalue problems.
    • Adaptive methods give more accurate solutions than uniform angular resolution.
    Abstract: A variable order spherical harmonics scheme has been described and employed for the solution of the neutral particle transport equation. The scheme is specifically described with application within the inner-element sub-grid scale finite element spatial discretisation. The angular resolution is variable across both the spatial and energy dimensions. That is, the order of the spherical harmonic expansion may differ at each node of the mesh for each energy group. The variable order scheme has been used to develop adaptive methods for the angular resolution of the particle transport phase-space. Two types of adaptive method have been developed and applied to examples. The first is regular adaptivity, in which the error in the solution over the entire domain is minimised. The second is goal-based adaptivity, in which the error in a specified functional is minimised. The methods were applied to fixed source and eigenvalue examples. Both methods demonstrate an improved accuracy for a given number of degrees of freedom in the angular discretisation

  20. Adaptations in physiology and propulsion techniques during the initial phase of learning manual wheelchair propulsion

    NARCIS (Netherlands)

    de Groot, S; Veeger, H E J; Hollander, A P; van der Woude, L H V

    2003-01-01

    OBJECTIVE: The purpose of this study was to analyze adaptations in gross mechanical efficiency and wheelchair propulsion technique in novice able-bodied subjects during the initial phase of learning hand-rim wheelchair propulsion. DESIGN: Nine able-bodied subjects performed three 4-min practice bloc

  1. Improving operating room efficiency by applying bin-packing and portfolio techniques to surgical case scheduling

    NARCIS (Netherlands)

    Houdenhoven, van M.; Oostrum, van J.M.; Hans, E.W.; Wullink, G.; Kazemier, G.

    2013-01-01

    BACKGROUND: An operating room (OR) department has adopted an efficient business model and subsequently investigated how efficiency could be further improved. The aim of this study is to show the efficiency improvement of lowering organizational barriers and applying advanced mathematical techniques.
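
    The record mentions bin-packing techniques for surgical case scheduling. As an editor-added illustration (not the authors' model), a first-fit-decreasing heuristic packs case durations into OR-days of fixed capacity; the durations and capacity below are hypothetical:

```python
# Illustrative first-fit-decreasing (FFD) bin packing for OR scheduling:
# assign surgical case durations (minutes) to operating-room days of a
# fixed capacity. A generic heuristic sketch, not the paper's method.

def ffd_schedule(case_durations, capacity):
    """Return a list of OR-days (bins), each a list of case durations."""
    rooms = []
    for d in sorted(case_durations, reverse=True):
        for room in rooms:
            if sum(room) + d <= capacity:
                room.append(d)   # fits in an already-open OR-day
                break
        else:
            rooms.append([d])    # open a new OR-day
    return rooms

rooms = ffd_schedule([120, 90, 240, 60, 180, 45, 300], capacity=480)
```

    Sorting cases longest-first before packing is what distinguishes FFD from plain first-fit and typically yields fewer OR-days.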

  2. Applying Modern Techniques and Carrying Out English Extracurricular Activities: On the Model United Nations Activity

    Institute of Scientific and Technical Information of China (English)

    XuXiaoyu; WangJian

    2004-01-01

    This paper is an introduction to the extracurricular activity of the Model United Nations in Northwestern Polytechnical University (NPU), focusing on the application of modern techniques in the activity and the pedagogical theories applied in it. An interview and questionnaire research will reveal the influence of the Model United Nations.

  3. Difficulties applying recent blind source separation techniques to EEG and MEG

    CERN Document Server

    Knuth, Kevin H

    2015-01-01

    High temporal resolution measurements of human brain activity can be performed by recording the electric potentials on the scalp surface (electroencephalography, EEG), or by recording the magnetic fields near the surface of the head (magnetoencephalography, MEG). The analysis of the data is problematic due to the fact that multiple neural generators may be simultaneously active and the potentials and magnetic fields from these sources are superimposed on the detectors. It is highly desirable to un-mix the data into signals representing the behaviors of the original individual generators. This general problem is called blind source separation and several recent techniques utilizing maximum entropy, minimum mutual information, and maximum likelihood estimation have been applied. These techniques have had much success in separating signals such as natural sounds or speech, but appear to be ineffective when applied to EEG or MEG signals. Many of these techniques implicitly assume that the source distributions hav...
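
    The record discusses blind source separation of superimposed EEG/MEG signals. As an editor-added sketch of the generic two-channel idea (not the specific maximum entropy or maximum likelihood methods the record critiques), whitening reduces the problem to finding a rotation, which can be chosen by maximising a kurtosis-based contrast; the mixing matrix and sources below are synthetic:

```python
import numpy as np

# Two-channel blind source separation sketch: whiten the mixtures, then
# grid-search the rotation angle maximising a squared-kurtosis contrast,
# as in simple ICA. Sources and mixing matrix are synthetic examples.

tt = np.linspace(0, 1, 2000, endpoint=False)
s1 = np.sin(2 * np.pi * 3 * tt)                 # 3 Hz sine source
s2 = (5 * tt) % 1.0 - 0.5                       # 5 Hz sawtooth source
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.5], [0.7, 1.0]])          # hypothetical mixing matrix
X = A @ S                                       # observed channel mixtures

X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = np.diag(d ** -0.5) @ E.T @ X                # whitened mixtures

def contrast(theta):
    c, s = np.cos(theta), np.sin(theta)
    Y = np.array([[c, s], [-s, c]]) @ Z
    return np.sum(((Y ** 4).mean(axis=1) - 3.0) ** 2)  # squared excess kurtosis

thetas = np.linspace(0, np.pi, 360, endpoint=False)
best_theta = thetas[np.argmax([contrast(th) for th in thetas])]
c, s = np.cos(best_theta), np.sin(best_theta)
Y = np.array([[c, s], [-s, c]]) @ Z             # recovered source estimates
```

    The recovered rows match the sources up to sign and permutation. The record's point is that real EEG/MEG violates the assumptions (stationary, fixed sources) that make this toy case work.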

  4. How to Apply Student-centered Teaching Techniques in a Large Class

    Institute of Scientific and Technical Information of China (English)

    李焱

    2008-01-01

    It is very common to have a class of 50 or more students in Chinese schools, and teaching a foreign language effectively to a large class is really hard work. In order to change the teacher-centered teaching model into a student-centered one, teachers should keep students' needs, interests, and learning styles in mind, apply several kinds of teaching techniques, organize different classroom activities, and encourage, praise, and appreciate both students' success and their learning process all the time. If teachers place more responsibility in the hands of students and serve as a "presenter or facilitator of knowledge" instead of the "source of all knowledge", they can greatly motivate students to learn the language in a very active, cooperative and effective way. After all, people learn by doing, not only by watching and listening.

  5. Assessment of Multi-Joint Coordination and Adaptation in Standing Balance: A Novel Device and System Identification Technique.

    Science.gov (United States)

    Engelhart, Denise; Schouten, Alfred C; Aarts, Ronald G K M; van der Kooij, Herman

    2015-11-01

    The ankles and hips play an important role in maintaining standing balance, and the coordination between joints adapts with task and conditions, such as the disturbance magnitude and type, and changes with age. Assessment of multi-joint coordination requires the application of multiple continuous and independent disturbances and closed-loop system identification techniques (CLSIT). This paper presents a novel device, the double inverted pendulum perturbator (DIPP), which can apply disturbing forces at the hip level and between the shoulder blades. In addition to the disturbances, the device can provide force fields to study adaptation of multi-joint coordination. The performance of the DIPP and a novel CLSIT was assessed by identifying a system with known mechanical properties and by model simulations. A double inverted pendulum was successfully identified, while force fields were able to keep the pendulum upright. The estimated dynamics were similar to the theoretically derived dynamics. The DIPP has a sufficient bandwidth of 7 Hz to identify multi-joint coordination dynamics. An experiment with human subjects in which a stabilizing force field was rendered at the hip (1500 N/m) showed that subjects adapt by lowering their control actions around the ankles. The stiffness from upper and lower segment motion to ankle torque dropped by 30% and 48%, respectively. Our methods allow the study of (pathological) changes in multi-joint coordination as well as the adaptive capacity to maintain standing balance. PMID:25423654

  6. Applying stakeholder Delphi techniques for planning sustainable use of aquatic resources

    DEFF Research Database (Denmark)

    Lund, Søren; Banta, Gary Thomas; Bunting, Stuart W

    2015-01-01

    The HighARCS (Highland Aquatic Resources Conservation and Sustainable Development) project was a participatory research effort to map and better understand the patterns of resource use and livelihoods of communities who utilize highland aquatic resources in five sites across China, India and Vietnam. The purpose of this paper is to give an account of how the stakeholder Delphi method was adapted and applied to support the participatory integrated action planning for sustainable use of aquatic resources facilitated within the HighARCS project. An account of the steps taken and results recorded...

  7. Dual Adaptive Filtering by Optimal Projection Applied to Filter Muscle Artifacts on EEG and Comparative Study

    Directory of Open Access Journals (Sweden)

    Samuel Boudet

    2014-01-01

    Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes the dual adaptive filtering by optimal projection (DAFOP) method to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step applies the common spatial pattern (CSP) method to two frequency windows to identify the slowest components, which are considered to be cerebral sources; the two frequency windows are defined by optimizing convolutional filters. The second step uses a regression method to reconstruct the signal independently within various frequency windows. The method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, drawn from 20 clinical recordings of awake and sleeping adults exhibiting pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with only a 6.4% reduction of cerebral signals, even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings.

  8. Goal-based angular adaptivity applied to a wavelet-based discretisation of the neutral particle transport equation

    Energy Technology Data Exchange (ETDEWEB)

    Goffin, Mark A., E-mail: mark.a.goffin@gmail.com [Applied Modelling and Computation Group, Department of Earth Science and Engineering, Imperial College London, London, SW7 2AZ (United Kingdom); Buchan, Andrew G.; Dargaville, Steven; Pain, Christopher C. [Applied Modelling and Computation Group, Department of Earth Science and Engineering, Imperial College London, London, SW7 2AZ (United Kingdom); Smith, Paul N. [ANSWERS Software Service, AMEC, Kimmeridge House, Dorset Green Technology Park, Winfrith Newburgh, Dorchester, Dorset, DT2 8ZB (United Kingdom); Smedley-Stevenson, Richard P. [AWE, Aldermaston, Reading, RG7 4PR (United Kingdom)

    2015-01-15

    A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. -- Highlights: •Wavelet angular discretisation used to solve transport equation. •Adaptive method developed for the wavelet discretisation. •Anisotropic angular resolution demonstrated through the adaptive method. •Adaptive method provides improvements in computational efficiency.

  9. Key techniques and applications of adaptive growth method for stiffener layout design of plates and shells

    Science.gov (United States)

    Ding, Xiaohong; Ji, Xuerong; Ma, Man; Hou, Jianyun

    2013-11-01

    The application of the adaptive growth method is limited because several key techniques during the design process need manual intervention by designers. Key techniques of the method, including ground structure construction and seed selection, are studied so as to improve the effectiveness and applicability of the adaptive growth method in stiffener layout design optimization of plates and shells. Three schemes of ground structures, composed of different shell elements and beam elements, are proposed. It is found that the main stiffener layouts resulting from different ground structures are almost the same, but the ground structure composed of 8-node shell elements and both 3-node and 2-node beam elements results in the clearest stiffener layout, and has good adaptability and low computational cost. An automatic seed selection approach is proposed, based on the selection rules that seeds should be positioned where the structural strain energy is large for the minimum compliance problem, and should satisfy the dispersancy requirement. The adaptive growth method with the suggested key techniques is integrated into an ANSYS-based program, which provides a design tool for the stiffener layout design optimization of plates and shells. Typical design examples are illustrated, including plate and shell structures designed for minimum compliance and maximum buckling stability. In addition, as a practical mechanical structural design example, the stiffener layout of an inlet structure for a large-scale electrostatic precipitator is also demonstrated. The design results show that the adaptive growth method integrated with the suggested key techniques can effectively and flexibly deal with stiffener layout design problems for plates and shells with complex geometrical shapes and loading conditions to achieve various design objectives, thus providing a new solution method for engineering structural topology design optimization.

  10. ADAPTATION OF CRACK GROWTH DETECTION TECHNIQUES TO US MATERIAL TEST REACTORS

    Energy Technology Data Exchange (ETDEWEB)

    A. Joseph Palmer; Sebastien P. Teysseyre; Kurt L. Davis; Gordon Kohse; Yakov Ostrovsky; David M. Carpenter; Joy L. Rempe

    2015-04-01

    A key component in evaluating the ability of Light Water Reactors to operate beyond 60 years is characterizing the degradation of materials exposed to radiation and various water chemistries. Of particular concern is the response of reactor materials to Irradiation Assisted Stress Corrosion Cracking (IASCC). Some test reactors outside the United States, such as the Halden Boiling Water Reactor (HBWR), have developed techniques to measure crack growth propagation during irradiation. The basic approach is to use a custom-designed compact loading mechanism to stress the specimen during irradiation, while the crack in the specimen is monitored in-situ using the Direct Current Potential Drop (DCPD) method. In 2012 the US Department of Energy commissioned the Idaho National Laboratory and the MIT Nuclear Reactor Laboratory (MIT NRL) to take the basic concepts developed at the HBWR and adapt them to a test rig capable of conducting in-pile IASCC tests in US Material Test Reactors. The first two and a half years of the project consisted of designing and testing the loader mechanism, testing individual components of the in-pile rig and electronic support equipment, and autoclave testing of the rig design prior to insertion in the MIT Reactor. The load was applied to the specimen by means of a scissor-like mechanism, actuated by a miniature metal bellows driven by pneumatic pressure and sized to fit within the small in-core irradiation volume. In addition to the loader design, technical challenges included developing robust connections to the specimen for the applied current and voltage measurements, appropriate ceramic insulating materials that can endure the LWR environment, dealing with the high electromagnetic noise environment of a reactor core at full power, and accommodating material property changes in the specimen, due primarily to fast neutron damage, which change the specimen resistance without additional crack growth. The project culminated with an in

  11. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of and interaction between normal and sterile E. saccharina moths in a temporally variable, but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models describing the sterile insect technique have been formulated in the past, few describe the technique for Lepidopteran species with more than one life stage and where F1 sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
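
    As an editor-added toy sketch (far simpler than the multi-stage, partially sterile model of the record), a single-stage difference equation shows the core mechanism: released sterile males dilute fertile matings in proportion N/(N + S); all parameter values below are hypothetical:

```python
# Toy sterile insect technique (SIT) difference-equation sketch. A
# logistic population where, with S sterile males released per
# generation, only a fraction N/(N + S) of matings are fertile.
# Parameters are illustrative, not fitted to E. saccharina.

def simulate(generations, r=3.0, K=1000.0, release=0.0, n0=200.0):
    n = n0
    for _ in range(generations):
        fertile = n / (n + release) if (n + release) > 0 else 0.0
        n = r * n * fertile * max(0.0, 1.0 - n / K)
    return n

no_control = simulate(30)                 # persists near carrying capacity
with_sit = simulate(30, release=2000.0)   # collapses toward extinction
```

    The qualitative behaviour (a release rate above a threshold drives the population to collapse) is the shared feature with the full model; timing, ratios and frequencies require the record's richer formulation.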

  12. Renormalization techniques applied to the study of density of states in disordered systems

    International Nuclear Information System (INIS)

    A general scheme for real space renormalization of formal scattering theory is presented and applied to the calculation of density of states (DOS) in some finite width systems. This technique is extended in a self-consistent way, to the treatment of disordered and partially ordered chains. Numerical results of moments and DOS are presented in comparison with previous calculations. In addition, a self-consistent theory for the magnetic order problem in a Hubbard chain is derived and a parametric transition is observed. Properties of localization of the electronic states in disordered chains are studied through various decimation averaging techniques and using numerical simulations. (author)

  13. Phase-shifting technique applied to circular harmonic-based joint transform correlator

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The phase-shifting technique is applied to the circular harmonic expansion-based joint transform correlator. Computer simulation has shown that the light efficiency and the discrimination capability are greatly enhanced, and the full rotation invariance is preserved after the phase-shifting technique has been used. A rotation-invariant optical pattern recognition with high discrimination capability and high light efficiency is obtained. The influence of the additive noise on the performance of the correlator is also investigated. However, the anti-noise capability of this kind of correlator still needs improving.
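
    As an editor-added numerical sketch of the standard four-step phase-shifting reconstruction underlying the technique (the joint transform correlator optics themselves are not modelled), four frames with pi/2 phase shifts recover the wrapped phase via an arctangent:

```python
import numpy as np

# Four-step phase shifting: record I_k = A + B*cos(phi + k*pi/2) for
# k = 0..3, then recover the wrapped phase from two frame differences.
# A and B are hypothetical bias and modulation values.

phi = np.linspace(-np.pi, np.pi, 256, endpoint=False)   # true phase map
A, B = 0.5, 0.4                                         # bias, modulation
I1, I2, I3, I4 = (A + B * np.cos(phi + k * np.pi / 2) for k in range(4))

# I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi), so:
phi_rec = np.arctan2(I4 - I2, I1 - I3)                  # wrapped estimate
```

    The arctangent cancels both the bias A and the modulation B, which is why phase shifting is robust to nonuniform illumination.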

  14. Prediction of radical scavenging activities of anthocyanins applying adaptive neuro-fuzzy inference system (ANFIS) with quantum chemical descriptors.

    Science.gov (United States)

    Jhin, Changho; Hwang, Keum Taek

    2014-01-01

    Radical scavenging activity of anthocyanins is well known, but only a few studies have been conducted using a quantum chemical approach. The adaptive neuro-fuzzy inference system (ANFIS) is an effective technique for solving problems with uncertainty. The purpose of this study was to construct and evaluate quantitative structure-activity relationship (QSAR) models for predicting radical scavenging activities of anthocyanins with good prediction efficiency. ANFIS-applied QSAR models were developed using quantum chemical descriptors of anthocyanins calculated by the semi-empirical PM6 and PM7 methods. Electron affinity (A) and electronegativity (χ) of the flavylium cation, and ionization potential (I) of the quinoidal base were significantly correlated with radical scavenging activities of anthocyanins. These descriptors were used as independent variables for the QSAR models. ANFIS models with two triangular-shaped input fuzzy functions for each independent variable were constructed and optimized by 100 learning epochs. The constructed models using descriptors calculated by both PM6 and PM7 had good prediction efficiency, with Q-square of 0.82 and 0.86, respectively. PMID:25153627
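
    As an editor-added skeleton of the Takagi-Sugeno inference at the core of ANFIS (forward pass only; no training, and not the record's fitted QSAR model), one normalised descriptor with two triangular memberships, echoing the record's "two triangular-shaped input fuzzy functions", feeds two linear rule consequents; the membership feet and consequent coefficients are invented for illustration:

```python
# ANFIS-style forward pass: fuzzify one input with two triangular
# memberships, fire two first-order Sugeno rules, and return their
# normalised weighted average. All numbers are hypothetical.

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def anfis_predict(x, consequents=((2.0, 0.1), (-1.0, 0.9))):
    """Weighted average of linear rule outputs p*x + q for x in [0, 1]."""
    w_low = tri(x, -1.0, 0.0, 1.0)    # "low descriptor" membership
    w_high = tri(x, 0.0, 1.0, 2.0)    # "high descriptor" membership
    weights = (w_low, w_high)
    total = sum(weights)
    return sum(w * (p * x + q)
               for w, (p, q) in zip(weights, consequents)) / total
```

    In full ANFIS the membership parameters and the (p, q) consequents are tuned over learning epochs; only this fixed forward pass is shown.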

  15. An adaptive SK technique and its application for fault detection of rolling element bearings

    Science.gov (United States)

    Wang, Yanxue; Liang, Ming

    2011-07-01

    In this paper, we propose an adaptive spectral kurtosis (SK) technique for the fault detection of rolling element bearings. The primary contribution is adaptive determination of the bandwidth and center frequency. This is implemented with successive attempts to right-expand a given window along the frequency axis by merging it with its subsequent neighboring windows. Influence of the parameters such as the initial window function, bandwidth and window overlap on the merged windows as well as how to choose those parameters in practical applications are explored. Based on simulated experiments, it can be found that the proposed technique can further enhance the SK-based method as compared to the kurtogram approach. The effectiveness of the proposed method in fault detection of the rolling element bearings is validated using experimental signals.
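
    As an editor-added sketch of the idea behind spectral-kurtosis band selection (fixed bands rather than the record's adaptive window merging), the band whose filtered signal is most impulsive, i.e. has the highest kurtosis, is the one carrying the bearing-fault bursts; the synthetic signal and band edges are assumptions:

```python
import numpy as np

# Spectral-kurtosis band selection sketch: synthesize periodic fault
# impacts ringing a ~200 Hz resonance in noise, band-pass each fixed
# band by FFT masking, and pick the band with the highest kurtosis.

rng = np.random.default_rng(0)
fs, n = 1024, 16384
imp = np.zeros(n)
imp[::2048] = 1.0                                # periodic fault impacts
ring = np.arange(256) / fs                       # short ring-down window
wavelet = np.exp(-300 * ring) * np.sin(2 * np.pi * 200 * ring)
sig = np.convolve(imp, wavelet)[:n] + 0.05 * rng.standard_normal(n)

spec = np.fft.rfft(sig)
freqs = np.fft.rfftfreq(n, 1 / fs)
bands = [(0, 128), (128, 256), (256, 384), (384, 512)]

def band_kurtosis(lo, hi):
    x = np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), spec, 0), n)
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2

best = max(bands, key=lambda b: band_kurtosis(*b))   # most impulsive band
```

    Gaussian-noise bands score near 3; the band containing the 200 Hz resonance scores far higher. The record's contribution is adapting the band edges and centre frequency instead of fixing them as done here.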

  16. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    OpenAIRE

    Zhiwei Ye; Mingwei Wang; Zhengbing Hu; Wei Liu

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and blending of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images to enhance image adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three fa...

  17. Adaptive Fuzzy Output-Feedback Method Applied to Fin Control for Time-Delay Ship Roll Stabilization

    Directory of Open Access Journals (Sweden)

    Rui Bai

    2014-01-01

    The ship roll stabilization by fin control system is considered in this paper. Assuming that the angular velocity in roll cannot be measured, an adaptive fuzzy output-feedback control is investigated. The fuzzy logic system is used to approximate the uncertain term of the controlled system, and a fuzzy state observer is designed to estimate the unmeasured states. By utilizing the fuzzy state observer and combining the adaptive backstepping technique with adaptive fuzzy control design, an observer-based adaptive fuzzy output-feedback control approach is developed. It is proved that the proposed control approach can guarantee that all the signals in the closed-loop system are semiglobally uniformly ultimately bounded (SGUUB) and that the control strategy is effective in decreasing the roll motion. Simulation results are included to illustrate the effectiveness of the proposed approach.

  18. A Novel Approach of Harris Corner Detection of Noisy Images using Adaptive Wavelet Thresholding Technique

    OpenAIRE

    Dey, Nilanjan; Nandi, Pradipti; Barman, Nilanjana

    2012-01-01

    In this paper we propose a method of corner detection for obtaining features required to track and recognize objects within a noisy image. Corner detection of noisy images is a challenging task in image processing, as natural images often get corrupted by noise during acquisition and transmission. Since corner detection of such noisy images does not provide the desired results, de-noising is required first. An adaptive wavelet thresholding approach is applied for this purpose.
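
    As an editor-added sketch of wavelet soft-thresholding denoising (a single-level Haar transform with the universal threshold; the record's adaptive multi-level scheme and the Harris corner step are not reproduced), detail coefficients below the threshold are shrunk toward zero:

```python
import numpy as np

# Single-level Haar soft-thresholding on a noisy 1-D signal. The
# smooth signal lives almost entirely in the approximation band, so
# shrinking the detail band removes mostly noise.

rng = np.random.default_rng(1)
n = 1024
clean = np.sin(2 * np.pi * 4 * np.arange(n) / n)    # smooth test signal
sigma = 0.5
noisy = clean + sigma * rng.standard_normal(n)

approx = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)   # Haar averages
detail = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)   # Haar differences
thresh = sigma * np.sqrt(2 * np.log(n))             # universal threshold
detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)

denoised = np.empty(n)
denoised[0::2] = (approx + detail) / np.sqrt(2)     # inverse Haar step
denoised[1::2] = (approx - detail) / np.sqrt(2)
```

    In practice sigma is estimated from the finest detail coefficients and the transform is applied over several levels and in 2-D; this one-level sketch already roughly halves the mean squared error on a smooth signal.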

  19. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    Science.gov (United States)

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blending of cuckoo search and particle swarm optimization (CS-PSO) to enhance low contrast images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors, which are threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928

  20. Sustainable Modular Adaptive Redundancy Technique Emphasizing Partial Reconfiguration for Reduced Power Consumption

    Directory of Open Access Journals (Sweden)

    R. Al-Haddad

    2011-01-01

    As reconfigurable devices' capacities and the complexity of applications that use them increase, the need for self-reliance of deployed systems becomes increasingly prominent. Organic computing paradigms have been proposed for fault-tolerant systems because they promote behaviors that allow complex digital systems to adapt and survive in demanding environments. In this paper, we develop a sustainable modular adaptive redundancy technique (SMART) composed of a two-layered organic system. The hardware layer is implemented on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) to provide self-repair using a novel approach called the reconfigurable adaptive redundancy system (RARS). The software layer supervises the organic activities on the FPGA and extends the self-healing capabilities through application-independent, intrinsic, and evolutionary repair techniques that leverage the benefits of dynamic partial reconfiguration (PR). SMART was evaluated using a Sobel edge-detection application and was shown to tolerate stressful sequences of injected transient and permanent faults while reducing dynamic power consumption by 30% compared to conventional triple modular redundancy (TMR) techniques, with nominal impact on fault-tolerance capabilities. Moreover, PR is employed to keep the system online while under repair and also to reduce repair time. Experiments have shown a 27.48% decrease in repair time when PR is employed compared to the full-bitstream configuration case.
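
    As an editor-added reference point, the TMR baseline that SMART is compared against reduces to a 2-of-3 majority voter; this pure-software sketch shows the voting logic only, not the record's FPGA fabric or partial reconfiguration:

```python
# Minimal bitwise majority voter as used in triple modular redundancy
# (TMR): three replicas compute the same value, and any single faulty
# replica is out-voted. Software sketch of the hardware voter.

def tmr_vote(a, b, c):
    """Bitwise 2-of-3 majority of three replica outputs."""
    return (a & b) | (b & c) | (a & c)

# One corrupted replica (0b0110) is out-voted by the two good ones:
ok = tmr_vote(0b1010, 0b1010, 0b0110)
```

    TMR's cost, which SMART aims to reduce, is that all three replicas run continuously, roughly tripling the dynamic power of the protected module.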

  1. Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement.

    Science.gov (United States)

    Lin, Hui; Gao, Jian; Mei, Qing; He, Yunbo; Liu, Junxiu; Wang, Xingjin

    2016-04-01

    It is a challenge for any optical method to measure objects with a large range of reflectivity variation across the surface. Image saturation results in incorrect intensities in captured fringe pattern images, leading to phase and measurement errors. This paper presents a new adaptive digital fringe projection technique which avoids image saturation and has a high signal-to-noise ratio (SNR) in the three-dimensional (3-D) shape measurement of objects that have a large range of reflectivity variation across the surface. Compared to previous high dynamic range 3-D scan methods using many exposures and fringe pattern projections, which consume a lot of time, the proposed technique uses only two preliminary steps of fringe pattern projection and image capture to generate the adapted fringe patterns, by adaptively adjusting the pixel-wise intensity of the projected fringe patterns based on the saturated pixels in the captured images of the surface being measured. For bright regions due to high surface reflectivity and high illumination by ambient light and surface interreflections, the projected intensity is reduced just enough to avoid image saturation. Simultaneously, the maximum intensity of 255 is used for dark regions with low surface reflectivity to maintain a high SNR. Our experiments demonstrate that the proposed technique can achieve higher 3-D measurement accuracy across a surface with a large range of reflectivity variation. PMID:27137056
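
    As an editor-added simplification of the adaptive-intensity idea (not the record's exact two-step procedure), a low-intensity preliminary projection estimates a per-pixel gain, and the adapted pattern is scaled so the brightest pixels stay just below saturation; the linear camera model, the gains, and the 10% margin are all assumptions:

```python
import numpy as np

# Adaptive projected-intensity sketch: estimate per-pixel gain from an
# unsaturated low-intensity capture, then scale the projected pattern
# per pixel so no captured value saturates. Four pixels stand in for
# a full image; the gain values are hypothetical.

SAT = 255.0
gain_true = np.array([0.2, 0.6, 1.5, 3.0])       # hidden pixel gains

uniform = np.full(4, SAT)                        # naive full-power pattern
captured = np.minimum(uniform * gain_true, SAT)  # bright pixels saturate

low = 60.0                                       # preliminary projection
cap_low = np.minimum(low * gain_true, SAT)       # assumed unsaturated
gain_est = cap_low / low                         # per-pixel gain estimate

adapted = np.minimum(SAT, 0.9 * SAT / gain_est)  # adapted pattern
captured_adapted = np.minimum(adapted * gain_true, SAT)
```

    Dark pixels keep the full 255 projection for SNR while bright pixels are dimmed just enough, which is the trade-off the record describes.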

  2. Comparison between different techniques applied to quartz CPO determination in granitoid mylonites

    Science.gov (United States)

    Fazio, Eugenio; Punturo, Rosalda; Cirrincione, Rosolino; Kern, Hartmut; Wenk, Hans-Rudolph; Pezzino, Antonino; Goswami, Shalini; Mamtani, Manish

    2016-04-01

    Since the second half of the last century, several techniques have been adopted to resolve the crystallographic preferred orientation (CPO) of the major minerals constituting crustal and mantle rocks. To this aim, many efforts have been made to increase the accuracy of such analytical devices as well as to progressively reduce the time needed to perform microstructural analysis. It is worth noting that many of these microstructural studies deal with quartz CPO because of the wide occurrence of this mineral phase in crustal rocks as well as its quite simple chemical composition. In the present work, four different techniques were applied to define CPOs of dynamically recrystallized quartz domains from naturally deformed rocks collected from a ductile crustal-scale shear zone, in order to compare their advantages and limitations. The selected Alpine shear zone is located in the Aspromonte Massif (Calabrian Peloritani Orogen, southern Italy) and comprises granitoid lithotypes. The adopted methods span from the "classical" universal stage (US) to the image analysis technique (CIP), electron back-scattered diffraction (EBSD), and time-of-flight neutron diffraction (TOF). When compared, bulk texture pole figures obtained by means of these different techniques show a good correlation. Advances in analytical techniques used for microstructural investigations are outlined by discussing the results of quartz CPO presented in this study.

  3. Wire-mesh and ultrasound techniques applied for the characterization of gas-liquid slug flow

    Energy Technology Data Exchange (ETDEWEB)

    Ofuchi, Cesar Y.; Sieczkowski, Wytila Chagas; Neves Junior, Flavio; Arruda, Lucia V.R.; Morales, Rigoberto E.M.; Amaral, Carlos E.F.; Silva, Marco J. da [Federal University of Technology of Parana, Curitiba, PR (Brazil)], e-mails: ofuchi@utfpr.edu.br, wytila@utfpr.edu.br, neves@utfpr.edu.br, lvrarruda@utfpr.edu.br, rmorales@utfpr.edu.br, camaral@utfpr.edu.br, mdasilva@utfpr.edu.br

    2010-07-01

    Gas-liquid two-phase flows are found in a broad range of industrial applications, such as chemical, petrochemical and nuclear industries and quite often determine the efficiency and safety of process and plants. Several experimental techniques have been proposed and applied to measure and quantify two-phase flows so far. In this experimental study the wire-mesh sensor and an ultrasound technique are used and comparatively evaluated to study two-phase slug flows in horizontal pipes. The wire-mesh is an imaging technique and thus appropriated for scientific studies while ultrasound-based technique is robust and non-intrusive and hence well suited for industrial applications. Based on the measured raw data it is possible to extract some specific slug flow parameters of interest such as mean void fraction and characteristic frequency. The experiments were performed in the Thermal Sciences Laboratory (LACIT) at UTFPR, Brazil, in which an experimental two-phase flow loop is available. The experimental flow loop comprises a horizontal acrylic pipe of 26 mm diameter and 9 m length. Water and air were used to produce the two phase flow under controlled conditions. The results show good agreement between the techniques. (author)
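
    As an editor-added sketch of the two slug-flow parameters the record extracts (mean void fraction and characteristic frequency), a synthetic void-fraction time series stands in for wire-mesh data, with the frequency taken from the FFT peak:

```python
import numpy as np

# Extract mean void fraction and characteristic slug frequency from a
# synthetic void-fraction time series (a 1.5 Hz oscillation standing in
# for real wire-mesh sensor data).

fs = 100.0                                        # sampling rate, Hz
t = np.arange(0, 20, 1 / fs)                      # 20 s record
alpha = 0.4 + 0.3 * np.sin(2 * np.pi * 1.5 * t)   # void fraction signal

mean_void = alpha.mean()                          # mean void fraction
spec = np.abs(np.fft.rfft(alpha - mean_void))     # remove DC before FFT
freqs = np.fft.rfftfreq(alpha.size, 1 / fs)
slug_freq = freqs[np.argmax(spec)]                # characteristic frequency
```

    Subtracting the mean before the FFT keeps the DC bin from masking the slugging peak; real signals would show a broader peak around the characteristic frequency rather than a single bin.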

  4. GPU peer-to-peer techniques applied to a cluster interconnect

    CERN Document Server

    Ammendola, Roberto; Biagioni, Andrea; Bisson, Mauro; Fatica, Massimiliano; Frezza, Ottorino; Cicero, Francesca Lo; Lonardo, Alessandro; Mastrostefano, Enrico; Paolucci, Pier Stanislao; Rossetti, Davide; Simula, Francesco; Tosoratto, Laura; Vicini, Piero

    2013-01-01

    Modern GPUs support special protocols to exchange data directly across the PCI Express bus. While these protocols could be used to reduce GPU data transmission times, essentially by avoiding staging to host memory, they require specific hardware features which are not available on current-generation network adapters. In this paper we describe the architectural modifications required to implement peer-to-peer access to NVIDIA Fermi- and Kepler-class GPUs on an FPGA-based cluster interconnect. We also discuss the current software implementation, which integrates this feature by minimally extending the RDMA programming model, as well as some issues raised while employing it in a higher-level API like MPI. Finally, we study the current limits of the technique by analyzing the performance improvements on low-level benchmarks and on two GPU-accelerated applications, showing when and how they benefit from the GPU peer-to-peer method.

  5. Pulsed laser deposition: the road to hybrid nanocomposites coatings and novel pulsed laser adaptive technique.

    Science.gov (United States)

    Serbezov, Valery

    2013-01-01

    The applications of Pulsed Laser Deposition (PLD) for producing nanoparticles, nanostructures and nanocomposite coatings based on recently developed laser ablation techniques, and the convergence of these techniques, are reviewed. The problems of in situ synthesis of hybrid inorganic-organic nanocomposite coatings by these techniques are discussed. A novel modification of PLD called the Pulsed Laser Adaptive Deposition (PLAD) technique is presented. The in situ synthesized inorganic/organic nanocomposite coatings from magnesium (Mg) alloy/Rhodamine B and Mg alloy/Desoximetasone produced by PLAD are described. The trends, applications and future development of the discussed patented methods based on laser ablation technologies for producing hybrid nanocomposite coatings are also reviewed. PMID:22747717

  6. Application of adaptive and neural network computational techniques to Traffic Volume and Classification Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Fisher, H.N.; Jones, R.D.; Bisset, K.R.; Lee, L.A.

    1993-09-01

    We are developing a Traffic Volume and Classification Monitoring (TVCM) system based on adaptive and neural network computational techniques. The value of neural networks in this application lies in their ability to learn from data and to form a mapping of arbitrary topology. The piezoelectric strip and magnetic loop sensors typically used for TVCM provide signals that are complicated and variable, and that correspond in indirect ways with the desired FHWA 13-class classification system. Further, the wide variety of vehicle configurations adds to the complexity of the classification task. Our goal is to provide a TVCM system featuring high accuracy, adaptability to wide sensor and environmental variations, and continuous fault detection. We have instrumented an experimental TVCM site, developed PC-based on-line data acquisition software, collected a large database of vehicle signals together with accurate ground-truth determination, and analyzed the data off-line with a neural net classification system that can distinguish between class 2 (automobiles) and class 3 (utility vehicles) with better than 90% accuracy. The neural network used, called the Connectionist Hyperprism Classification (CHC) network, features simple basis functions; rapid, linear training algorithms for basis function amplitudes and widths; and basis function elimination that enhances network speed and accuracy. Work is in progress to extend the system to other classes, to quantify the system's adaptability, and to develop automatic fault detection techniques.
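A minimal sketch in the spirit of the basis-function classifier described above: Gaussian basis functions with fixed centers and widths, so that training the output amplitudes reduces to a linear least-squares problem. The two-dimensional "sensor features" are synthetic stand-ins, not the piezo/loop signals from the paper, and this is not the CHC network itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D features for two vehicle classes (synthetic stand-ins)
X2 = rng.normal([0.0, 0.0], 0.3, size=(100, 2))   # class 2 (automobiles)
X3 = rng.normal([1.5, 1.5], 0.3, size=(100, 2))   # class 3 (utility vehicles)
X = np.vstack([X2, X3])
y = np.array([0] * 100 + [1] * 100)

# Fixed Gaussian basis functions; only the output amplitudes are trained,
# which makes training a fast, linear least-squares problem
centers = X[rng.choice(len(X), 20, replace=False)]
width = 0.5

def design_matrix(X):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

amps, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
pred = (design_matrix(X) @ amps > 0.5).astype(int)
accuracy = (pred == y).mean()
```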

  7. Applied methods and techniques for mechatronic systems modelling, identification and control

    CERN Document Server

    Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya

    2014-01-01

    Applied Methods and Techniques for Mechatronic Systems brings together the relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. The clear structure of each paper, background - motivation - quantitative development (equations) - case studies/illustration/tutorial (curve, table, etc.) is also helpful. It is mainly aimed at graduate students, professors and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...

  8. Software engineering techniques applied to agricultural systems an object-oriented and UML approach

    CERN Document Server

    Papajorgji, Petraq J

    2014-01-01

    Software Engineering Techniques Applied to Agricultural Systems presents cutting-edge software engineering techniques for designing and implementing better agricultural software systems based on the object-oriented paradigm and the Unified Modeling Language (UML). The focus is on the presentation of rigorous step-by-step approaches for modeling flexible agricultural and environmental systems, starting with a conceptual diagram representing elements of the system and their relationships. Furthermore, diagrams such as sequential and collaboration diagrams are used to explain the dynamic and static aspects of the software system. This second edition includes: a new chapter on Object Constraint Language (OCL), a new section dedicated to the Model-View-Controller (MVC) design pattern, new chapters presenting details of two MDA-based tools – the Virtual Enterprise and Olivia Nova, and a new chapter with exercises on conceptual modeling. It may be highly useful to undergraduate and graduate students as t...

  9. Wavelets, Curvelets and Multiresolution Analysis Techniques Applied to Implosion Symmetry Characterization of ICF Targets

    CERN Document Server

    Afeyan, Bedros; Starck, Jean Luc; Cuneo, Michael

    2012-01-01

    We introduce wavelets, curvelets and multiresolution analysis techniques to assess the symmetry of X-ray driven imploding shells in ICF targets. After denoising images produced by X-ray backlighting, we determine the Shell Thickness Averaged Radius (STAR) of maximum density, r*(N, θ), where N is the percentage of the shell thickness over which to average. The non-uniformities of r*(N, θ) are quantified by a Legendre polynomial decomposition in angle, θ. Undecimated wavelet decompositions outperform decimated ones in denoising, and both are surpassed by the curvelet transform. In each case, hard thresholding based on noise modeling is used. We have also applied combined wavelet and curvelet filter techniques with variational minimization as a way to select the significant coefficients. Gains are minimal over curvelets alone in the images we have analyzed.
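Hard thresholding of wavelet detail coefficients, as used in the denoising step above, can be illustrated with a one-level Haar transform (a much simpler wavelet than the undecimated decompositions and curvelets in the paper); the signal and noise level are synthetic:

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet hard-threshold denoising (even-length signal)."""
    s = signal.reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    detail[np.abs(detail) < threshold] = 0.0   # hard thresholding
    out = np.empty_like(signal, dtype=float)
    out[0::2] = (approx + detail) / np.sqrt(2)  # inverse Haar transform
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 2 * np.pi, 256))
noisy = clean + rng.normal(0, 0.1, 256)
# Universal threshold sigma * sqrt(2 ln N), with sigma known here
thr = 0.1 * np.sqrt(2 * np.log(256))
denoised = haar_denoise(noisy, thr)
```

Hard thresholding zeroes small coefficients outright (rather than shrinking them), which is what the noise-model-based thresholding above refers to.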

  10. Guidelines for depth data collection in rivers when applying interpolation techniques (kriging) for river restoration

    Directory of Open Access Journals (Sweden)

    M. Rivas-Casado

    2007-05-01

    Full Text Available River restoration appraisal requires the implementation of monitoring programmes that assess the river site before and after the restoration project. However, little work has yet been done to design effective and efficient sampling strategies. Three main variables need to be considered when designing monitoring programmes: space, time and scale. The aim of this paper is to describe the methodology applied to analyse the variation of depth in space, scale and time so that more comprehensive monitoring programmes can be developed. Geostatistical techniques were applied to study the spatial dimension (sampling strategy and density), spectral analysis was used to study the scale at which depth shows cyclic patterns, whilst descriptive statistics were used to assess the temporal variation. A brief set of guidelines is summarised in the conclusion.
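The geostatistical analysis mentioned above starts from an empirical semivariogram; here is a minimal sketch for hypothetical depth samples along a 1-D river centreline (synthetic data, not the study's measurements):

```python
import numpy as np

def empirical_variogram(coords, values, bins):
    """Empirical semivariogram: gamma(h) = 0.5 * mean[(z_i - z_j)^2] per lag bin."""
    dist = np.abs(coords[:, None] - coords[None, :])
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    gammas = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (dist > lo) & (dist <= hi)
        gammas.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gammas)

# Hypothetical depth samples along a 100 m reach, sampled every 2 m
x = np.linspace(0, 100, 51)
depth = 1.0 + 0.5 * np.sin(x / 10.0)   # smooth riffle-pool-like variation
gamma = empirical_variogram(x, depth, np.array([0.0, 5.0, 10.0, 20.0]))
```

A variogram model fitted to these points would then drive the kriging weights; for a spatially correlated variable the semivariance grows with lag distance, as it does here.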

  11. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Full Text Available The popularity of cloud computing is increasing steadily. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. From the findings of the research, it was observed that security in the cloud can be enhanced with the single cache system. In future work, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.

  12. Applying traditional signal processing techniques to social media exploitation for situational understanding

    Science.gov (United States)

    Abdelzaher, Tarek; Roy, Heather; Wang, Shiguang; Giridhar, Prasanna; Al Amin, Md. Tanvir; Bowman, Elizabeth K.; Kolodny, Michael A.

    2016-05-01

    Signal processing techniques such as filtering, detection, estimation and frequency domain analysis have long been applied to extract information from noisy sensor data. This paper describes the exploitation of these signal processing techniques to extract information from social networks, such as Twitter and Instagram. Specifically, we view social networks as noisy sensors that report events in the physical world. We then present a data processing stack for detection, localization, tracking, and veracity analysis of reported events using social network data. We show using a controlled experiment that the behavior of social sources as information relays varies dramatically depending on context. In benign contexts, there is general agreement on events, whereas in conflict scenarios, a significant amount of collective filtering is introduced by conflicted groups, creating a large data distortion. We describe signal processing techniques that mitigate such distortion, resulting in meaningful approximations of actual ground truth, given noisy reported observations. Finally, we briefly present an implementation of the aforementioned social network data processing stack in a sensor network analysis toolkit, called Apollo. Experiences with Apollo show that our techniques are successful at identifying and tracking credible events in the physical world.
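A toy version of the detection step described above, treating a social stream as a noisy count series and flagging bursts against a sliding baseline (synthetic Poisson data; the Apollo toolkit's actual pipeline is far richer and also handles localization, tracking and veracity):

```python
import numpy as np

def detect_bursts(counts, window, z_thresh):
    """Flag time bins whose count exceeds a sliding baseline by z_thresh sigmas."""
    flags = np.zeros(len(counts), dtype=bool)
    for i in range(window, len(counts)):
        base = counts[i - window:i]
        mu, sigma = base.mean(), base.std() + 1e-9
        flags[i] = (counts[i] - mu) / sigma > z_thresh
    return flags

rng = np.random.default_rng(2)
counts = rng.poisson(10, 200).astype(float)   # background report rate
counts[150] += 60                             # injected "event": a report burst
flags = detect_bursts(counts, window=30, z_thresh=5.0)
```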

  13. Grid-based Moment Tensor Inversion Technique Apply for Earthquakes Offshore of Northeast Taiwan

    Science.gov (United States)

    Cheng, H.; Lee, S.; Ma, K.

    2010-12-01

    We use a grid-based moment tensor inversion technique and broadband continuous recordings to monitor, in real time, earthquakes offshore of northeast Taiwan. The moment tensor inversion technique and a grid search scheme are applied to obtain source parameters, including the hypocenter, moment magnitude, and focal mechanism. In Taiwan, routine moment tensor solutions are reported by the CWB (Central Weather Bureau) and BATS (Broadband Array in Taiwan for Seismology), both of which require some lag time for information on event time and location before performing CMT (Centroid Moment Tensor) analysis. By using the grid-based moment tensor inversion technique, the event location and focal mechanism can be obtained simultaneously within about two minutes of the occurrence of the earthquake. The inversion procedure is based on a 1-D Green’s functions database calculated by the frequency-wavenumber (fk) method. The region offshore of northeast Taiwan was taken as our first test area, covering 121.5E to 123E, 23.5N to 25N, down to a depth of 136 km. A 3-D grid system was set up in this study area with an average grid size of 10 x 10 x 10 km3. We compare our results with past earthquakes from 2008 to 2010 that had been analyzed by BATS CMT. We also compare the event times detected by GridMT with the CWB earthquake reports. The results indicate that the grid-based moment tensor inversion system is efficient and practical for real-time monitoring of local seismic activity. Our long-term goal is to use the GridMT technique with fully 3-D Green’s functions for the whole of Taiwan.
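The grid-search idea above can be sketched with a toy forward model (a hypothetical 1/r amplitude decay, not the paper's fk-based Green's functions): for each trial grid node the best-fitting source strength is a linear least-squares solution, so only the location requires brute-force scanning:

```python
import numpy as np

# Hypothetical station layout; predicted amplitude falls off as 1/r
stations = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])

def predict(source, moment):
    r = np.linalg.norm(stations - source, axis=1) + 1.0
    return moment / r

# "Observed" data from a true source at (20, 30) with unit moment
observed = predict(np.array([20.0, 30.0]), 1.0)

# Grid search over candidate locations; at each node the optimal moment
# minimizing the misfit has a closed-form (least-squares) solution
best = (None, np.inf)
for gx in np.arange(0.0, 51.0, 5.0):
    for gy in np.arange(0.0, 51.0, 5.0):
        g = predict(np.array([gx, gy]), 1.0)
        m = float(g @ observed / (g @ g))            # optimal moment at this node
        misfit = float(np.sum((observed - m * g) ** 2))
        if misfit < best[1]:
            best = ((gx, gy, m), misfit)

(loc_x, loc_y, moment_est), _ = best
```

Because every node's prediction can be pre-calculated (as with the paper's Green's function database), the scan itself is cheap enough for real-time use.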

  14. ADAPTING E-COURSES USING DATA MINING TECHNIQUES - PDCA APPROACH AND QUALITY SPIRAL

    Directory of Open Access Journals (Sweden)

    Marija Blagojevic

    2013-09-01

    Full Text Available This paper presents an approach to adapting e-courses based on an original PDCA (Plan, Do, Check, Act) platform and the quality spiral. An algorithm for the adaptation of e-courses was proposed and implemented in the Moodle Learning Management System at the Faculty of Technical Sciences, Cacak. The approach is primarily based on improving LMS (Learning Management Systems) or e-learning systems by modifying the electronic structure of the courses according to predicted user behaviour patterns. The prediction of user behaviour patterns was done using data mining techniques. Future research will focus on modelling the continuous improvement of the original system based on the evaluation results collected at the end of each PDCA cycle. Additionally, future work will aim at evaluating the effects of the system based on the achievements and positive feedback of the users.

  15. A spectral identification technique for adaptive attitude control and pointing of the Space Telescope

    Science.gov (United States)

    Teuber, D. L.

    1976-01-01

    The Space Telescope is a 2.4 m class aperture optical telescope having near-diffraction-limited performance. It will be placed into earth orbit by 1980 via the Space Shuttle. The problem considered is how to achieve negligible degradation of the astronomy imaging capability (to 0.005 arc second) due to smearing by pointing motions during observations. Initially, pointing instability sources were identified and a linear stability analysis was used to assess the magnitude of elastic body modes and to design the control system compensation regions necessary for subsequent adaptive control. A spectral identification technique for this adaptive attitude control and pointing has been investigated that will alleviate requirements for comprehensive dynamic ground testing. Typical all-digital simulation results describing motions of the telescope line of sight are presented.

  16. The block adaptive multigrid method applied to the solution of the Euler equations

    Science.gov (United States)

    Pantelelis, Nikos

    1993-01-01

    In the present study, a scheme capable of solving very fast and robust complex nonlinear systems of equations is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (18-fold acceleration of the solution) using one fourth of the volumes of a global grid with the same solution accuracy for two test cases.

  17. The adaptation of GDL motion recognition system to sport and rehabilitation techniques analysis.

    Science.gov (United States)

    Hachaj, Tomasz; Ogiela, Marek R

    2016-06-01

    The main novelty of this paper is the adaptation of the Gesture Description Language (GDL) methodology to sport and rehabilitation data analysis and classification. We show that the Lua language can be successfully used to adapt the GDL classifier to those tasks. The newly applied scripting language allows easy extension and integration of the classifier with other software technologies and applications. The obtained execution speed allows the methodology to be used in real-time motion capture data processing, where the capture frequency ranges from 100 Hz to 500 Hz depending on the number of features or classes to be calculated and recognized. The proposed methodology can therefore be used with high-end motion capture systems. We anticipate that this novel, efficient and effective method will greatly help both sports trainers and physiotherapists in their practice. The proposed approach can be directly applied to kinematic analysis of motion capture data (evaluation of motion without regard to the forces that cause that motion). The ability to apply pattern recognition methods to GDL descriptions can be utilized in virtual reality environments and used for sport training or rehabilitation treatment. PMID:27106581

  18. The Service Laboratory - A GTZ-BgVV project: Health protection through adapted veterinary diagnostic techniques

    International Nuclear Information System (INIS)

    The customary diagnostic methods of today have been developed in industrialized countries. High personnel costs there resulted in a trend towards automation and prefabricated test kits. Consequently, these techniques are not sufficiently adapted to local conditions in developing countries, where, as a rule, skilled and ancillary staff is available whereas foreign currency reserves for purchasing laboratory equipment and material from abroad are rather limited. Furthermore, the training of personnel from developing countries has usually been oriented towards the non-transferable standards and methods of industrialized countries. This leads to a long-term dependence of the diagnostic services on external funding. A diagnostic technology adapted to the specific local conditions of developing countries is needed to overcome this situation. The project activities concentrate on serological diagnostic work. Basic knowledge of the common diagnostic techniques and their set-up for specific diseases, methods for the production of related reagents (antigens, antibodies, conjugates, complement, etc.) and cleaning procedures for the reuse of 'one-way' plastic material is spread through training programmes, specific publications and information leaflets. For two of the more complex test procedures, the most frequently quoted prescribed test for international trade (CFT) and the increasingly important ELISA (OIE, Manual of Standards for Diagnostic Techniques, Paris, 1992), we have calculated the cost reduction potential of adaptation through self-production of reagents and reuse of plastic materials. Material costs per microtitre test plate for the diagnosis of brucellosis can be reduced from US $3.79 to 0.82 for the CFT and from US $3.88 to 1.13 for the ELISA. In comparison, commercial ELISA kits cost about US $80 to 90 per plate (e.g. Bommeli, IDEXX, Boehringer)

  19. Action research for climate change adaptation : Developing and applying knowledge for governance

    NARCIS (Netherlands)

    Buuren, van A.; Eshuis, J.; Vliet, van M.

    2015-01-01

    Governments all over the world are struggling with the question of how to adapt to climate change. They need information not only about the issue and its possible consequences, but also about feasible governance strategies and instruments to combat it. At the same time, scientists from different soc

  20. Adaptive bone-remodeling theory applied to prosthetic-design analysis

    NARCIS (Netherlands)

    R. Huiskes (Rik); H.H. Weinans (Harrie); H.J. Grootenboer; M. Dalstra; B. Fudala; T.J. Slooff

    1987-01-01

    textabstractThe subject of this article is the development and application of computer-simulation methods to predict stress-related adaptive bone remodeling, in accordance with 'Wolff's Law'. These models are based on the Finite Element Method (FEM) in combination with numerical formulations of adap

  1. Adaptive MIMO Fuzzy Compensate Fuzzy Sliding Mode Algorithm: Applied to Second Order Nonlinear System

    Directory of Open Access Journals (Sweden)

    Farzin Piltan, N. Sulaiman, Payman Ferdosali, Mehdi Rashidi, Zahra Tajpeikar

    2011-12-01

    Full Text Available This research is focused on proposed adaptive fuzzy sliding mode algorithms with the adaptation laws derived in the Lyapunov sense. The stability of the closed-loop system is proved mathematically based on the Lyapunov method. The adaptive MIMO fuzzy compensate fuzzy sliding mode method designs a MIMO fuzzy system to compensate for the model uncertainties of the system, and chattering is also reduced by a linear saturation method. Since there is no tuning method to adjust the premise part of the fuzzy rules, we present a scheme to tune the consequence part of the fuzzy rules online. Classical sliding mode control is robust to model uncertainties and external disturbances. A sliding mode method with a switching control law guarantees the stability of the certain and/or uncertain system, but the addition of the switching control law introduces chattering into the system. One way to reduce or eliminate chattering is to insert a saturation function inside a boundary layer around the sliding surface. Classical sliding mode control has difficulty in handling unstructured model uncertainties. One can overcome this problem by combining a sliding mode controller with artificial intelligence (e.g., fuzzy logic). To approximate a time-varying nonlinear dynamic system, a fuzzy system requires a large fuzzy rule base. This large number of fuzzy rules causes a high computational load. The addition of an adaptive law to a fuzzy sliding mode controller, to tune the parameters of the fuzzy rules in use online, ensures a moderate computational load. The adaptive laws in this algorithm are designed based on the Lyapunov stability theorem. Asymptotic stability of the closed-loop system is also proved in the sense of Lyapunov.
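The boundary-layer idea described above can be illustrated on a toy second-order system: a saturation function replaces the discontinuous sign function inside the layer, which suppresses chattering. The gains, surface and disturbance below are illustrative choices, not the paper's adaptive fuzzy design:

```python
import numpy as np

def sat(s, phi):
    """Boundary-layer saturation: linear inside |s| <= phi, sign(s) outside."""
    return np.clip(s / phi, -1.0, 1.0)

# Toy second-order plant x'' = u + d(t), sliding surface s = x_dot + lam * x
lam, k, phi, dt = 2.0, 5.0, 0.1, 0.001
x, x_dot, t = 1.0, 0.0, 0.0              # initial tracking error; target x = 0
for _ in range(int(5.0 / dt)):
    s = x_dot + lam * x
    u = -lam * x_dot - k * sat(s, phi)   # equivalent control + smoothed switching
    d = 0.5 * np.sin(2 * np.pi * t)      # bounded disturbance, |d| < k
    x_dot += (u + d) * dt
    x += x_dot * dt
    t += dt
# The state is driven into the boundary layer |s| <= phi instead of chattering
```

On the surface dynamics s' = -k·sat(s, phi) + d, choosing k larger than the disturbance bound guarantees reaching, while the layer keeps the control continuous.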

  2. A comparison of software- and hardware-gating techniques applied to near-field antenna measurements

    Directory of Open Access Journals (Sweden)

    M. M. Leibfritz

    2007-06-01

    Full Text Available It is well known that antenna measurements are error-prone with respect to reflections within an antenna measurement test facility. Their influence on near-field (NF) measurements with subsequent NF to far-field (FF) transformation can be significantly reduced by applying soft- or hard-gating techniques. Hard-gating systems are often used in compact range facilities employing fast PIN-diode switches (Hartmann, 2000), whereas soft-gating systems utilize a network analyzer to gather frequency samples and eliminate objectionable distortions in the time domain by means of Fourier-transformation techniques. Near-field antenna measurements are known to be sensitive to various errors in the measurement setup, such as the accuracy of the positioner, the measurement instruments, or the quality of the anechoic chamber itself. Two different approaches employing soft- and hard-gating techniques are discussed with respect to practical applications. Signal generation for the antenna under test (AUT) is implemented using a newly developed hard-gating system based on digital signal synthesis, allowing gate widths of 250 ps to 10 ns. Measurement results obtained from a Yagi-Uda antenna under test and a dual-polarized open-ended waveguide used as probe antenna are presented for the GSM 1800 frequency range.
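A minimal sketch of software gating as described above: transform the measured frequency samples to the time domain, window out late-arriving reflections, and transform back. The channel, gate times and sample rate are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def soft_gate(freq_response, t_start, t_stop, fs):
    """Software time gating: to time domain, rectangular window, and back."""
    h = np.fft.ifft(freq_response)            # impulse-response estimate
    t = np.arange(len(h)) / fs
    gate = ((t >= t_start) & (t <= t_stop)).astype(float)
    return np.fft.fft(h * gate)

# Synthetic channel: direct path at 3 ns plus a chamber reflection at 20 ns
fs = 10e9                                     # 10 GS/s equivalent sampling
n = 1024
h = np.zeros(n)
h[round(3e-9 * fs)] = 1.0                     # direct signal
h[round(20e-9 * fs)] = 0.4                    # unwanted reflection
H = np.fft.fft(h)

H_gated = soft_gate(H, 0.0, 10e-9, fs)        # keep only the first 10 ns
h_gated = np.fft.ifft(H_gated).real
```

In practice a smooth (e.g. Kaiser) gate is preferred over the rectangular one used here, to limit ringing in the gated frequency response.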

  3. Downscaling Statistical Model Techniques for Climate Change Analysis Applied to the Amazon Region

    Directory of Open Access Journals (Sweden)

    David Mendes

    2014-01-01

    Full Text Available The Amazon is an area covered predominantly by dense tropical rainforest with relatively small inclusions of several other types of vegetation. In recent decades, scientific research has suggested a strong link between the health of the Amazon and the integrity of the global climate: tropical forests and woodlands (e.g., savannas) exchange vast amounts of water and energy with the atmosphere and are thought to be important in controlling local and regional climates. Considering the importance of the Amazon biome to global climate change impacts, the role of protected areas in the conservation of biodiversity, and the state of the art of ANN-based downscaling techniques, this work calibrates and runs a downscaling model based on the Artificial Neural Network (ANN), applied to the Amazon region in order to obtain regional and local predicted climate data (e.g., precipitation). The ANN results show good similarity with observations in the cities of Belém and Manaus, with correlations of approximately 88.9% and 91.3%, respectively, and a good fit in the spatial distribution, especially in the correction process.
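A highly simplified stand-in for the statistical downscaling step above, using linear regression in place of the paper's ANN: coarse-model predictors are mapped to a local station variable and skill is measured by correlation. All data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical predictors: coarse-model fields (e.g. humidity, temperature,
# pressure) at a grid point; target: an observed local station variable
n = 365
predictors = rng.normal(size=(n, 3))
true_w = np.array([2.0, -1.0, 0.5])
station_obs = predictors @ true_w + rng.normal(0, 0.3, n)

# Simplest statistical downscaling model: a linear map fitted by least squares
# (the paper trains an ANN for the same predictors-to-local-variable mapping)
w, *_ = np.linalg.lstsq(predictors, station_obs, rcond=None)
predicted = predictors @ w
corr = float(np.corrcoef(predicted, station_obs)[0, 1])
```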

  4. Applied research on air pollution using nuclear-related analytical techniques

    International Nuclear Information System (INIS)

    A co-ordinated research programme (CRP) on applied research on air pollution using nuclear-related techniques is a global CRP which will run from 1992-1996, and will build upon the experience gained by the Agency from the laboratory support that it has been providing for several years to BAPMoN - the Background Air Pollution Monitoring Network programme organized under the auspices of the World Meteorological Organization. The purpose of this CRP is to promote the use of nuclear analytical techniques in air pollution studies, e.g. NAA, XRF, and PIXE for the analysis of toxic and other trace elements in suspended particulate matter (including air filter samples), rainwater and fog-water samples, and in biological indicators of air pollution (e.g. lichens and mosses). The main purposes of the core programme are (i) to support the use of nuclear and nuclear-related analytical techniques for practically oriented research and monitoring studies on air pollution, (ii) to identify major sources of air pollution affecting each of the participating countries with particular reference to toxic heavy metals, and (iii) to obtain comparative data on pollution levels in areas of high pollution (e.g. a city centre or a populated area downwind of a large pollution source) and low pollution (e.g. rural areas). This document reports the discussions held during the first Research Co-ordination Meeting (RCM) for the CRP, which took place at the IAEA Headquarters in Vienna. Refs, figs and tabs

  5. Photothermal Techniques Applied to the Thermal Characterization of l-Cysteine Nanofluids

    Science.gov (United States)

    Alvarado, E. Maldonado; Ramón-Gallegos, E.; Jiménez Pérez, J. L.; Cruz-Orea, A.; Hernández Rosas, J.

    2013-05-01

    Thermal-diffusivity (D) and thermal-effusivity (e) measurements were carried out in l-cysteine nanofluids (l-cysteine in combination with Au nanoparticles and with protoporphyrin IX (PpIX)) by using thermal lens spectrometry (TLS) and photopyroelectric (PPE) techniques. The TLS technique was used in the two mismatched mode experimental configuration to obtain the thermal diffusivity of the samples. The sample thermal effusivity (e) was obtained by using the PPE technique, in which the temperature variation of a sample exposed to modulated radiation is measured with a pyroelectric sensor. From the obtained thermal-diffusivity and thermal-effusivity values, the thermal conductivity and specific heat capacity of the samples were calculated. The obtained thermal parameters were compared with those of water. The results of this study could be applied to the detection of tumors by using the l-cysteine in combination with Au nanoparticles and the PpIX nanofluid, referred to as conjugates in this study.
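The last step above follows directly from the definitions D = k/(ρc) and e = √(kρc), which give k = e√D and ρc = e/√D. A sketch with handbook-style values for water as a sanity check (the numbers are standard reference values, not the paper's measurements):

```python
import math

def conductivity_and_heat_capacity(D, e):
    """Derive thermal conductivity k and volumetric heat capacity rho*c
    from diffusivity D = k/(rho*c) and effusivity e = sqrt(k*rho*c)."""
    k = e * math.sqrt(D)          # W m^-1 K^-1
    rho_c = e / math.sqrt(D)      # J m^-3 K^-1
    return k, rho_c

# Reference check with water: D ~ 1.43e-7 m^2/s, e ~ 1588 W s^0.5 m^-2 K^-1
k, rho_c = conductivity_and_heat_capacity(1.43e-7, 1588.0)
# k should come out near 0.6 W/(m K), rho*c near 4.2e6 J/(m^3 K)
```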

  6. Adaptively Reevaluated Bayesian Localization (ARBL): A novel technique for radiological source localization

    International Nuclear Information System (INIS)

    We present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. This technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement from earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
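The Bayesian scoring described above (a measured count series compared against predicted rates from pre-calculated test sources) can be sketched as follows. The grid, flight path, detector response and source strength are all illustrative assumptions, and the simple isotropic model ignores the directionally-aware detectors and FOV treatment discussed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Candidate test sources on a coarse grid; detector flies two straight legs
grid = [(x, y) for x in range(0, 101, 10) for y in range(0, 101, 10)]
path = [(2.0 * i, 30.0) for i in range(50)] + [(2.0 * i, 60.0) for i in range(50)]

def expected_rate(det, src, strength=20000.0, bg=5.0):
    """Predicted count rate: background plus isotropic 1/r^2 source term."""
    r2 = (det[0] - src[0]) ** 2 + (det[1] - src[1]) ** 2 + 100.0  # + altitude^2
    return bg + strength / r2

true_src = (40.0, 70.0)
counts = rng.poisson([expected_rate(p, true_src) for p in path])

# Poisson log-likelihood of the measured series under each test source;
# the maximizing grid node is the location estimate
def loglike(src):
    lam = np.array([expected_rate(p, src) for p in path])
    return float(np.sum(counts * np.log(lam) - lam))

best_src = max(grid, key=loglike)
```

Because the predicted rates per test source can be pre-calculated, evaluating the likelihood during flight reduces to sums over the current count series.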

  7. Adaptively Reevaluated Bayesian Localization (ARBL): A novel technique for radiological source localization

    Science.gov (United States)

    Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.; McCall, Jonathon D.; Prinke, Amanda M.; Webster, Jennifer B.; Seifert, Carolyn E.

    2015-06-01

    We present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. This technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement from earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.

  8. Adaptive Neuro-Fuzzy Inference System Applied QSAR with Quantum Chemical Descriptors for Predicting Radical Scavenging Activities of Carotenoids

    OpenAIRE

    Changho Jhin; Keum Taek Hwang

    2015-01-01

    One of the physiological characteristics of carotenoids is their radical scavenging activity. In this study, the relationship between radical scavenging activities and quantum chemical descriptors of carotenoids was determined. Adaptive neuro-fuzzy inference system (ANFIS) applied quantitative structure-activity relationship models (QSAR) were also developed for predicting and comparing radical scavenging activities of carotenoids. Semi-empirical PM6 and PM7 quantum chemical calculations were...

  9. Adaptive Finite Element Modeling Techniques for the Poisson-Boltzmann Equation

    CERN Document Server

    Holst, Michael; Yu, Zeyun; Zhou, Yongcheng; Zhu, Yunrong

    2010-01-01

    We develop an efficient and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the regularization technique of Chen, Holst, and Xu; this technique made possible the first a priori pointwise estimates and the first complete solution and approximation theory for the Poisson-Boltzmann equation. It also made possible the first provably convergent discretization of the PBE, and allowed for the development of a provably convergent AFEM for the PBE. However, in practice the regularization turns out to be numerically ill-conditioned. In this article, we examine a second regularization, and establish a number of basic results to ensure that the new approach produces the same mathematical advantages of the original regularization, without the ill-conditioning property. We then design an AFEM scheme based on the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This res...

  10. Hydrological time series modeling: A comparison between adaptive neuro-fuzzy, neural network and autoregressive techniques

    Science.gov (United States)

    Lohani, A. K.; Kumar, Rakesh; Singh, R. D.

    2012-06-01

Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and neuro-fuzzy systems in monthly reservoir inflow forecasting is examined by developing and comparing monthly reservoir inflow prediction models based on autoregressive (AR), artificial neural network (ANN) and adaptive neural-based fuzzy inference system (ANFIS) approaches. To account for the effect of monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors. To evaluate the performance of the selected ANN and ANFIS models, a comparison is made with the AR models. The ANFIS model trained with an input data vector including previous inflows and cyclic terms of monthly periodicity shows a significant improvement in forecast accuracy over ANFIS models trained with input vectors considering only previous inflows. In all cases ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with the cyclic terms is shown to provide a better representation of monthly inflow forecasting for the planning and operation of reservoirs.
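A minimal version of the AR baseline augmented with cyclic terms for monthly periodicity can be sketched as below. This is an illustrative least-squares fit, not the authors' ANFIS model; the lag order and series are assumptions:

```python
import numpy as np

def fit_ar_cyclic(q, p=2):
    # Least-squares fit of q[t] on an intercept, p lagged flows, and
    # sin/cos terms encoding the 12-month cycle.
    t = np.arange(len(q))
    cyc = np.column_stack([np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])
    X = [np.concatenate(([1.0], q[i - p:i][::-1], cyc[i])) for i in range(p, len(q))]
    beta, *_ = np.linalg.lstsq(np.array(X), q[p:], rcond=None)
    return beta

def predict_next(q, beta, p=2):
    # One-step-ahead forecast for month t = len(q).
    t = len(q)
    x = np.concatenate(([1.0], q[-p:][::-1],
                        [np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)]))
    return float(x @ beta)
```

On a purely periodic monthly series the cyclic regressors capture the seasonality exactly, which is the effect the abstract reports for the cyclic-term models.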

  11. The Subarray MVDR Beamformer: A Space-Time Adaptive Processor Applied to Active Sonar

    Science.gov (United States)

    Bezanson, Leverett Guidroz

The research for this thesis was performed mainly at the NATO Undersea Research Centre, now named the Centre for Maritime Research and Experimentation (CMRE). The purpose of the research was to improve the detection of underwater targets in the littoral ocean when using active sonar. Currently these detections are made by towed line arrays using a delay-and-sum beamformer for bearing measurements and noise suppression. This method of beamforming can suffer from the reverberation that is commonly present in the littoral environment. A proposed solution is to use an adaptive beamformer, which can attenuate reverberation and increase bearing resolution. Adaptive beamforming algorithms have existed for a long time but typically are not used in the active case due to the limited amount of observable data available for adaptation. This deficiency is caused by the conflicting requirements of high Doppler resolution for target detection and small time windows for building up full-rank covariance estimates. The algorithms are also sensitive to the bearing estimate errors that commonly occur in active sonar systems. Recently it has been proposed to overcome these limitations through the use of reduced-beamspace adaptive beamforming. The Subarray MVDR beamformer is analyzed both against simulated data and against experimental data collected by CMRE during the GLINT/NGAS11 experiment in 2011. Simulation results indicate that the Subarray MVDR beamformer rejects interfering signals that are not effectively attenuated by conventional beamforming. The application of the Subarray MVDR beamformer to the experimental data shows that the Doppler spread of the reverberation ridge is reduced and the bearing resolution improved. The signal-to-noise ratio, calculated at the target location, also shows improvement. These calculated and observed performance metrics indicate improved detection in reverberation noise.

  12. Applying Agile Requirements Engineering Approach for Re-engineering & Changes in existing Brownfield Adaptive Systems

    OpenAIRE

    Masood, Abdullah; Ali, M. Asim

    2014-01-01

Requirements Engineering (RE) is a key activity in the development of software systems and is concerned with the identification of the goals of stakeholders and their elaboration into precise statements of desired services and behavior. This research describes an Agile Requirements Engineering approach for re-engineering & changes in an existing Brownfield adaptive system. The approach includes a few modifications that can be used as part of the SCRUM development process for re-engineering & changes. The ...

  13. Applying computer adaptive testing to optimize online assessment of suicidal behavior: a simulation study.

    OpenAIRE

    Beurs, D.P. de; Vries, A.L.M. de; Groot, M.H. de; Keijser, J. de; Kerkhof, A.J.F.M.

    2014-01-01

    Background The Internet is used increasingly for both suicide research and prevention. To optimize online assessment of suicidal patients, there is a need for short, good-quality tools to assess elevated risk of future suicidal behavior. Computer adaptive testing (CAT) can be used to reduce response burden and improve accuracy, and make the available pencil-and-paper tools more appropriate for online administration. Objective The aim was to test whether an item response–based computer adaptiv...

  14. Data smoothing techniques applied to proton microprobe scans of teleost hard parts

    International Nuclear Information System (INIS)

We use a proton microprobe to examine the distribution of elements in otoliths and scales of teleost (bony) fish. The elements of principal interest are calcium and strontium in otoliths and calcium and fluorine in scales. Changes in the distribution of these elements across hard structures may allow inferences about the life histories of fish. Otoliths and scales of interest are up to a centimeter in linear dimension, and up to 200 sampling points per dimension are required to reveal the structures of interest. The time needed to accumulate high X-ray counts at each sampling point can be large, particularly for strontium. To reduce microprobe usage we use data smoothing techniques to reveal changing patterns with modest X-ray count accumulations at individual data points. In this paper we review the performance, in revealing pattern at modest levels of X-ray count accumulation, of a selection of digital filters (moving-average smoothers), running median filters, robust locally weighted regression filters and adaptive spline filters. (author)
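Two of the simpler smoother families compared above, the moving-average (boxcar) filter and the running median, can be sketched as follows; the window sizes are illustrative, not the authors' settings:

```python
import numpy as np

def moving_average(x, w=5):
    # Boxcar smoother: each point is replaced by the mean of a w-wide window.
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

def running_median(x, w=5):
    # Median over a sliding window; robust to isolated count spikes,
    # which a moving average only spreads out.
    half = w // 2
    xp = np.pad(x, half, mode="edge")
    return np.array([np.median(xp[i:i + w]) for i in range(len(x))])
```

The contrast matters for low-count X-ray data: a single spurious spike is removed entirely by the median but biases every window it enters under the moving average.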

  15. Adaptive Neural Control of Pure-Feedback Nonlinear Time-Delay Systems via Dynamic Surface Technique.

    Science.gov (United States)

    Min Wang; Xiaoping Liu; Peng Shi

    2011-12-01

This paper is concerned with the robust stabilization problem for a class of nonaffine pure-feedback systems with unknown time-delay functions and perturbed uncertainties. Novel continuous packaged functions are introduced in advance to remove the unknown nonlinear terms deduced from the perturbed uncertainties and unknown time-delay functions, which avoids requiring the functions containing the control law to be approximated by radial basis function (RBF) neural networks. This technique, combining the implicit function and mean value theorems, overcomes the difficulty in controlling nonaffine pure-feedback systems. Dynamic surface control (DSC) is used to avoid "the explosion of complexity" in the backstepping design. Design difficulties arising from the unknown time-delay functions are overcome using the function separation technique, Lyapunov-Krasovskii functionals, and the desirable properties of hyperbolic tangent functions. RBF neural networks are employed to approximate the desired virtual controls and the desired practical control. Under the proposed adaptive neural DSC, the number of adaptive parameters required is reduced significantly, and semiglobal uniform ultimate boundedness of all signals in the closed-loop system is guaranteed. Simulation studies are given to demonstrate the effectiveness of the proposed design scheme.

  16. New Region Growing based on Thresholding Technique Applied to MRI Data

    Directory of Open Access Journals (Sweden)

    A. Afifi

    2015-06-01

This paper proposes an optimal region growing threshold for the segmentation of magnetic resonance images (MRIs). The proposed algorithm combines a local search procedure with thresholding region growing to achieve better generic seeds and optimal thresholds for the region growing method. A procedure is used to detect the best possible seeds from a set of data distributed all over the image, taken as high accumulators of the histogram. The output seeds are fed to the local search algorithm to extract the best seeds around the initial seeds. Optimal thresholds are used to overcome the limitations of the region growing algorithm and to select pixels sequentially in a random walk starting at the seed point. The proposed algorithm works automatically without any predefined parameters. It is applied to the challenging "gray matter/white matter" segmentation datasets. The experimental results, compared with other segmentation techniques, show that the proposed algorithm produces more accurate and stable results.
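The core of any such method is the seeded growth step itself; a minimal sketch with a running-mean homogeneity test is shown below. This is not the authors' optimal-threshold algorithm (their seeds and thresholds come from the histogram and a local search); the fixed threshold and 4-connectivity are assumptions:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thresh):
    # Grow a region from `seed`, adding 4-connected pixels whose intensity
    # lies within `thresh` of the running region mean.
    mask = np.zeros(img.shape, bool)
    q = deque([seed])
    mask[seed] = True
    total, n = float(img[seed]), 1
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1] and not mask[rr, cc]:
                if abs(img[rr, cc] - total / n) <= thresh:
                    mask[rr, cc] = True
                    total += img[rr, cc]
                    n += 1
                    q.append((rr, cc))
    return mask
```

The quality of the result hinges on the seed and threshold choices, which is exactly the part the paper optimizes.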

  17. Operations research techniques applied to service center logistics in power distribution users

    Directory of Open Access Journals (Sweden)

    Maria Teresinha Arns Steiner

    2006-12-01

This paper deals with the optimization of the logistics regarding services demanded by users of power distribution lines served by the Portão office, located in Curitiba, PR, Brazil, and operated by COPEL (Paranaense Power Company). Through the use of Operations Research techniques, an Integer Programming mathematical model and the Floyd algorithm, a method was defined to determine, in an optimized way, the number of teams needed by the selected office, as well as the optimized assignment of the teams to the sites in need, in order to offer efficient services to the users: immediate execution in emergencies and, for the other services, execution according to parameters set by the National Power Agency together with COPEL. The methodology presented here is generic, so that it can be applied to any power network (or any of its lines), and it has presented very satisfactory results for the case in analysis.

  18. A systematic review of applying modern software engineering techniques to developing robotic systems

    Directory of Open Access Journals (Sweden)

    Claudia Pons

    2012-04-01

Robots have become collaborators in our daily life. While robotic systems become more and more complex, the need to engineer their software development grows as well. The traditional approaches used in developing these software systems are reaching their limits; currently used methodologies and tools fall short of addressing the needs of such complex software development. Separating robotics knowledge from short-cycled implementation technologies is essential to foster reuse and maintenance. This paper presents a systematic literature review (SLR) of the current use of modern software engineering techniques for developing robotic software systems and their actual automation level. The survey was aimed at summarizing existing evidence concerning the application of such technologies to the field of robotic systems, identifying gaps in current research, suggesting areas for further investigation, and providing a background for positioning new research activities.

  19. Electron Correlation Microscopy: A New Technique for Studying Local Atom Dynamics Applied to a Supercooled Liquid.

    Science.gov (United States)

    He, Li; Zhang, Pei; Besser, Matthew F; Kramer, Matthew Joseph; Voyles, Paul M

    2015-08-01

Electron correlation microscopy (ECM) is a new technique that utilizes time-resolved coherent electron nanodiffraction to study dynamic atomic rearrangements in materials. It is the electron scattering equivalent of photon correlation spectroscopy, with the added advantage of nanometer-scale spatial resolution. We have applied ECM to a Pd40Ni40P20 metallic glass, heated inside a scanning transmission electron microscope into a supercooled liquid, to measure the structural relaxation time τ between the glass transition temperature Tg and the crystallization temperature Tx. τ determined from the mean diffraction intensity autocorrelation function g2(t) decreases with temperature, following an Arrhenius relationship between Tg and Tg+25 K, and then increases as the temperature approaches Tx. The distribution of τ determined from the g2(t) of single speckles is broad and changes significantly with temperature.
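The intensity autocorrelation function underlying the measurement above can be estimated directly from a speckle intensity time series; a minimal sketch (normalization conventions vary between implementations):

```python
import numpy as np

def g2(intensity, max_lag):
    # g2(t) = <I(t') I(t'+t)> / <I>^2, estimated over the full series
    # for integer lags 0 .. max_lag-1.
    I = np.asarray(intensity, float)
    norm = I.mean() ** 2
    return np.array([np.mean(I[:len(I) - k] * I[k:]) / norm
                     for k in range(max_lag)])
```

For fully developed speckle with uncorrelated samples, g2(0) is elevated above the long-lag baseline of 1; the decay of g2(t) toward that baseline gives the relaxation time τ.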

  20. Solar coronal magnetic fields derived using seismology techniques applied to omnipresent sunspot waves

    CERN Document Server

    Jess, D B; Ryans, R S I; Christian, D J; Keys, P H; Mathioudakis, M; Mackay, D H; Prasad, S Krishna; Banerjee, D; Grant, S D T; Yau, S; Diamond, C

    2016-01-01

    Sunspots on the surface of the Sun are the observational signatures of intense manifestations of tightly packed magnetic field lines, with near-vertical field strengths exceeding 6,000 G in extreme cases. It is well accepted that both the plasma density and the magnitude of the magnetic field strength decrease rapidly away from the solar surface, making high-cadence coronal measurements through traditional Zeeman and Hanle effects difficult since the observational signatures are fraught with low-amplitude signals that can become swamped with instrumental noise. Magneto-hydrodynamic (MHD) techniques have previously been applied to coronal structures, with single and spatially isolated magnetic field strengths estimated as 9-55 G. A drawback with previous MHD approaches is that they rely on particular wave modes alongside the detectability of harmonic overtones. Here we show, for the first time, how omnipresent magneto-acoustic waves, originating from within the underlying sunspot and propagating radially outwa...

  1. Applying Multi-Criteria Decision-Making Techniques to Prioritize Agility Drivers

    Directory of Open Access Journals (Sweden)

    Ahmad Jafarnejad

    2013-07-01

Recognizing and classifying the factors affecting organizational agility, and specifying their relative importance, are essential to preserving an organization's survival and success in today's environment. This paper reviews the concept of agility and its constituent indicators, including the drivers of organizational agility, which are ranked in terms of importance and influence using multi-criteria decision-making (MCDM) techniques. Internal complexity, suppliers, competition, customer needs, the market, technology and social factors are the most important factors affecting organizational agility; evaluating and applying these indicators (through process re-engineering, reviewing and predicting customer needs, and better understanding the competitive environment and the supply chain) ultimately determines organizational agility and success.

  2. A comparative assessment of texture analysis techniques applied to bone tool use-wear

    Science.gov (United States)

    Watson, Adam S.; Gleason, Matthew A.

    2016-06-01

    The study of bone tools, a specific class of artifacts often essential to perishable craft production, provides insight into industries otherwise largely invisible archaeologically. Building on recent breakthroughs in the analysis of microwear, this research applies confocal laser scanning microscopy and texture analysis techniques drawn from the field of surface metrology to identify use-wear patterns on experimental and archaeological bone artifacts. Our approach utilizes both conventional parameters and multi-scale geometric characterizations of the areas of worn surfaces to identify statistical similarities as a function of scale. The introduction of this quantitative approach to the study of microtopography holds significant potential for advancement in use-wear studies by reducing inter-observer variability and identifying new parameters useful in the detection of differential wear-patterns.

  3. Creep lifing methodologies applied to a single crystal superalloy by use of small scale test techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jeffs, S.P., E-mail: s.p.jeffs@swansea.ac.uk [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Lancaster, R.J. [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Garcia, T.E. [IUTA (University Institute of Industrial Technology of Asturias), University of Oviedo, Edificio Departamental Oeste 7.1.17, Campus Universitario, 33203 Gijón (Spain)

    2015-06-11

In recent years, advances in creep data interpretation have been achieved either by modified Monkman–Grant relationships or through the more contemporary Wilshire equations, which offer the opportunity of predicting long term behaviour extrapolated from short term results. Long term lifing techniques prove extremely useful in creep dominated applications, such as in the power generation industry, and in particular nuclear, where large static loads are applied; equally, a reduction in lead time for new alloy implementation within the industry is critical. The latter requirement brings about the utilisation of the small punch (SP) creep test, a widely recognised approach for obtaining useful mechanical property information from limited material volumes, as is typically the case with novel alloy development and for any in-situ mechanical testing that may be required. The ability to correlate SP creep results with uniaxial data is vital when considering the benefits of the technique. As such, an equation has been developed, known as the k_SP method, which has been proven to be an effective tool across several material systems. The current work explores the application of the aforementioned empirical approaches to correlate small punch creep data obtained on a single crystal superalloy over a range of elevated temperatures. Finite element modelling through ABAQUS software, based on the uniaxial creep data, has also been implemented to characterise the SP deformation and help corroborate the experimental results.

  4. How Can Synchrotron Radiation Techniques Be Applied for Detecting Microstructures in Amorphous Alloys?

    Directory of Open Access Journals (Sweden)

    Gu-Qing Guo

    2015-11-01

In this work, we study how synchrotron radiation techniques can be applied to detect the microstructure of metallic glass (MG). Unit cells are the basic structural units in crystals, whereas it has been suggested that the co-existence of various clusters may be the universal structural feature of MG. It is therefore a challenge to detect the microstructure of MG, even at the short-range scale, by directly using synchrotron radiation techniques such as X-ray diffraction and X-ray absorption methods. Here, a feasible scheme is developed in which state-of-the-art synchrotron radiation-based experiments are combined with simulations to investigate the microstructure of MG. By studying a typical MG composition (Zr70Pd30), it is found that various clusters do co-exist in its microstructure, with icosahedral-like clusters being the dominant structural units. This is the structural origin of the precipitation of an icosahedral quasicrystalline phase prior to the glass-to-crystal phase transformation when heating Zr70Pd30 MG.

  5. An acceleration technique for the Gauss-Seidel method applied to symmetric linear systems

    Directory of Open Access Journals (Sweden)

    Jesús Cajigas

    2014-06-01

A preconditioning technique to improve the convergence of the Gauss-Seidel method applied to symmetric linear systems while preserving symmetry is proposed. The preconditioner is of the form I + K and can be applied an arbitrary number of times. It is shown that under certain conditions, applying the preconditioner a finite number of times reduces the matrix to a diagonal. A series of numerical experiments using matrices from spatial discretizations of partial differential equations demonstrates that both versions of the preconditioner, the point and the block version, exhibit lower iteration counts than the non-symmetric version.
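For reference, the basic Gauss-Seidel sweep that such preconditioners accelerate can be sketched as follows; the I + K preconditioner itself is not reproduced here:

```python
import numpy as np

def gauss_seidel(A, b, iters=100, x0=None):
    # Classical Gauss-Seidel: sweep through the rows, always using the
    # freshest available values of x (equivalent to solving (D + L) x_new = b - U x_old).
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x
```

On symmetric positive definite systems, such as the 1-D Laplacian used in the test below, the iteration converges but can be slow, which is what motivates preconditioning of the form I + K.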

  6. Personnel contamination protection techniques applied during the TMI-2 [Three Mile Island Unit 2] cleanup

    International Nuclear Information System (INIS)

    The severe damage to the Three Mile Island Unit 2 (TMI-2) core and the subsequent discharge of reactor coolant to the reactor and auxiliary buildings resulted in extremely hostile radiological environments in the TMI-2 plant. High fission product surface contamination and radiation levels necessitated the implementation of innovative techniques and methods in performing cleanup operations while assuring effective as low as reasonably achievable (ALARA) practices. The approach utilized by GPU Nuclear throughout the cleanup in applying protective clothing requirements was to consider the overall health risk to the worker including factors such as cardiopulmonary stress, visual and hearing acuity, and heat stress. In applying protective clothing requirements, trade-off considerations had to be made between preventing skin contaminations and possibly overprotecting the worker, thus impacting his ability to perform his intended task at maximum efficiency and in accordance with ALARA principles. The paper discusses the following topics: protective clothing-general use, beta protection, skin contamination, training, personnel access facility, and heat stress

  7. Adaptive clutter rejection filters for airborne Doppler weather radar applied to the detection of low altitude windshear

    Science.gov (United States)

    Keel, Byron M.

    1989-01-01

    An optimum adaptive clutter rejection filter for use with airborne Doppler weather radar is presented. The radar system is being designed to operate at low-altitudes for the detection of windshear in an airport terminal area where ground clutter returns may mask the weather return. The coefficients of the adaptive clutter rejection filter are obtained using a complex form of a square root normalized recursive least squares lattice estimation algorithm which models the clutter return data as an autoregressive process. The normalized lattice structure implementation of the adaptive modeling process for determining the filter coefficients assures that the resulting coefficients will yield a stable filter and offers possible fixed point implementation. A 10th order FIR clutter rejection filter indexed by geographical location is designed through autoregressive modeling of simulated clutter data. Filtered data, containing simulated dry microburst and clutter return, are analyzed using pulse-pair estimation techniques. To measure the ability of the clutter rejection filters to remove the clutter, results are compared to pulse-pair estimates of windspeed within a simulated dry microburst without clutter. In the filter evaluation process, post-filtered pulse-pair width estimates and power levels are also used to measure the effectiveness of the filters. The results support the use of an adaptive clutter rejection filter for reducing the clutter induced bias in pulse-pair estimates of windspeed.
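The pulse-pair estimator used above to evaluate the filters computes mean Doppler velocity from the lag-one autocorrelation of the complex radar return; a minimal sketch (the wavelength and pulse repetition time below are illustrative, not the system's parameters):

```python
import numpy as np

def pulse_pair_velocity(z, prt, wavelength):
    # Pulse-pair estimator: mean radial velocity from the phase of the
    # lag-1 autocorrelation R(1) of the complex return z.
    # Unambiguous up to the Nyquist velocity wavelength / (4 * prt).
    r1 = np.mean(z[1:] * np.conj(z[:-1]))
    return wavelength * np.angle(r1) / (4 * np.pi * prt)
```

Residual clutter biases r1 toward zero phase, pulling the velocity estimate toward 0 m/s, which is why clutter-filter performance is judged by the bias in these estimates.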

  8. Applying the “WSUD potential”-tool in the framework of the Copenhagen Climate Adaptation and Cloudburst Management Plans

    DEFF Research Database (Denmark)

    Lerer, Sara Maria; Madsen, Herle Mo; Smit Andersen, Jonas;

    2016-01-01

Water Sensitive Urban Design (WSUD) is still in the "Opportunity"-phase of its stabilization process in Copenhagen, Denmark, indicating that there are controversies surrounding its proper use and that the regulatory framework is not completely adapted to the new technology. In 2015, private land owners in Denmark could get up to 100% of the construction costs of climate adaptation measures funded by the utility companies, which resulted in a race to apply for this co-funding plan. In this study we briefly review the climate adaptation framework in Copenhagen, and then discuss how well different scenarios of WSUD in a case study area interact with this framework. The impacts of the different scenarios are assessed using the "WSUD potential" tool, which builds upon the Three Points Approach. The results indicate that there is a schism between the city's Cloudburst Management Plan on one side and its Climate Adaptation Plan and general service goal on the other side, which may result in over-sizing of the collective stormwater management system.

  9. Array model interpolation and subband iterative adaptive filters applied to beamforming-based acoustic echo cancellation.

    Science.gov (United States)

    Bai, Mingsian R; Chi, Li-Wen; Liang, Li-Huang; Lo, Yi-Yang

    2016-02-01

    In this paper, an evolutionary exposition is given in regard to the enhancing strategies for acoustic echo cancellers (AECs). A fixed beamformer (FBF) is utilized to focus on the near-end speaker while suppressing the echo from the far end. In reality, the array steering vector could differ considerably from the ideal freefield plane wave model. Therefore, an experimental procedure is developed to interpolate a practical array model from the measured frequency responses. Subband (SB) filtering with polyphase implementation is exploited to accelerate the cancellation process. Generalized sidelobe canceller (GSC) composed of an FBF and an adaptive blocking module is combined with AEC to maximize cancellation performance. Another enhancement is an internal iteration (IIT) procedure that enables efficient convergence in the adaptive SB filters within a sample time. Objective tests in terms of echo return loss enhancement (ERLE), perceptual evaluation of speech quality (PESQ), word recognition rate for automatic speech recognition (ASR), and subjective listening tests are conducted to validate the proposed AEC approaches. The results show that the GSC-SB-AEC-IIT approach has attained the highest ERLE without speech quality degradation, even in double-talk scenarios. PMID:26936567
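The adaptive-filtering core of any AEC can be sketched with a plain single-channel normalized LMS canceller; this is not the paper's GSC-SB-AEC-IIT system (no beamforming, subbands, or internal iterations), and the tap count and step size are illustrative:

```python
import numpy as np

def nlms_aec(far, mic, taps=32, mu=0.5, eps=1e-8):
    # Adapt an FIR echo-path estimate w so that filtering the far-end
    # signal cancels the echo component of the microphone signal.
    w = np.zeros(taps)
    err = np.zeros(len(mic))
    for n in range(taps - 1, len(mic)):
        x = far[n - taps + 1:n + 1][::-1]   # newest far-end sample first
        e = mic[n] - w @ x                  # residual after echo estimate
        w += mu * e * x / (x @ x + eps)     # normalized LMS update
        err[n] = e
    return w, err
```

Subband processing, as in the paper, runs such updates on decimated band signals to cut cost and speed convergence; the internal-iteration idea repeats the update several times per sample.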

  10. EPA Water Resources Adaptation Program (WRAP) Research and Development Activities Methods and Techniques

    Science.gov (United States)

    Adaptation to environmental change is not a new concept. Humans have shown throughout history a capacity for adapting to different climates and environmental changes. Farmers, foresters, civil engineers, have all been forced to adapt to numerous challenges to overcome adversity...

  11. Adaptive Hypermedia Technique and its Applications in Intelligent CAI

    Institute of Scientific and Technical Information of China (English)

    周学海; 周立; 龚育昌; 赵振西

    2001-01-01

Applying the methods and techniques of adaptive hypermedia to intelligent tutoring systems fully embodies the idea of tailoring course material to individual students and improves learning outcomes. This paper introduces the key methods and techniques of adaptive hypermedia systems and describes the composition and structure of intelligent tutoring systems. Then, in the context of the development of the adaptive educational system KDAES, it discusses how adaptive techniques support the intelligence of CAI and the construction of the student model, the kernel module of the system.

  12. APPLICATION OF SUBBAND ADAPTIVE THRESHOLDING TECHNIQUE WITH NEIGHBOURHOOD PIXEL FILTERING FOR DENOISING MRI IMAGES

    Directory of Open Access Journals (Sweden)

    S. KALAVATHY

    2012-02-01

The denoising of images naturally corrupted by noise is a classical problem in the field of signal and image processing. Denoising has become an essential exercise in medical imaging, especially magnetic resonance imaging (MRI). We propose a new method for MRI restoration, because MR magnitude images suffer from a contrast-reducing, signal-dependent bias. The noise is often assumed to be white; however, a widely used acquisition technique to decrease the acquisition time gives rise to correlated noise. A subband adaptive thresholding technique based on wavelet coefficients, along with a Neighbourhood Pixel Filtering Algorithm (NPFA), for noise suppression in magnetic resonance images is presented in this paper. A statistical model is proposed to estimate the noise variance for each coefficient based on the subband, using a maximum likelihood (ML) estimator or a maximum a posteriori (MAP) estimator. This model also describes a new method for noise suppression that fuses the wavelet denoising technique with an optimized thresholding function, achieved by including a multiplying factor (α) to make the threshold value dependent on the decomposition level. By finding the Neighbourhood Pixel Difference (NPD) and adding NPFA along with subband thresholding, the clarity of the image is improved. The filtered value is generated by minimizing the NPD and a weighted mean square error (WMSE) using the method of least squares. A reduction in noise pixels is well observed on replacing the optimal weight, namely the NPFA filter solution, with the noisy value of the current pixel. Due to this, the NPFA filter gains the effect of both a high-pass and a low-pass filter. Hence the proposed technique yields significantly superior image quality, preserving edges and producing a better PSNR value. To confirm its efficiency, it is further compared with the median filter, the Wiener filter, and the subband thresholding technique combined with the NPFA filter.
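The subband soft-thresholding idea can be sketched with a single-level Haar transform and the universal threshold scaled by a level-dependent factor α. This is a simplified stand-in for the authors' subband-adaptive scheme (no NPFA step, no ML/MAP variance model); all parameters are illustrative:

```python
import numpy as np

def soft_threshold(c, t):
    # Soft shrinkage: move coefficients toward zero by t, clipping at zero.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def haar_denoise(x, alpha=1.0):
    # One-level Haar transform (assumes even length), threshold applied
    # to the detail subband only, then inverse transform.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation subband
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail subband
    sigma = np.median(np.abs(d)) / 0.6745   # robust noise estimate from details
    t = alpha * sigma * np.sqrt(2 * np.log(len(x)))   # universal threshold
    d = soft_threshold(d, t)
    y = np.empty_like(x, dtype=float)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

In a multi-level version, α makes the threshold depend on the decomposition level, which is the role the abstract assigns to its multiplying factor.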

  13. Comparison of adaptive statistical iterative and filtered back projection reconstruction techniques in brain CT

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Qingguo, E-mail: renqg83@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Dewan, Sheilesh Kumar, E-mail: sheilesh_d1@hotmail.com [Department of Geriatrics, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Li, Ming, E-mail: minli77@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Li, Jianying, E-mail: Jianying.Li@med.ge.com [CT Imaging Research Center, GE Healthcare China, Beijing (China); Mao, Dingbiao, E-mail: maodingbiao74@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Wang, Zhenglei, E-mail: Williswang_doc@yahoo.com.cn [Department of Radiology, Shanghai Electricity Hospital, Shanghai 200050 (China); Hua, Yanqing, E-mail: cjr.huayanqing@vip.163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China)

    2012-10-15

    Purpose: To compare image quality and the visualization of normal structures and lesions in brain computed tomography (CT) with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP) reconstruction techniques at different X-ray tube current–time products. Materials and methods: In this IRB-approved prospective study, forty patients (nineteen men, twenty-one women; mean age 69.5 ± 11.2 years) underwent brain scans at different tube current–time products (300 and 200 mAs) on a 64-section multi-detector CT (GE Discovery CT750 HD). Images were reconstructed with FBP and four levels of ASIR-FBP blending. Two radiologists (note that our hospital is renowned for its geriatric medicine department, and these two radiologists are more experienced in chronic cerebrovascular disease than in neoplastic disease, so this study did not include cerebral tumors, except in the discussion) assessed all the reconstructed images for visibility of normal structures, lesion conspicuity, image contrast and diagnostic confidence in a blinded and randomized manner. The volume CT dose index (CTDI_vol) and dose-length product (DLP) were recorded. All data were analyzed using SPSS 13.0 statistical analysis software. Results: There was no statistically significant difference between image quality at 200 mAs with 50% ASIR blending and at 300 mAs with FBP (p > .05), whereas a statistically significant difference (p < .05) was found between image quality at 200 mAs with FBP and at 300 mAs with FBP. Conclusion: ASIR provided the same image quality and diagnostic ability in brain imaging with a greater than 30% dose reduction compared with the FBP reconstruction technique.
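
The ASIR-FBP blending levels mentioned above amount to a per-pixel linear mix of the two reconstructions; a minimal sketch (with made-up pixel values) might look like:

```python
def blend(fbp, asir, level):
    # Linearly blend an FBP image with an iterative (ASIR) reconstruction.
    # level is the ASIR fraction, e.g. 0.5 for "50% ASIR blending".
    # Images are given here as flat lists of pixel values (a simplification).
    assert 0.0 <= level <= 1.0
    return [level * a + (1.0 - level) * f for f, a in zip(fbp, asir)]

fbp_img  = [10.0, 12.0, 8.0, 11.0]   # toy FBP pixel values (noisier)
asir_img = [10.5, 11.5, 8.5, 10.5]   # toy ASIR pixel values (smoother)
mixed = blend(fbp_img, asir_img, 0.5)
```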

  14. ADAPTIVE HARMONIC CANCELLATION APPLIED IN ELECTRO-HYDRAULIC SERVO SYSTEM WITH ANN

    Institute of Scientific and Technical Information of China (English)

    Yao Jianjun; Wu Zhenshun; Han Junwei; Yue Donghai

    2004-01-01

    A method for harmonic cancellation based on an artificial neural network (ANN) is proposed. The task is accomplished by generating a reference signal at the frequency that should be eliminated from the output. The reference input is weighted by the ANN in such a way that it closely matches the harmonic. The weighted reference signal is added to the fundamental signal such that the output harmonic is cancelled, leaving the desired signal alone. The weights of the ANN are adjusted according to the output harmonic, which is isolated by a bandpass filter. This concept is used as the basis for the development of an adaptive harmonic cancellation (AHC) algorithm. Simulation results obtained with a hydraulic system demonstrate the efficiency and validity of the proposed AHC control scheme.
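
A minimal sketch of this cancellation idea, with a single adaptive linear neuron standing in for the ANN and an LMS-style weight update. Frequencies, amplitudes and step size are illustrative, and the bandpass isolation step is simplified away.

```python
import math

# A sine/cosine reference pair at the harmonic frequency is adaptively
# weighted and subtracted from the output, leaving the fundamental alone.

f0, fh = 1.0, 50.0        # fundamental and harmonic frequencies [Hz]
dt, mu = 1e-3, 0.01       # sample period [s] and LMS step size
w_sin = w_cos = 0.0       # adaptive reference weights

residual = []
for n in range(20000):
    t = n * dt
    desired = math.sin(2 * math.pi * f0 * t)                       # fundamental
    output = desired + 0.4 * math.sin(2 * math.pi * fh * t + 0.7)  # + harmonic
    r_s = math.sin(2 * math.pi * fh * t)
    r_c = math.cos(2 * math.pi * fh * t)
    cancel = w_sin * r_s + w_cos * r_c
    e = output - cancel                # corrected output, fed back as the error
    w_sin += mu * e * r_s              # LMS update (the paper instead isolates
    w_cos += mu * e * r_c              # the harmonic with a bandpass filter)
    residual.append(e - desired)       # remaining harmonic content

early = max(abs(v) for v in residual[:100])
late = max(abs(v) for v in residual[-1000:])
```

After convergence the weighted reference closely matches the 50 Hz harmonic, so the late residual is far smaller than the initial 0.4 amplitude.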

  15. Modern structure of methods and techniques of marketing research, applied by the world and Ukrainian research companies

    Directory of Open Access Journals (Sweden)

    Bezkrovnaya Yulia

    2015-08-01

    Full Text Available The article presents the results of an empirical justification of the structure of methods and techniques of marketing research into consumer decisions, as applied by the world and Ukrainian research companies.

  16. Optical Cluster-Finding with an Adaptive Matched-Filter Technique: Algorithm and Comparison with Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Feng; Pierpaoli, Elena; Gunn, James E.; Wechsler, Risa H.

    2007-10-29

    We present a modified adaptive matched filter algorithm designed to identify clusters of galaxies in wide-field imaging surveys such as the Sloan Digital Sky Survey. The cluster-finding technique is fully adaptive to imaging surveys with spectroscopic coverage, multicolor photometric redshifts, no redshift information at all, or any combination of these within one survey. It works with high efficiency in multi-band imaging surveys where photometric redshifts can be estimated with well-understood error distributions. Tests of the algorithm on realistic mock SDSS catalogs suggest that the detected sample is ~85% complete and over 90% pure for clusters with masses above 1.0 × 10^14 h^-1 M_⊙ and redshifts up to z = 0.45. The errors of cluster redshifts estimated by the maximum likelihood method are shown to be small (typically less than 0.01) over the whole redshift range, for photometric redshift errors typical of those found in the Sloan survey. Inside the spherical radius corresponding to a galaxy overdensity of Δ = 200, we find the derived cluster richness Λ_200 to be a roughly linear indicator of the virial mass M_200, which recovers well the relation between total luminosity and cluster mass of the input simulation.
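
The core matched-filter idea, stripped to a 1-D toy (Gaussian template, synthetic noisy profile, all parameters invented), can be sketched as:

```python
import math, random

# Correlate a noisy galaxy-count profile with a Gaussian cluster template and
# take the peak of the filtered signal as the detected cluster position.

def template(dx, width):
    return math.exp(-0.5 * (dx / width) ** 2)

random.seed(1)
n, center, width, amp = 200, 120, 5.0, 4.0
field = [amp * template(i - center, width) + random.gauss(0, 0.5)
         for i in range(n)]

def matched_filter(field, width):
    scores = []
    for c in range(len(field)):
        # Normalized correlation of the field with a template centered at c.
        t = [template(i - c, width) for i in range(len(field))]
        norm = math.sqrt(sum(v * v for v in t))
        scores.append(sum(f * v for f, v in zip(field, t)) / norm)
    return scores

scores = matched_filter(field, width)
detected = scores.index(max(scores))
```

The real algorithm maximizes a likelihood over position, redshift and richness with survey-specific filters; the principle of weighting data by an expected cluster profile is the same.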

  17. Evaluation of internal adaptation in ceramic and composite resin inlays by silicon replica technique.

    Science.gov (United States)

    Karakaya, S; Sengun, A; Ozer, F

    2005-06-01

    This study aimed to investigate the internal adaptation of a ceramic (Ceramco II) and two composite resin inlay materials (SureFil and 3M Filtek Z250) using the silicon replica technique as an indicator. Forty-five standard mesial-occlusal-distal (MOD) cavities were prepared in brass moulds using a computer numerically controlled system. Inlays were prepared with indirect methods according to the manufacturers' instructions. Replicas of the prepared cavities and inlays were produced with a polyvinyl siloxane material (Elite H-D). The spaces between inlays and cavities were filled with a differently coloured light-body polyvinyl siloxane material. Two parallel slices (mesio-distally) were obtained from the replicas with a sharp blade. The thickness of the differently coloured polyvinyl siloxane material between cavity and inlay was measured at seven points (mesial, occlusal and distal). The data were evaluated with ANOVA and Tukey's honestly significant difference (HSD) tests. In the SureFil and Ceramco II groups, the contraction gaps at the mesial and distal gingival floors were larger than those at the occlusal marginal walls. In the comparison of gap formation at occlusal regions, the 3M composite group showed the highest gap values (204.33 ± 75.45 μm), while the Ceramco II group revealed the lowest (141.17 ± 23.66 μm) (P 0.05). In conclusion, our results showed that ceramic inlays did not confer any major advantage in internal adaptation over the composite inlays.

  18. Utilizing a Magnetic Abrasive Finishing Technique (MAF) via Adaptive Neuro-Fuzzy Inference System (ANFIS)

    Directory of Open Access Journals (Sweden)

    Amer A. Moosa

    2015-07-01

    Full Text Available An experimental study was conducted to measure the surface finish roughness obtained using the magnetic abrasive finishing (MAF) technique on a brass plate, a material that is very difficult to polish by conventional machining, where the cost is high and the surface is much more susceptible to damage than for other materials. Four operation parameters were studied: the gap between the workpiece and the electromagnetic inductor, the current that generates the flux, the rotational spindle speed, and the abrasive powder size, with a constant linear feed movement between the machine head and the workpiece. An Adaptive Neuro-Fuzzy Inference System (ANFIS) was implemented to evaluate a series of experiments, and verification with respect to the specimen roughness change was achieved: the average error between the surface roughness predicted by the model simulation and the directly measured roughness is 2.0222%.
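
A zero-order Sugeno fuzzy inference of the kind ANFIS tunes can be sketched as follows; the membership functions, rule base and roughness consequents below are hypothetical, not the trained model from the study.

```python
import math

def gauss_mf(x, mean, sigma):
    # Gaussian membership function used for each input fuzzy set.
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Hypothetical rule base mapping (gap [mm], current [A]) to predicted
# surface roughness [um]; the consequents are zero-order Sugeno constants.
rules = [
    # ((gap mean, gap sigma), (current mean, current sigma), roughness)
    ((1.0, 0.5), (2.0, 1.0), 0.30),   # small gap, low current  -> rougher
    ((1.0, 0.5), (4.0, 1.0), 0.22),
    ((2.5, 0.5), (2.0, 1.0), 0.26),
    ((2.5, 0.5), (4.0, 1.0), 0.18),   # large gap, high current -> smoother
]

def predict(gap, current):
    # Firing strength of each rule = product of its input memberships;
    # output = firing-strength-weighted average of the rule consequents.
    w = [gauss_mf(gap, gm, gs) * gauss_mf(current, cm, cs)
         for (gm, gs), (cm, cs), _ in rules]
    z = [r for _, _, r in rules]
    return sum(wi * zi for wi, zi in zip(w, z)) / sum(w)

r = predict(1.0, 2.0)
```

ANFIS additionally back-propagates errors to tune the membership parameters and consequents; here they are fixed by hand.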

  19. Modeling gravitational instabilities in self-gravitating protoplanetary disks with adaptive mesh refinement techniques

    CERN Document Server

    Lichtenberg, Tim

    2015-01-01

    The astonishing diversity in the observed planetary population requires theoretical efforts and advances in planet formation theories. Numerical approaches provide a method to tackle the weaknesses of current planet formation models and are an important tool to close gaps in poorly constrained areas. We present a global disk setup to model the first stages of giant planet formation via gravitational instabilities (GI) in 3D with the block-structured adaptive mesh refinement (AMR) hydrodynamics code ENZO. With this setup, we explore the impact of AMR techniques on the fragmentation and clumping due to large-scale instabilities using different AMR configurations. Additionally, we seek to derive general resolution criteria for global simulations of self-gravitating disks of variable extent. We run a grid of simulations with varying AMR settings, including runs with a static grid for comparison, and study the effects of varying the disk radius. Adopting a marginally stable disk profile (Q_init=1), we validate the...
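
The marginal-stability condition mentioned above (Q_init = 1) refers to the Toomre criterion; a small sketch computing Q for an assumed Keplerian disk (all disk parameters below are invented for illustration):

```python
import math

G = 6.674e-8            # gravitational constant [cgs]
M_sun = 1.989e33        # solar mass [g]
au = 1.496e13           # astronomical unit [cm]

def toomre_q(c_s, kappa, sigma):
    # Q = c_s * kappa / (pi * G * Sigma); Q <~ 1 marks gravitational instability.
    return c_s * kappa / (math.pi * G * sigma)

# Hypothetical disk annulus: Keplerian rotation around a 1 M_sun star, so the
# epicyclic frequency kappa equals the orbital frequency Omega.
r = 50 * au
omega = math.sqrt(G * M_sun / r**3)
c_s = 4.0e4             # sound speed [cm/s]
sigma = 30.0            # surface density [g/cm^2]

q = toomre_q(c_s, omega, sigma)
```

Since Q scales as 1/Sigma, raising the surface density by the factor q would bring this annulus to the marginal Q = 1 profile used in the simulations.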

  20. A Novel Implementation of RISI Controller Employing Adaptive Clock Gating Technique

    Directory of Open Access Journals (Sweden)

    M. Kamaraju

    2011-11-01

    Full Text Available With the scaling of technology and the need for higher performance and more functionality, power dissipation is becoming a major issue for controller design. Interrupt-based programming is widely used for interfacing a processor with peripherals. The proposed architecture implements a mechanism that combines an interrupt controller and a RIS (Reduced Instruction Set) CPU (Central Processing Unit) on a single die. The RISI controller takes only one cycle for both interrupt request generation and acknowledgement. The architecture has a dynamic control unit consisting of a program flow controller, an interrupt controller and an I/O controller. An adaptive clock gating technique is used to reduce power consumption in the dynamic control unit. The controller consumes 174 µW at 1 MHz and is implemented in Verilog HDL using the Xilinx platform.

  1. The application and evaluation of adaptive hypermedia techniques in Web-based medical education

    Directory of Open Access Journals (Sweden)

    Muan Hong Ng

    2002-12-01

    Full Text Available This article discusses the design issues involved in delivering Web-based learning materials. An existing application in the medical domain - JointZone - is used to illustrate how personalization and an interactive environment can be incorporated into Web-based learning. This work combines an adaptive hypermedia, situated-learning approach with hypermedia linking concepts to facilitate online learning. A usability study was carried out on the work described, and an evaluation was undertaken to measure the effect of personalization on various learning factors. The evaluation outcome was analysed subjectively and objectively. The results proved to be contradictory but, nevertheless, the work gives new insights into the use of technology to support learning.

  2. Evaluation of internal adaptation of Class V resin composite restorations using three techniques of polymerization

    Directory of Open Access Journals (Sweden)

    José Carlos Pereira

    2007-02-01

    Full Text Available OBJECTIVE: The purpose of this in vitro study was to evaluate the internal adaptation of Class V composite restorations to the cavity walls using three different polymerization techniques. METHODS: Standard cavities were prepared on the buccal and lingual surfaces of 24 extracted human third molars, with margins located above and below the cementoenamel junction. Restorations were placed in one increment using two restorative systems, 3M Filtek A110/Single Bond (M) and 3M Filtek Z250/Single Bond (H), in the same tooth, randomly assigned to the buccal and lingual surfaces. Resin composites were polymerized using three techniques: Group 1 - Conventional (60 s at 600 mW/cm²); Group 2 - Soft-start (20 s at 200 mW/cm², 40 s at 600 mW/cm²); Group 3 - Pulse Activation (3 s at 200 mW/cm², 3-min hiatus, 57 s at 600 mW/cm²). Buccolingual sections were polished, impressions taken and replicated. Specimens were assessed under scanning electron microscopy at up to 1000× magnification. Scores were given for the presence or absence of gaps (0 - no gap; 1 - gap in one wall; 2 - gap in two walls; 3 - gap in three walls). RESULTS: The mean (±SD) scores of the groups were: G1M - 3.0 (±0.0); G2M - 2.43 (±0.8); G3M - 1.71 (±0.9); G1H - 2.14 (±1.2); G2H - 2.00 (±0.8); G3H - 1.67 (±1.1). Data were analyzed using Kruskal-Wallis and Dunnett's tests. No statistically significant difference (p > 0.05) was found among groups. Gaps were observed in all groups. CONCLUSIONS: The photocuring technique and the type of resin composite had no influence on the internal adaptation of the material to the cavity walls. A positive effect was observed when the slow polymerization techniques were used.

  3. Adaptive one-dimensional dimming technique for liquid crystal displays with low power consumption and high image quality

    Science.gov (United States)

    Kim, Seung-Ryeol; Lee, Seung-Woo

    2015-07-01

    An adaptive one-dimensional (1-D) dimming technique for liquid crystal displays that compensates for nonuniform backlight distribution is proposed. Dimming techniques that do not consider luminance distribution may cause severe visual artifacts, such as a block artifact. However, an adaptive 1-D dimming technique that considers luminance distribution can reduce power consumption without causing any visual artifacts. Hardware implementation results verified that our method achieved lower power consumption compared to nondimming techniques and removed block artifacts from International Electrotechnical Commission 62087 standard images. The power consumption using the proposed method ranged from 85.5% to 94.7% compared to nondimming techniques. Furthermore, the contrast ratio increased by up to 231% and 165% on average compared to nondimming techniques.
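
A minimal sketch of 1-D block dimming with pixel compensation; the block count, floor level and toy image are invented, and real implementations typically work on luminance statistics rather than raw maxima.

```python
# Each backlight block is dimmed to the brightest pixel it must display, and
# pixel data are rescaled so perceived luminance (pixel * backlight) is kept.

def dim_and_compensate(image, n_blocks, floor=0.05):
    rows, cols = len(image), len(image[0])
    block_w = cols // n_blocks
    # Backlight level per block: the peak pixel value over the block columns,
    # clamped to a floor so fully dark blocks keep a minimal backlight.
    levels = [max(floor, max(max(row[b * block_w:(b + 1) * block_w])
                             for row in image))
              for b in range(n_blocks)]
    # Compensation: boost pixel values so pixel * backlight stays unchanged.
    comp = [[min(1.0, image[r][c] / levels[c // block_w])
             for c in range(cols)] for r in range(rows)]
    return levels, comp

img = [
    [0.1, 0.2, 0.9, 0.4],
    [0.0, 0.1, 0.7, 1.0],
]
levels, comp = dim_and_compensate(img, n_blocks=2)
power_saving = 1.0 - sum(levels) / len(levels)   # relative to full backlight
```

The dark left half is driven at 20% backlight while its pixels are boosted, which is where the power saving comes from without visible luminance change.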

  4. Applying a complex adaptive system's understanding of health to primary care [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Johannes Bircher

    2016-09-01

    Full Text Available This paper explores the diagnostic and therapeutic potential of a new concept of health. Investigations into the nature of health have led to a new definition that explains health as a complex adaptive system (CAS) based on five components (a)-(e). Humans, like all biological creatures, must satisfactorily respond to (a) the demands of life. For this purpose they need (b) a biologically given potential (BGP) and (c) a personally acquired potential (PAP). These properties of individuals are embedded within (d) social and (e) environmental determinants of health. Between these five components of health there are 10 complex interactions that justify viewing health as a CAS. In each patient, the current state of health as a CAS has evolved from the past, will move forward to a new future, and has to be analyzed and treated as an autonomous whole. A diagnostic procedure is suggested as follows: together with the patient, the five components and 10 complex interactions are assessed. This may help patients to better understand their situations and to recognize possible next steps that may be useful in order to evolve toward better health by themselves. In this process mutual trust in the patient-physician interaction is critical. The described approach offers new possibilities for helping patients improve their health prospects.

  5. Proceedings of PEIA Forum 2007 : adapting and applying California's greenhouse gas strategies in Canada

    International Nuclear Information System (INIS)

    The key challenge in addressing climate change lies in identifying and implementing cost-effective measures to reduce greenhouse gas (GHG) emissions. The purpose of this forum was to stimulate action for reducing GHGs in British Columbia, the western provinces and Canada. The successes realized in California which are adaptable in BC and Canada were highlighted. In September 2006, California demonstrated leadership in taking determined action on climate change, with its signing of the California Global Warming Solutions Act. This landmark legislation calls for GHG reductions to 1990 levels by 2020, and 80 per cent below 1990 levels by 2050. The BC Energy plan calls for an aggressive target to reduce GHG emissions to 33 per cent below current levels by 2020, which will place emissions 10 per cent below 1990 levels; net zero GHG emissions from all electric power plants by 2016; acquiring 50 per cent of BC Hydro's new resource needs through conservation by 2020; ensuring electricity self-sufficiency by 2016; and, establishing a standing offer for clean electricity projects up to 10 megawatts. In May 2007, the province of British Columbia demonstrated a commitment to follow California's lead in GHG control, and to collaborate on projects such as the Hydrogen Highway. The actions are intended to make a significant contribution to the control of energy and greenhouse gas emissions in British Columbia and Canada. The conference featured 6 presentations, of which 2 have been catalogued separately for inclusion in this database. tabs., figs

  6. The Study of Mining Activities and their Influences in the Almaden Region Applying Remote Sensing Techniques

    International Nuclear Information System (INIS)

    This scientific-technical report is part of the ongoing research work carried out by Celia Rico Fraile in order to obtain the Diploma of Advanced Studies as part of her PhD studies. This work has been developed in collaboration with the Faculty of Science at the Universidad Autonoma de Madrid and the Department of Environment at CIEMAT. The main objective of this work was the characterization and classification of land use in Almaden (Ciudad Real) during cinnabar mineral exploitation and after mining activities ceased in 2002, developing a methodology focused on the integration of remote sensing techniques applied to multispectral and hyperspectral satellite data. By means of preprocessing and processing of the satellite images, as well as data obtained from field campaigns, a spectral library of representative land surfaces within the study area was compiled. Monitoring results show that the distribution of areas affected by mining activities has diminished rapidly in recent years. (Author) 130 refs

  7. Advanced examination techniques applied to the qualification of critical welds for the ITER correction coils

    CERN Document Server

    Sgobba, Stefano; Libeyre, Paul; Marcinek, Dawid Jaroslaw; Piguiet, Aline; Cécillon, Alexandre

    2015-01-01

    The ITER correction coils (CCs) consist of three sets of six coils located in between the toroidal (TF) and poloidal field (PF) magnets. The CCs rely on a Cable-in-Conduit Conductor (CICC), whose supercritical cooling at 4.5 K is provided by helium inlets and outlets. The assembly of the nozzles to the stainless steel conductor conduit includes fillet welds requiring full penetration through the thickness of the nozzle. Static and cyclic stresses have to be sustained by the inlet welds during operation. The entire volume of helium inlet and outlet welds, that are submitted to the most stringent quality levels of imperfections according to standards in force, is virtually uninspectable with sufficient resolution by conventional or computed radiography or by Ultrasonic Testing. On the other hand, X-ray computed tomography (CT) was successfully applied to inspect the full weld volume of several dozens of helium inlet qualification samples. The extensive use of CT techniques allowed a significant progress in the ...

  8. Phase-ratio technique as applied to the assessment of lunar surface roughness

    Science.gov (United States)

    Kaydash, Vadym; Videen, Gorden; Shkuratov, Yuriy

    Regoliths of atmosphereless celestial bodies demonstrate prominent light backscattering that is common for particulate surfaces. This occurs over a wide range of phase angles and can be seen in the phase function [1]. The slope of the function may characterize the complexity of planetary surface structure. Imagery of such a parameter suggests that information can be obtained about the surface, like variations of unresolved surface roughness and microtopography [2]. Phase-ratio imagery allows one to characterize the phase function slope. This imagery requires the ratio of two co-registered images acquired at different phase angles. One important advantage of the procedure is that the inherent albedo variations of the surface are suppressed, and, therefore, the resulting image is sensitive to the surface structure variation [2,3]. The phase-ratio image characterizes surface roughness variation at spatial scales on the order of the incident wavelengths to that of the image resolution. Applying the phase-ratio technique to ground-based telescope data has allowed us to find new lunar surface formations in the southern part of Oceanus Procellarum. These are suggested to be weak swirls [4]. We also combined the phase-ratio technique with the space-derived photometry data acquired from the NASA Lunar Reconnaissance Orbiter with high spatial resolution. Thus we exploited the method to analyze the sites of Apollo landings and Soviet sample-return missions. Phase-ratio imagery has revealed anomalies of the phase-curve slope indicating a smoothing of the surface microstructure at the sites caused by dust uplifted by the engine jets of the descent and ascent modules [5,6]. Analysis of phase-ratios helps to understand how the regolith properties have been affected by robotic and human activity on the Moon [7,8]. 
We have demonstrated the use of the method to search for fresh natural disturbances of surface structure, e.g., to detect areas of fresh slumps, accumulated material on
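
The basic phase-ratio computation can be sketched with a toy reflectance model; the exponential phase dependence and all coefficients below are illustrative assumptions, not lunar photometry.

```python
import math

# Dividing two co-registered images taken at different phase angles cancels
# the intrinsic albedo and leaves the phase-curve slope, which traces surface
# roughness. Hypothetical model: I(albedo, g) = albedo * exp(-k * g), where
# k encodes roughness and g is the phase angle in degrees.

def reflectance(albedo, k, g):
    return albedo * math.exp(-k * g)

albedos = [0.1, 0.3, 0.1, 0.3]              # albedo varies across the scene
roughness_k = [0.010, 0.010, 0.006, 0.006]  # smoother ground: shallower slope

g1, g2 = 5.0, 30.0                          # the two observation phase angles
img1 = [reflectance(a, k, g1) for a, k in zip(albedos, roughness_k)]
img2 = [reflectance(a, k, g2) for a, k in zip(albedos, roughness_k)]

phase_ratio = [b / a for a, b in zip(img1, img2)]
```

Pixels 0 and 1 differ in albedo by a factor of three yet have identical phase ratios, while the smoother pixels stand out with a higher ratio, which is exactly the albedo-suppression property described above.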

  9. Online Fault Identification Based on an Adaptive Observer for Modular Multilevel Converters Applied to Wind Power Generation Systems

    DEFF Research Database (Denmark)

    Liu, Hui; Ma, Ke; Loh, Poh Chiang;

    2015-01-01

    ... and post-fault maintenance. Therefore, in this paper, an effective fault diagnosis technique for real-time diagnosis of switching device faults in MMC sub-modules, covering both open-circuit and short-circuit faults, is proposed. The faulty phase and the fault type are detected by analyzing the difference among the three output load currents, while the localization of the faulty switches is achieved by comparing the estimation results of an adaptive observer. In contrast to other methods that use additional sensors or devices, the presented technique uses only the measured phase currents, which are already available for MMC control. In addition, its operation, effectiveness and robustness are confirmed by simulation results under different operating and load conditions.
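
A residual-based detection scheme of this general kind can be sketched as follows; this is not the paper's observer, and the plant model, fault model and threshold are all toy assumptions.

```python
import math

# A current model runs in parallel with the "measured" arm current; at
# t >= 0.5 s an open-circuit-like fault halves the measured current, and the
# residual crossing a fixed threshold flags the faulty condition.

dt, tau, threshold = 1e-3, 0.05, 0.3
i_true = i_hat = 0.0
flagged_at = None

for n in range(1000):
    t = n * dt
    u = math.sin(2 * math.pi * 2 * t)              # commanded current reference
    i_true += dt / tau * (u - i_true)              # healthy plant dynamics
    i_hat += dt / tau * (u - i_hat)                # observer with the same model
    i_meas = i_true if t < 0.5 else 0.5 * i_true   # fault halves the reading
    residual = abs(i_meas - i_hat)
    if flagged_at is None and residual > threshold:
        flagged_at = t
```

Before the fault the residual is zero by construction; the adaptive observer in the paper additionally updates its parameters online so the residual stays small under normal parameter drift.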

  10. Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.

    Science.gov (United States)

    Vikhe, P S; Thool, V R

    2016-04-01

    Detection of masses in mammograms for early diagnosis of breast cancer is a significant task in the reduction of the mortality rate. However, in some cases, screening for masses is difficult for the radiologist, due to variations in contrast, fuzzy edges and noisy mammograms. Masses and micro-calcifications are the distinctive signs for the diagnosis of breast cancer. This paper presents a method for mass enhancement using a piecewise linear operator in combination with wavelet processing of mammographic images. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images, with 90.9 and 91% True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives Per Image (FP/I), from two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique improves diagnosis in early breast cancer detection. PMID:26811073
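
A simple variant of adaptive thresholding (threshold = mean + k·std) can be sketched as follows; the constant k and the toy scanline are invented, and the paper's segmentation is more elaborate.

```python
import math

# Separate a bright "mass" from background by thresholding at the image
# mean plus a multiple of the standard deviation.

def adaptive_threshold(pixels, k=1.0):
    n = len(pixels)
    mean = sum(pixels) / n
    std = math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)
    return mean + k * std

# Toy 1-D "scanline": low-intensity background with a bright mass in the middle.
scan = [10, 12, 11, 9, 10, 60, 65, 62, 11, 10, 9, 12]
t = adaptive_threshold(scan, k=0.5)
mask = [1 if p > t else 0 for p in scan]
```

Because the threshold is derived from the image statistics rather than a fixed value, it adapts to the overall contrast of each mammogram.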

  11. Efficient Cancer Classification using Fast Adaptive Neuro-Fuzzy Inference System (FANFIS based on Statistical Techniques

    Directory of Open Access Journals (Sweden)

    K.Ananda Kumar

    2011-09-01

    Full Text Available The number of cancer cases detected throughout the world is increasing. This leads to the requirement of developing new techniques that can detect the occurrence of cancer, helping to improve diagnosis and reduce the number of cancer patients. This paper aims at finding the smallest set of genes that can ensure highly accurate classification of cancer from microarray data using supervised machine learning algorithms. The significance of finding the minimum subset is threefold: (a) the computational burden and noise arising from irrelevant genes are much reduced; (b) the cost of cancer testing is reduced significantly, as the gene expression tests are simplified to include only a very small number of genes rather than thousands; (c) it calls for further investigation into the probable biological relationship between these small numbers of genes and cancer development and treatment. The proposed method involves two steps. In the first step, some important genes are chosen with the help of an Analysis of Variance (ANOVA) ranking scheme. In the second step, the classification capability is tested for all simple combinations of those important genes using a better classifier. The proposed method uses a Fast Adaptive Neuro-Fuzzy Inference System (FANFIS) as the classification model, with a modified Levenberg-Marquardt algorithm for the learning phase. The experimental results suggest that the proposed method achieves better accuracy and also takes less time for classification when compared to conventional techniques.
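
The first step, ANOVA-based gene ranking, can be sketched for two classes with the one-way F statistic; the expression values below are toy numbers, not microarray data.

```python
# For each gene, the F statistic compares between-class to within-class
# variance over the two sample groups; genes with the largest F are kept.

def f_statistic(class_a, class_b):
    na, nb = len(class_a), len(class_b)
    ma = sum(class_a) / na
    mb = sum(class_b) / nb
    grand = (sum(class_a) + sum(class_b)) / (na + nb)
    between = na * (ma - grand) ** 2 + nb * (mb - grand) ** 2   # df = 1
    within = (sum((x - ma) ** 2 for x in class_a) +
              sum((x - mb) ** 2 for x in class_b)) / (na + nb - 2)
    return between / within

# Rows: genes; expression values for tumor vs. normal samples (toy numbers).
genes = {
    "gene_informative": ([5.1, 5.3, 4.9, 5.2], [2.0, 2.2, 1.9, 2.1]),
    "gene_noise":       ([3.0, 3.4, 2.8, 3.1], [3.1, 2.9, 3.3, 3.0]),
}
ranked = sorted(genes, key=lambda g: f_statistic(*genes[g]), reverse=True)
```

In the described method the top-ranked genes are then fed in small combinations to the FANFIS classifier.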

  12. Adapt

    Science.gov (United States)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored in the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWEB and SPDF data repositories were then queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
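
What ADAPT-style granule generation might look like can be sketched with Python's standard XML tools; the element names, identifiers and file name below are illustrative only, not the full SPASE schema or actual CDAWEB holdings.

```python
import xml.etree.ElementTree as ET

# Build a minimal SPASE-flavoured Granule description linking one data file
# to its parent resource and access URL.

def make_granule(parent_id, file_name, url_base):
    granule = ET.Element("Granule")
    ET.SubElement(granule, "ResourceID").text = (
        "spase://Example/Granule/" + file_name.replace(".cdf", ""))
    ET.SubElement(granule, "ParentID").text = parent_id
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "URL").text = url_base + "/" + file_name
    return granule

g = make_granule(
    "spase://Example/NumericalData/MyDataset",   # hypothetical parent resource
    "example_dataset_20200101.cdf",              # hypothetical CDF file
    "https://example.org/data")
xml_text = ET.tostring(g, encoding="unicode")
```

A nightly job would diff the server's file list against the registered granules and emit, update, or retire descriptions accordingly.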

  13. Social Science at the Center for Adaptive Optics: Synergistic Systems of Program Evaluation, Applied Research, Educational Assessment, and Pedagogy

    Science.gov (United States)

    Goza, B. K.; Hunter, L.; Shaw, J. M.; Metevier, A. J.; Raschke, L.; Espinoza, E.; Geaney, E. R.; Reyes, G.; Rothman, D. L.

    2010-12-01

    This paper describes the interaction of four elements of social science as they have evolved in concert with the Center for Adaptive Optics Professional Development Program (CfAO PDP). We hope these examples persuade early-career scientists and engineers to include social science activities as they develop grant proposals and carry out their research. To frame our discussion we use a metaphor from astronomy. At the University of California Santa Cruz (UCSC), the CfAO PDP and the Educational Partnership Center (EPC) are two young stars in the process of forming a solar system. Together, they are surrounded by a disk of gas and dust made up of program evaluation, applied research, educational assessment, and pedagogy. An idea from the 2001 PDP intensive workshops program evaluation developed into the Assessing Scientific Inquiry and Leadership Skills (AScILS) applied research project. In iterative cycles, AScILS researchers participated in subsequent PDP intensive workshops, teaching social science while piloting AScILS measurement strategies. Subsequent "orbits" of the PDP program evaluation gathered ideas from the applied research and pedagogy. The denser regions of this disk of social science are in the process of forming new protoplanets as tools for research and teaching are developed. These tools include problem-solving exercises or simulations of adaptive optics explanations and scientific reasoning; rubrics to evaluate the scientific reasoning simulation responses, knowledge regarding inclusive science education, and student explanations of science/engineering inquiry investigations; and a scientific reasoning curriculum. Another applied research project is forming with the design of a study regarding how to assess engineering explanations. To illustrate the mutual shaping of the cross-disciplinary, intergenerational group of educational researchers and their projects, the paper ends with a description of the professional trajectories of some of the

  14. Sequential Adaptive RBF-Fuzzy Variable Structure Control Applied to Robotics Systems

    Directory of Open Access Journals (Sweden)

    Mohammed Salem

    2014-08-01

    Full Text Available In this paper, we present a combination of sequentially trained radial basis function (RBF) networks and fuzzy techniques to enhance the variable structure controllers dedicated to robotics systems. To this end, four RBF networks were used to estimate the model-based parameters (inertia, centrifugal and Coriolis, gravity and friction matrices) of a variable structure controller, so as to respond to model variations and disturbances; a sequential online training algorithm based on a growing-and-pruning (GAP) strategy and a Kalman filter was implemented. To eliminate the chattering effect, the corrective control of the VS controller was computed by a fuzzy controller. Simulations were carried out to control a three-degree-of-freedom SCARA robot manipulator, and the obtained results show good disturbance rejection and chattering elimination.
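
The Gaussian RBF networks used as estimators evaluate a weighted sum of radial kernels; a minimal sketch follows, with centers, widths and weights invented for illustration rather than GAP-trained.

```python
import math

# Gaussian RBF network: output = sum_i w_i * exp(-||x - c_i||^2 / (2 s_i^2)).

def rbf_output(x, centers, widths, weights):
    out = 0.0
    for c, s, w in zip(centers, widths, weights):
        dist2 = sum((xj - cj) ** 2 for xj, cj in zip(x, c))
        out += w * math.exp(-dist2 / (2 * s * s))
    return out

# Hypothetical 2-input network (e.g. two joint positions) with three units.
centers = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
widths  = [0.8, 0.8, 0.8]
weights = [2.0, -1.0, 0.5]

y = rbf_output((0.0, 0.0), centers, widths, weights)
```

In the sequential GAP scheme, units are added when the prediction error near a new input is large and pruned when their contribution stays negligible, while a Kalman filter updates the weights online.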

  15. RECENT ADAPTATIONS OF THE LEGAL STANDARDS APPLIED TO TAX LIABILITIES THROUGH GOVERNMENT ORDINANCES

    Directory of Open Access Journals (Sweden)

    Ionel BOSTAN

    2015-06-01

    Full Text Available Our endeavour is directed at revealing certain difficulties identified during the actual process of levying government revenue that have “lasted” in time, as well as the methods used to solve or merely alleviate such difficulties, as imposed and applied by the Executive authority. Among the above-mentioned issues, we specifically refer to the measures taken to discourage taxpayers from using arrears as a source for funding their own activities, with important mentions of the special correction (“undeclared tax penalty”) for cases in which certain sums payable to the public budget are not declared (either totally or partially). The second part addresses the setting up of the system of ancillary obligations, which is specifically directed at protecting the real value of fiscal claims and at sanctioning defaults of payment upon the due date.

  16. Complementary testing techniques applied to obtain the freeze-thaw resistance of concrete

    Directory of Open Access Journals (Sweden)

    Romero, H. L.

    2015-03-01

    Full Text Available Most of the standards that evaluate the resistance of concrete against freeze-thaw cycles (FTCs) are based on the loss of weight due to scaling. Such procedures are useful but do not provide information about the microstructural deterioration of the concrete. Moreover, the test procedure must be stopped after several FTCs to weigh the material lost by scaling. This paper proposes the use of mercury intrusion porosimetry and thermogravimetric analysis for assessing the microstructural damage of concrete during FTCs. Continuous strain measurement can be performed without stopping the FTCs. The combination of the above techniques with the freeze-thaw resistance standards provides better and more precise information about concrete damage. The proposed procedure is applied to an ordinary concrete, a concrete with silica fume addition, and one with an air-entraining agent. The test results showed that the three techniques used are suitable and useful as complements to the standards.

  17. Design and Implementation of Key Techniques for Mobile Ad hoc Network Adaptive QoS Provisioning Framework

    Institute of Scientific and Technical Information of China (English)

    YAO Yinxiong; LIU Jianxun; TANG Xinhuai

    2004-01-01

    MAQF is an adaptive QoS provisioning framework for Mobile Ad hoc networks (MANET) newly proposed by the authors. By modifying the architecture of INSIGNIA and adding some components, MAQF overcomes many disadvantages appearing in related works and supports QoS guarantees for MANET. This paper focuses on the design and implementation of some key techniques in MAQF, including QoS routing, in-band signaling, the adaptive control mechanism, the dynamic resource adaptation algorithm, etc. Simulation results are presented and verify the validity of MAQF.

  18. Adaptive gain, equalization, and wavelength stabilization techniques for silicon photonic microring resonator-based optical receivers

    Science.gov (United States)

    Palermo, Samuel; Chiang, Patrick; Yu, Kunzhi; Bai, Rui; Li, Cheng; Chen, Chin-Hui; Fiorentino, Marco; Beausoleil, Ray; Li, Hao; Shafik, Ayman; Titriku, Alex

    2016-03-01

    Interconnect architectures based on high-Q silicon photonic microring resonator devices offer a promising solution to address the dramatic increase in datacenter I/O bandwidth demands due to their ability to realize wavelength-division multiplexing (WDM) in a compact and energy-efficient manner. However, challenges exist in realizing efficient receivers for these systems due to varying per-channel link budgets, sensitivity requirements, and ring resonance wavelength shifts. This paper reports on adaptive optical receiver design techniques which address these issues and have been demonstrated in two hybrid-integrated prototypes based on microring drop filters and waveguide photodetectors implemented in a 130nm SOI process and high-speed optical front-ends designed in 65nm CMOS. A 10Gb/s power-scalable architecture employs supply voltage scaling of a three-inverter-stage transimpedance amplifier (TIA) that is adapted with an eye-monitor control loop to yield the necessary sensitivity for a given channel. As reduction of TIA input-referred noise is more critical at higher data rates, a 25Gb/s design utilizes a large input-stage feedback resistor TIA cascaded with a continuous-time linear equalizer (CTLE) that compensates for the increased input pole. When tested with a waveguide Ge PD with 0.45A/W responsivity, this topology achieves 25Gb/s operation with -8.2dBm sensitivity at a BER of 10^-12. In order to address the sensitivity of microring drop filters to fabrication tolerances and thermal variations, efficient wavelength-stabilization control loops are necessary. A peak-power-based monitoring loop, which locks the drop filter to the input wavelength while achieving compatibility with the high-speed TIA offset-correction feedback loop, is implemented with a 0.7nm tuning range at 43μW/GHz efficiency.
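
    As a back-of-envelope check (only the quoted figures come from the abstract), the reported -8.2 dBm sensitivity and 0.45 A/W responsivity translate into an average photocurrent at the TIA input as follows:

```python
# Reported figures: -8.2 dBm sensitivity at BER = 1e-12, and 0.45 A/W
# responsivity for the waveguide Ge photodetector.
sens_dbm = -8.2
responsivity_a_per_w = 0.45

p_avg_mw = 10 ** (sens_dbm / 10)                  # dBm -> mW, ~0.151 mW
i_avg_ua = responsivity_a_per_w * p_avg_mw * 1e3  # mW * (A/W) -> uA, ~68 uA
```

    A roughly 68 µA average photocurrent is the signal level the supply-scaled TIA front-end must resolve at the stated error rate.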

  19. Appraisal of adaptive neuro-fuzzy computing technique for estimating anti-obesity properties of a medicinal plant.

    Science.gov (United States)

    Kazemipoor, Mahnaz; Hajifaraji, Majid; Radzi, Che Wan Jasimah Bt Wan Mohamed; Shamshirband, Shahaboddin; Petković, Dalibor; Mat Kiah, Miss Laiha

    2015-01-01

    This research examines the precision of an adaptive neuro-fuzzy computing technique in estimating the anti-obesity property of a potent medicinal plant in a clinical dietary intervention. Even though a number of mathematical functions, such as SPSS analysis, have been proposed for modeling the estimation of anti-obesity properties in terms of reduction in body mass index (BMI), body fat percentage, and body weight loss, these models still have disadvantages, such as being very demanding in terms of calculation time. Since this is a crucial problem, in this paper a process was constructed that simulates the anti-obesity activity of caraway (Carum carvi), a traditional medicine, on obese women with the adaptive neuro-fuzzy inference system (ANFIS) method. The ANFIS results are compared with support vector regression (SVR) results using the root-mean-square error (RMSE) and the coefficient of determination (R²). The experimental results show that an improvement in predictive accuracy and capability of generalization can be achieved by the ANFIS approach. The following statistical characteristics are obtained for BMI loss estimation: RMSE=0.032118 and R²=0.9964 in ANFIS testing and RMSE=0.47287 and R²=0.361 in SVR testing. For fat loss estimation: RMSE=0.23787 and R²=0.8599 in ANFIS testing and RMSE=0.32822 and R²=0.7814 in SVR testing. For weight loss estimation: RMSE=0.00000035601 and R²=1 in ANFIS testing and RMSE=0.17192 and R²=0.6607 in SVR testing. Accordingly, the method can be applied for practical purposes. PMID:25453384
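
    The two figures of merit used in the comparison, RMSE and the coefficient of determination, can be computed directly; the sketch below uses synthetic numbers, not the study's data:

```python
import numpy as np

def rmse(y, y_hat):
    # Root-mean-square error between observed and predicted values
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def r_squared(y, y_hat):
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Synthetic example (not the study's data): observed vs. predicted BMI loss
y_true = np.array([1.2, 0.8, 1.5, 0.9, 1.1])
y_pred = np.array([1.1, 0.9, 1.4, 1.0, 1.1])
print(rmse(y_true, y_pred), r_squared(y_true, y_pred))
```

    Lower RMSE and R² closer to 1 indicate the better model, which is how the ANFIS and SVR test results above are ranked.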

  20. Applying Adaptive Agricultural Management & Industrial Ecology Principles to Produce Lower- Carbon Ethanol from California Energy Beets

    Science.gov (United States)

    Alexiades, Anthy Maria

    The life cycle assessment of a proposed beet-to-ethanol pathway demonstrates how agricultural management and industrial ecology principles can be applied to reduce greenhouse gas emissions, minimize agrochemical inputs and waste, provide ecosystem services and yield a lower-carbon fuel from a highly land-use efficient, first-generation feedstock cultivated in California. Beets grown in California have unique potential as a biofuel feedstock. A mature agricultural product with well-developed supply chains, beet-sugar production in California has contracted over recent decades, leaving idle production capacity and forcing growers to seek other crops for use in rotation or find a new market for beets. California's Low Carbon Fuel Standard (LCFS) faces risk of steeply-rising compliance costs, as greenhouse gas reduction targets in the transportation sector were established assuming commercial volumes of lower-carbon fuels from second-generation feedstocks -- such as residues, waste, algae and cellulosic crops -- would be available by 2020. The expected shortfall of cellulosic ethanol has created an immediate need to develop lower-carbon fuels from readily available feedstocks using conventional conversion technologies. The life cycle carbon intensity of this ethanol pathway is less than 28 gCO2e/MJ of ethanol: a 72% reduction compared to gasoline and 19% lower than the most efficient corn ethanol pathway (34 gCO2e/MJ not including indirect land use change) approved under LCFS. The system relies primarily on waste-to-energy resources; nearly 18 gCO2e/MJ are avoided by using renewable heat and power generated from anaerobic digestion of fermentation stillage and gasification of orchard residues to meet 88% of the facility's steam demand. Co-products displace 2 gCO2e/MJ. Beet cultivation is the largest source of emissions, contributing 15 gCO2e/MJ. The goal of the study is to explore opportunities to minimize carbon intensity of beet-ethanol and investigate the potential
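
    A hedged sanity check of the reported percentages, assuming a gasoline baseline of about 100 gCO2e/MJ (a typical LCFS-era value, not stated in the text):

```python
# All carbon intensities in gCO2e/MJ. The beet and corn values are from
# the abstract; the gasoline baseline (~100) is an assumed typical
# LCFS-era value, not stated in the text.
ci_beet = 28.0       # reported upper bound for the beet-ethanol pathway
ci_gasoline = 100.0  # assumed gasoline baseline
ci_corn = 34.0       # reported most-efficient approved corn pathway

reduction_vs_gasoline = 1.0 - ci_beet / ci_gasoline  # 0.72 -> "72% reduction"
reduction_vs_corn = 1.0 - ci_beet / ci_corn          # ~0.18
```

    With the 28 gCO2e/MJ upper bound the reduction versus corn ethanol comes out near 18%; the reported 19% is consistent with an actual carbon intensity slightly below 28.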

  1. Community capacity to acquire, assess, adapt, and apply research evidence: a survey of Ontario's HIV/AIDS sector

    Directory of Open Access Journals (Sweden)

    Rourke Sean B

    2011-05-01

    Full Text Available Abstract Background Community-based organizations (CBOs are important stakeholders in health systems and are increasingly called upon to use research evidence to inform their advocacy, program planning, and service delivery. To better support CBOs to find and use research evidence, we sought to assess the capacity of CBOs in the HIV/AIDS sector to acquire, assess, adapt, and apply research evidence in their work. Methods We invited executive directors of HIV/AIDS CBOs in Ontario, Canada (n = 51 to complete the Canadian Health Services Research Foundation's "Is Research Working for You?" survey. Findings Based on responses from 25 organizations that collectively provide services to approximately 32,000 clients per year with 290 full-time equivalent staff, we found organizational capacity to acquire, assess, adapt, and apply research evidence to be low. CBO strengths include supporting a culture that rewards flexibility and quality improvement, exchanging information within their organization, and ensuring that their decision-making processes have a place for research. However, CBO Executive Directors indicated that they lacked the skills, time, resources, incentives, and links with experts to acquire research, assess its quality and reliability, and summarize it in a user-friendly way. Conclusion Given the limited capacity to find and use research evidence, we recommend a capacity-building strategy for HIV/AIDS CBOs that focuses on providing the tools, resources, and skills needed to more consistently acquire, assess, adapt, and apply research evidence. Such a strategy may be appropriate in other sectors and jurisdictions as well given that CBO Executive Directors in the HIV/AIDS sector in Ontario report low capacity despite being in the enviable position of having stable government infrastructure in place to support them, benefiting from long-standing investment in capacity building, and being part of an active provincial network. 
CBOs in other

  2. Comparison of multivariate preprocessing techniques as applied to electronic tongue based pattern classification for black tea

    Energy Technology Data Exchange (ETDEWEB)

    Palit, Mousumi [Department of Electronics and Telecommunication Engineering, Central Calcutta Polytechnic, Kolkata 700014 (India); Tudu, Bipan, E-mail: bt@iee.jusl.ac.in [Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata 700098 (India); Bhattacharyya, Nabarun [Centre for Development of Advanced Computing, Kolkata 700091 (India); Dutta, Ankur; Dutta, Pallab Kumar [Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata 700098 (India); Jana, Arun [Centre for Development of Advanced Computing, Kolkata 700091 (India); Bandyopadhyay, Rajib [Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata 700098 (India); Chatterjee, Anutosh [Department of Electronics and Communication Engineering, Heritage Institute of Technology, Kolkata 700107 (India)

    2010-08-18

    In an electronic tongue, preprocessing on raw data precedes pattern analysis and choice of the appropriate preprocessing technique is crucial for the performance of the pattern classifier. While attempting to classify different grades of black tea using a voltammetric electronic tongue, different preprocessing techniques have been explored and a comparison of their performances is presented in this paper. The preprocessing techniques are compared first by a quantitative measurement of separability followed by principal component analysis; and then two different supervised pattern recognition models based on neural networks are used to evaluate the performance of the preprocessing techniques.
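
    Two common preprocessing choices whose effect on a subsequent principal component analysis can be compared are mean-centering and autoscaling; the sketch below (synthetic data, not the electronic-tongue measurements) applies both ahead of an SVD-based PCA:

```python
import numpy as np

def mean_center(X):
    # Remove each feature's mean; scales are left untouched
    return X - X.mean(axis=0)

def autoscale(X):
    # Mean-center, then scale each feature to unit variance
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

def pca_scores(X, n_components=2):
    # Scores = projection of the preprocessed data onto the top
    # right singular vectors (the principal component loadings)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

rng = np.random.default_rng(1)
# Synthetic "sensor" matrix with wildly different feature scales
X = rng.normal(size=(30, 5)) * np.array([1.0, 10.0, 0.1, 5.0, 2.0])

scores_centered = pca_scores(mean_center(X))
scores_scaled = pca_scores(autoscale(X))
# With only mean-centering, the largest-scale feature dominates PC1;
# autoscaling gives every feature equal weight before PCA.
```

    The separability of tea-grade clusters in the resulting score plots is one way such preprocessing choices can be compared quantitatively.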

  3. Comparison of multivariate preprocessing techniques as applied to electronic tongue based pattern classification for black tea.

    Science.gov (United States)

    Palit, Mousumi; Tudu, Bipan; Bhattacharyya, Nabarun; Dutta, Ankur; Dutta, Pallab Kumar; Jana, Arun; Bandyopadhyay, Rajib; Chatterjee, Anutosh

    2010-08-18

    In an electronic tongue, preprocessing on raw data precedes pattern analysis and choice of the appropriate preprocessing technique is crucial for the performance of the pattern classifier. While attempting to classify different grades of black tea using a voltammetric electronic tongue, different preprocessing techniques have been explored and a comparison of their performances is presented in this paper. The preprocessing techniques are compared first by a quantitative measurement of separability followed by principal component analysis; and then two different supervised pattern recognition models based on neural networks are used to evaluate the performance of the preprocessing techniques.

  4. Adaptive Neuro-Fuzzy Inference System Applied QSAR with Quantum Chemical Descriptors for Predicting Radical Scavenging Activities of Carotenoids.

    Directory of Open Access Journals (Sweden)

    Changho Jhin

    Full Text Available One of the physiological characteristics of carotenoids is their radical scavenging activity. In this study, the relationship between radical scavenging activities and quantum chemical descriptors of carotenoids was determined. Adaptive neuro-fuzzy inference system (ANFIS) applied quantitative structure-activity relationship models (QSAR) were also developed for predicting and comparing radical scavenging activities of carotenoids. Semi-empirical PM6 and PM7 quantum chemical calculations were done by MOPAC. Ionisation energies of neutral and monovalent cationic carotenoids and the product of chemical potentials of neutral and monovalent cationic carotenoids were significantly correlated with the radical scavenging activities, and consequently these descriptors were used as independent variables for the QSAR study. The ANFIS applied QSAR models were developed with two triangular-shaped input membership functions made for each of the independent variables and optimised by a backpropagation method. High prediction efficiencies were achieved by the ANFIS applied QSAR. The R-square values of the developed QSAR models with the variables calculated by PM6 and PM7 methods were 0.921 and 0.902, respectively. The results of this study demonstrated reliabilities of the selected quantum chemical descriptors and the significance of QSAR models.

  5. Adaptive Neuro-Fuzzy Inference System Applied QSAR with Quantum Chemical Descriptors for Predicting Radical Scavenging Activities of Carotenoids.

    Science.gov (United States)

    Jhin, Changho; Hwang, Keum Taek

    2015-01-01

    One of the physiological characteristics of carotenoids is their radical scavenging activity. In this study, the relationship between radical scavenging activities and quantum chemical descriptors of carotenoids was determined. Adaptive neuro-fuzzy inference system (ANFIS) applied quantitative structure-activity relationship models (QSAR) were also developed for predicting and comparing radical scavenging activities of carotenoids. Semi-empirical PM6 and PM7 quantum chemical calculations were done by MOPAC. Ionisation energies of neutral and monovalent cationic carotenoids and the product of chemical potentials of neutral and monovalent cationic carotenoids were significantly correlated with the radical scavenging activities, and consequently these descriptors were used as independent variables for the QSAR study. The ANFIS applied QSAR models were developed with two triangular-shaped input membership functions made for each of the independent variables and optimised by a backpropagation method. High prediction efficiencies were achieved by the ANFIS applied QSAR. The R-square values of the developed QSAR models with the variables calculated by PM6 and PM7 methods were 0.921 and 0.902, respectively. The results of this study demonstrated reliabilities of the selected quantum chemical descriptors and the significance of QSAR models. PMID:26474167
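
    The membership-function setup described above can be sketched as follows; the triangular breakpoints and first-order consequents are illustrative assumptions, not the fitted model:

```python
import numpy as np

def tri_mf(x, a, b, c):
    # Triangular membership function: 0 outside [a, c], peak of 1 at b
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

# Two triangular membership functions per input, as in the paper's ANFIS
# setup; the breakpoints below are illustrative, not the fitted parameters.
x = 0.3  # one normalized descriptor value
mu_low = tri_mf(x, -0.5, 0.0, 1.0)
mu_high = tri_mf(x, 0.0, 1.0, 1.5)

# First-order Sugeno-style rule consequents f_i = p_i*x + r_i (illustrative)
f_low, f_high = 0.2 * x + 0.1, 0.9 * x - 0.05
w_sum = mu_low + mu_high
output = (mu_low * f_low + mu_high * f_high) / w_sum  # weighted average
```

    In ANFIS training, both the membership breakpoints and the consequent coefficients would be tuned (here, by backpropagation as the abstract states).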

  6. Evaluation of Turbulence Models in Predicting Hypersonic and Subsonic Base Flows Using Grid Adaptation Techniques

    Institute of Scientific and Technical Information of China (English)

    YOU Yancheng; BUANGA Björn; HANNEMANN Volker; LÜDEKE Heinrich

    2012-01-01

    The flows behind the base of a generic rocket, at both hypersonic and subsonic flow conditions, are numerically studied. The main concerns are the evaluation of turbulence models and the use of grid adaptation techniques. The investigation focuses on two configurations, related to hypersonic and subsonic experiments. The applicability tests of different turbulence models are conducted on the level of two-equation models calculating the steady-state solution of the Reynolds-averaged Navier-Stokes (RANS) equations. All models used, the original Wilcox k-ω, the Menter shear-stress transport (SST) and the explicit algebraic Reynolds stress model (EARSM) formulation, predict an asymmetric base flow in both cases, caused by the support of the models. A comparison with preliminary experimental results indicates a preference for the SST and EARSM results over the results from the older k-ω model. Sensitivity studies show no significant influence of the grid topology or the location of the laminar-to-turbulent transition on the base flow field, but a strong influence of even small angles of attack is reported from the related experiments.

  7. A Novel Adaptive Elite-Based Particle Swarm Optimization Applied to VAR Optimization in Electric Power Systems

    Directory of Open Access Journals (Sweden)

    Ying-Yi Hong

    2014-01-01

    Full Text Available Particle swarm optimization (PSO) has been successfully applied to solve many practical engineering problems. However, more efficient strategies are needed to coordinate global and local searches in the solution space when the studied problem is extremely nonlinear and highly dimensional. This work proposes a novel adaptive elite-based PSO approach. The adaptive elite strategies involve the following two tasks: (1) appending the mean search to the original approach and (2) pruning/cloning particles. The mean search, leading to stable convergence, helps the iterative process coordinate between the global and local searches. The mean of the particles and standard deviation of the distances between pairs of particles are utilized to prune distant particles. The best particle is cloned and it replaces the pruned distant particles in the elite strategy. To evaluate the performance and generality of the proposed method, four benchmark functions were tested by traditional PSO, chaotic PSO, differential evolution, and genetic algorithm. Finally, a realistic loss minimization problem in an electric power system is studied to show the robustness of the proposed method.
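
    A compact sketch of the two elite strategies on a benchmark sphere function follows; the inertia and acceleration coefficients are conventional choices, and the pruning rule (distance from the swarm mean exceeding mean + 2 standard deviations) is a simplified stand-in for the paper's pairwise-distance criterion:

```python
import numpy as np

def sphere(x):
    # Benchmark objective: global minimum 0 at the origin
    return float((x ** 2).sum())

def elite_pso(f, dim=2, n=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pbest_f = pos.copy(), np.array([f(p) for p in pos])
    g = int(np.argmin(pbest_f))
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        fs = np.array([f(p) for p in pos])
        better = fs < pbest_f
        pbest[better], pbest_f[better] = pos[better], fs[better]
        # Elite task (1): mean search -- also evaluate the swarm mean
        mean_pt = pos.mean(axis=0)
        if f(mean_pt) < gbest_f:
            gbest, gbest_f = mean_pt.copy(), f(mean_pt)
        g = int(np.argmin(pbest_f))
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
        # Elite task (2): prune particles far from the mean, clone the best
        d = np.linalg.norm(pos - mean_pt, axis=1)
        far = d > d.mean() + 2 * d.std()
        pos[far] = gbest + 0.01 * rng.standard_normal((int(far.sum()), dim))
    return gbest, gbest_f

best, best_f = elite_pso(sphere)
```

    On this smooth, unimodal benchmark the swarm should collapse near the origin; the elite steps matter more on the nonlinear, high-dimensional problems the paper targets.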

  8. Adaption of the temporal correlation coefficient calculation for temporal networks (applied to a real-world pig trade network).

    Science.gov (United States)

    Büttner, Kathrin; Salau, Jennifer; Krieter, Joachim

    2016-01-01

    The average topological overlap of two graphs of two consecutive time steps measures the amount of changes in the edge configuration between the two snapshots. This value has to be zero if the edge configuration changes completely and one if the two consecutive graphs are identical. Current methods depend on the number of nodes in the network or on the maximal number of connected nodes in the consecutive time steps. In the first case, this methodology breaks down if there are nodes with no edges. In the second case, it fails if the maximal number of active nodes is larger than the maximal number of connected nodes. In the following, an adaption of the calculation of the temporal correlation coefficient and of the topological overlap of the graph between two consecutive time steps is presented, which shows the expected behaviour mentioned above. The newly proposed adaption uses the maximal number of active nodes, i.e. the number of nodes with at least one edge, for the calculation of the topological overlap. The three methods were compared with the help of vivid example networks to reveal the differences between the proposed notations. Furthermore, these three calculation methods were applied to a real-world network of animal movements in order to detect influences of the network structure on the outcome of the different methods. PMID:27026862
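
    Under one reading of the proposed adaptation, the per-step topological overlap is summed over nodes and normalized by the maximal number of active nodes (nodes with at least one edge) rather than the total node count; a minimal sketch:

```python
import numpy as np

def node_overlap(a_t, a_t1):
    # Topological overlap of one node's edges between consecutive
    # snapshots: shared edges / geometric mean of degrees
    num = (a_t * a_t1).sum()
    den = np.sqrt(a_t.sum() * a_t1.sum())
    return num / den if den > 0 else 0.0

def temporal_correlation(snapshots):
    # Adapted version: normalize each time step by the maximal number
    # of active nodes instead of the total number of nodes.
    T, N = len(snapshots), snapshots[0].shape[0]
    total = 0.0
    for t in range(T - 1):
        A, B = snapshots[t], snapshots[t + 1]
        active = max((A.sum(axis=1) > 0).sum(), (B.sum(axis=1) > 0).sum())
        c_t = sum(node_overlap(A[i], B[i]) for i in range(N))
        total += c_t / active if active > 0 else 0.0
    return total / (T - 1)

# Identical snapshots should give 1; completely changed edges give 0
A = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
B = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]])
```

    Note that node 2 is isolated in both snapshots; with the active-node normalization it does not drag the coefficient below 1 for identical graphs, which is exactly the failure mode of the node-count normalization the abstract describes.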

  9. An Adaptive Single-Well Stochastic Resonance Algorithm Applied to Trace Analysis of Clenbuterol in Human Urine

    Directory of Open Access Journals (Sweden)

    Shaofei Xie

    2012-02-01

    Full Text Available Based on the theory of stochastic resonance, an adaptive single-well stochastic resonance (ASSR) algorithm coupled with a genetic algorithm was developed to enhance the signal-to-noise ratio of weak chromatographic signals. In conventional stochastic resonance algorithms, two or more parameters need to be optimized, and the proper parameter values are obtained by a universal search within a given range. In the developed ASSR, the optimization of the system parameter was simplified and automatically implemented. The ASSR was applied to the trace analysis of clenbuterol in human urine, and it helped to significantly improve the limit of detection and limit of quantification of clenbuterol. The good linearity, precision and accuracy of the proposed method ensure that it could be an effective tool for trace analysis and for improving the detection sensitivity of current detectors.
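
    A single-well stochastic resonance system can be sketched as the overdamped dynamics dx/dt = -b·x³ + s(t), integrated by the Euler method; the parameters and input below are illustrative, and the genetic-algorithm tuning of the well parameter is omitted:

```python
import numpy as np

def single_well_sr(signal, b=1.0, dt=0.01):
    # Euler integration of the single-well system dx/dt = -b*x**3 + s(t)
    x = 0.0
    out = np.empty_like(signal)
    for k, s in enumerate(signal):
        x = x + dt * (-b * x ** 3 + s)
        out[k] = x
    return out

# Weak periodic component buried in noise (synthetic, for illustration only)
rng = np.random.default_rng(0)
t = np.arange(0.0, 20.0, 0.01)
s = 0.3 * np.sin(2 * np.pi * 0.5 * t) + rng.normal(0.0, 1.0, t.size)
x = single_well_sr(s)
# In the paper's ASSR, the single system parameter (here b) is tuned
# automatically by a genetic algorithm to maximize the output SNR.
```

    The cubic restoring force keeps the state bounded while the integration averages out fast noise, which is the mechanism exploited to enhance weak chromatographic peaks.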

  10. Automatic online adaptive radiation therapy techniques for targets with significant shape change: a feasibility study.

    Science.gov (United States)

    Court, Laurence E; Tishler, Roy B; Petit, Joshua; Cormack, Robert; Chin, Lee

    2006-05-21

    This work looks at the feasibility of an online adaptive radiation therapy concept that would detect the daily position and shape of the patient, and would then correct the daily treatment to account for any changes compared with planning position. In particular, it looks at the possibility of developing algorithms to correct for large complicated shape change. For co-planar beams, the dose in an axial plane is approximately associated with the positions of a single multi-leaf collimator (MLC) pair. We start with a primary plan, and automatically generate several secondary plans with gantry angles offset by regular increments. MLC sequences for each plan are calculated keeping monitor units (MUs) and number of segments constant for a given beam (fluences are different). Bulk registration (3D) of planning and daily CT images gives global shifts. Slice-by-slice (2D) registration gives local shifts and rotations about the longitudinal axis for each axial slice. The daily MLC sequence is then created for each axial slice/MLC leaf pair combination, by taking the MLC positions from the pre-calculated plan with the nearest rotation, and shifting using a beam's-eye-view calculation to account for local linear shifts. A planning study was carried out using two head and neck region MR images of a healthy volunteer which were contoured to simulate a base-of-tongue treatment: one with the head straight (used to simulate the planning image) and the other with the head tilted to the left (the daily image). Head and neck treatment was chosen to evaluate this technique because of its challenging nature, with varying internal and external contours, and multiple degrees of freedom. Shape change was significant: on a slice-by-slice basis, local rotations in the daily image varied from 2 to 31 degrees, and local shifts ranged from -0.2 to 0.5 cm and -0.4 to 0.0 cm in right-left and posterior-anterior directions, respectively. 
The adapted treatment gave reasonable target coverage (100
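
    The per-slice correction logic reduces to two steps that can be sketched with hypothetical numbers (the plan angles, leaf positions, and shifts below are illustrative, not clinical data):

```python
def nearest_plan(precalc_angles, local_rotation_deg):
    # Step 1: pick the pre-calculated plan whose gantry offset best
    # matches the rotation measured for this axial slice
    return min(precalc_angles, key=lambda a: abs(a - local_rotation_deg))

def shift_leaf_pair(leaf_pair_cm, lateral_shift_cm):
    # Step 2: shift both leaves of the MLC pair by the slice's local
    # lateral shift (the beam's-eye-view projection is assumed here
    # to reduce to a simple linear shift)
    left, right = leaf_pair_cm
    return (left + lateral_shift_cm, right + lateral_shift_cm)

precalc = [-30, -20, -10, 0, 10, 20, 30]  # gantry offsets in degrees
angle = nearest_plan(precalc, 17.0)       # slice rotated by 17 degrees
pair = shift_leaf_pair((-2.0, 3.0), 0.4)  # local right-left shift of 0.4 cm
```

    Repeating these two steps for every axial slice/leaf-pair combination yields the adapted daily MLC sequence described in the abstract.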

  11. The efficiency of the Lutz, Kato-Katz and Baermann-Moraes (adapted) techniques association in the diagnosis of intestinal helminths

    Directory of Open Access Journals (Sweden)

    Henry Percy Willcox

    1991-12-01

    Full Text Available The association of the Lutz/Kato-Katz and Lutz/Baermann-Moraes (adapted) techniques was used to obtain better results, improving by 0.4 to 11 times the detection of eggs of Ascaris lumbricoides, Schistosoma mansoni, Trichuris trichiura and Taenia sp. and of larvae of Strongyloides stercoralis.

  12. A multiblock grid generation technique applied to a jet engine configuration

    Science.gov (United States)

    Stewart, Mark E. M.

    1992-01-01

    Techniques are presented for quickly finding a multiblock grid for a 2D geometrically complex domain from geometrical boundary data. An automated technique for determining a block decomposition of the domain is explained. Techniques for representing this domain decomposition and transforming it are also presented. Further, a linear optimization method may be used to solve the equations which determine grid dimensions within the block decomposition. These algorithms automate many stages in the domain decomposition and grid formation process and limit the need for human intervention and inputs. They are demonstrated for the meridional or throughflow geometry of a bladed jet engine configuration.

  13. Calorimetric techniques applied to the thermodynamic study of interactions between proteins and polysaccharides

    Directory of Open Access Journals (Sweden)

    Monique Barreto Santos

    2016-08-01

    Full Text Available ABSTRACT: The interactions between biological macromolecules have been important for biotechnology, but further understanding is needed to maximize the utility of these interactions. Calorimetric techniques provide information regarding these interactions through the thermal energy that is produced or consumed during the interaction. Notable techniques include differential scanning calorimetry, which generates a thermodynamic profile from temperature scanning, and isothermal titration calorimetry, which provides the thermodynamic parameters directly related to the interaction. This review describes how calorimetric techniques can be used to study interactions between proteins and polysaccharides, providing valuable insight into the thermodynamics of their interaction.

  14. Quantitative thoracic CT techniques in adults: can they be applied in the pediatric population?

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Soon Ho [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Goo, Jin Mo [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University College of Medicine, Cancer Research Institute, Jongno-gu, Seoul (Korea, Republic of); Goo, Hyun Woo [University of Ulsan College of Medicine, Department of Radiology and Research Institute of Radiology, Asan Medical Center, Seoul (Korea, Republic of)

    2013-03-15

    With the rapid evolution of the multidetector row CT technique, quantitative CT has started to be used in clinical studies for revealing a heterogeneous entity of airflow limitation in chronic obstructive pulmonary disease that is caused by a combination of lung parenchymal destruction and remodeling of the small airways in adults. There is growing evidence of a good correlation between quantitative CT findings and pathological findings, pulmonary function test results and other clinical parameters. This article provides an overview of current quantitative thoracic CT techniques used in adults, and how to translate these CT techniques to the pediatric population. (orig.)

  15. Intelligent Adaptation and Personalization Techniques in Computer-Supported Collaborative Learning

    CERN Document Server

    Demetriadis, Stavros; Xhafa, Fatos

    2012-01-01

    Adaptation and personalization have been extensively studied in the CSCL research community, aiming to design intelligent systems that adaptively support eLearning processes and collaboration. Yet, with the fast development of Internet technologies, especially the emergence of new data technologies and mobile technologies, new opportunities and perspectives are opened for advanced adaptive and personalized systems. Adaptation and personalization pose new research and development challenges to today's CSCL systems. In particular, adaptation should be approached in a multi-dimensional way (cognitive, technological, context-aware and personal). Moreover, it should address the particularities of both individual learners and group collaboration. As a consequence, the aim of this book is twofold. On the one hand, it discusses the latest advances and findings in the area of intelligent adaptive and personalized learning systems. On the other hand it analyzes the new implementation perspectives for intelligen...

  16. Nde of Advanced Automotive Composite Materials that Apply Ultrasound Infrared Thermography Technique

    Science.gov (United States)

    Choi, Seung-Hyun; Park, Soo-Keun; Kim, Jae-Yeol

    The infrared thermographic nondestructive inspection technique is a quality inspection and stability assessment method used to diagnose physical characteristics and defects by detecting the infrared rays radiated from an object without destroying it. Recently, nondestructive inspection and assessment using the ultrasound-infrared thermography technique have been widely adopted in diverse areas. The ultrasound-infrared thermography technique uses the phenomenon that an ultrasound wave incident on an object with cracks or defects on its mating surfaces generates local heat on the surface. The car industry increasingly uses composite materials for their light weight, strength, and environmental resistance. In this study, a car piston, one of the composite-material car parts, was examined with the ultrasound-infrared thermography technique for nondestructive testing. This study also examined the effects of frequency and power to optimize the nondestructive inspection.

  17. Flipped parameter technique applied on source localization in energy constraint sensor arrays

    OpenAIRE

    Pavlović Vlastimir D.; Veličković Zoran S.

    2009-01-01

    In this paper a novel flipped parameter technique (FPT) for time delay estimation (TDE) in the source localization problem is described. We propose a passive source localization technique based on the development of an energy-efficient algorithm that can reduce intersensor and interarray communication. We propose a flipped parameter (FP) which can be defined for any sensor in distributed sensor subarrays during the observation period. Unlike classical TDE methods that evaluate cross-correlation funct...

  18. Measurement of the magnitude of force applied by students when learning a mobilisation technique

    OpenAIRE

    Smit, E.; Conradie, M; Wessels, J.; I. Witbooi; Otto, R.

    2003-01-01

    Passive accessory intervertebral movements (PAIVMs) are frequently used by physiotherapists in the assessment and management of patients. Studies investigating the reliability of passive mobilisation techniques have shown conflicting results. Therefore, standardisation of PAIVMs is essential for research and teaching purposes, which could result in better clinical management. In order to standardise graded passive mobilisation techniques, a reliable, easy-to-use, objective measurement tool...

  19. [The applying and foreground of quantifying DNA content by image analysis technique in determining postmortem interval].

    Science.gov (United States)

    Wang, Cheng-yi; Liu, Liang

    2002-02-01

    Image analysis technique (IAT), developed in the 1950s, quantifies changes in every part of an image by sampling, processing, quantifying, computing and analyzing the image information. It has since become a standard quantitative technique in biological and medical research. In the present paper, we briefly review the principle of quantifying DNA content by IAT, the law of DNA degradation in nuclei, and the prospects of this method for determining PMI in forensic pathology.
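
As an illustration of the kind of quantification IAT performs, the sketch below computes the integrated optical density (IOD) of a stained nucleus in a grayscale image, a common proxy for relative DNA content. This is a minimal numpy example; the image, background level and mask are invented for illustration and do not come from the paper.

```python
import numpy as np

def integrated_optical_density(image, background=255.0, mask=None):
    """Integrated optical density (IOD) over a nuclear mask.

    OD per pixel = log10(background / intensity); IOD is the sum of OD
    over the masked (nucleus) pixels. All values here are illustrative.
    """
    img = np.asarray(image, dtype=float)
    img = np.clip(img, 1.0, background)          # avoid log of zero
    od = np.log10(background / img)
    if mask is not None:
        od = od[mask]
    return od.sum()

# Toy 4x4 "nucleus": darker pixels carry more stain, hence more DNA.
img = np.full((4, 4), 255.0)
img[1:3, 1:3] = 25.5                             # OD = 1.0 per pixel
mask = img < 255.0
print(round(integrated_optical_density(img, mask=mask), 3))  # 4.0
```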

  20. Performance values for non destructive assay (NDA) techniques applied to safeguards: the 2002 evaluation by the ESARDA NDA Working Group

    International Nuclear Information System (INIS)

    The first evaluation of NDA performance values undertaken by the ESARDA Working Group for Standards and Non Destructive Assay Techniques (WGNDA) was published in 1993. Almost 10 years later the Working Group decided to review those values, to report about improvements and to issue new performance values for techniques which were not applied in the early nineties, or were at that time only emerging. Non-Destructive Assay techniques have become more and more important in recent years, and they are used to a large extent in nuclear material accountancy and control both by operators and control authorities. As a consequence, the performance evaluation for NDA techniques is of particular relevance to safeguards authorities in optimising Safeguards operations and reducing costs. Performance values are important also for NMAC regulators, to define detection levels, limits for anomalies, goal quantities and to negotiate basic audit rules. This paper presents the latest evaluation of ESARDA Performance Values (EPVs) for the most common NDA techniques currently used for the assay of nuclear materials for Safeguards purposes. The main topics covered by the document are: techniques for plutonium bearing materials: PuO2 and MOX; techniques for U-bearing materials; techniques for U and Pu in liquid form; techniques for spent fuel assay. This issue of the performance values is the result of specific international round robin exercises, field measurements and ad hoc experiments, evaluated and discussed in the ESARDA NDA Working Group. (author)

  1. Non-invasive near-field measurement setup based on modulated scatterer technique applied to microwave tomography

    Science.gov (United States)

    Memarzadeh-Tehran, Hamidreza

    The main focus of this thesis is to address the design and development of a near-field (NF) imaging setup based on the modulated scatterer technique (MST). MST is a well-known approach used in applications where accurate and perturbation-free measurement results are necessary. Of the possible implementations available for making an MST probe, including electrical, optical and mechanical, the optically modulated scatterer OMS was considered in order to provide nearly perturbation-free measurement due to the invisibility of optical fiber to the radio-frequency electromagnetic fields. The OMS probe consists of a commercial, off-the-shelf (COTS) photodiode chip (nonlinear device), a short-dipole antenna acting as a scatterer and a matching network (passive circuit). The latter improves the scattering properties and also increases the sensitivity of the OMS probe within the frequency range in which the matching network is optimized. The radiation characteristics of the probe, including cross-polarization response and omnidirectional sensitivity, were both theoretically and experimentally investigated. Finally, the performance and reliability of the probe was studied by comparing measured near-field distributions on a known field distribution with simulations. Increased imaging speed was obtained using an array of OMS probes, which reduces mechanical movements. Mutual-coupling, switching time and shadowing effect, which all may affect the performance of the array, were investigated. Then, the results obtained by the array were validated in a NF imager by measuring the E-field distribution of an antenna under test (AUT) and comparing it with a simulation. Calibration and data averaging were applied to raw data to compensate the probes for uncertainties in fabrication and interaction between array/AUT and array/receiving antenna. Dynamic range and linearity of the developed NF imager was improved by adding a carrier canceller circuit to the front-end of the receiver. 

  2. Applying data mining techniques to medical time series: an empirical case study in electroencephalography and stabilometry.

    Science.gov (United States)

    Anguera, A; Barreiro, J M; Lara, J A; Lizcano, D

    2016-01-01

    One of the major challenges in the medical domain today is how to exploit the huge amount of data that this field generates. To do this, approaches are required that are capable of discovering knowledge that is useful for decision making in the medical field. Time series are data types that are common in the medical domain and require specialized analysis techniques and tools, especially if the information of interest to specialists is concentrated within particular time series regions, known as events. This research followed the steps specified by the so-called knowledge discovery in databases (KDD) process to discover knowledge from medical time series derived from stabilometric (396 series) and electroencephalographic (200) patient electronic health records (EHR). The view offered in the paper is based on the experience gathered as part of the VIIP project. Knowledge discovery in medical time series has a number of difficulties and implications that are highlighted by illustrating the application of several techniques that cover the entire KDD process through two case studies. This paper illustrates the application of different knowledge discovery techniques for the purposes of classification within the above domains. The accuracy of this application for the two classes considered in each case is 99.86% and 98.11% for epilepsy diagnosis in the electroencephalography (EEG) domain and 99.4% and 99.1% for early-age sports talent classification in the stabilometry domain. The KDD techniques achieve better results than other traditional neural network-based classification techniques. PMID:27293535

  3. Pulsed remote field eddy current technique applied to non-magnetic flat conductive plates

    Science.gov (United States)

    Yang, Binfeng; Zhang, Hui; Zhang, Chao; Zhang, Zhanbin

    2013-12-01

    Non-magnetic metal plates are widely used in aviation and industrial applications. The detection of cracks in thick plate structures, such as multilayered aircraft fuselage structures, has been a challenge for the nondestructive evaluation community. The remote field eddy current (RFEC) technique offers deep penetration and high sensitivity to deeply buried anomalies. However, the RFEC technique is mainly used to evaluate ferromagnetic tubes, and several problems must be solved before it can be extended to the inspection of non-magnetic conductive plates. In this article, the pulsed remote field eddy current (PRFEC) technique for the detection of defects in non-magnetic conducting plates was investigated. First, the principle of the PRFEC technique was analysed, followed by an analysis of the differences between detecting defects in ferromagnetic and non-magnetic planar structures. Three different models of the PRFEC probe were simulated using ANSYS. The location of the transition zone, the defect detection sensitivity and the ability to detect defects in thick plates were analysed and compared for the three probes. The simulation results showed that the probe with a ferrite core had the highest detection capability. The conclusions derived from the simulation study were also validated by experiments.

  4. Statistical learning techniques applied to epidemiology: a simulated case-control comparison study with logistic regression

    Directory of Open Access Journals (Sweden)

    Land Walker H

    2011-01-01

    Full Text Available Abstract Background When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL) techniques with kernels are capable of addressing nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information embedding characteristics and separation boundaries produced by a specific SL technique with logistic regression (LR) modeling representing a parametric approach. The SL technique was comprised of a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. Results The SL approach is capable of generating odds ratios for main effects and risk factor interactions that better capture nonlinear relationships between exposure variables and outcome in comparison with LR. Conclusions The integration of SL methods in epidemiology may improve both the understanding and interpretation of complex exposure/disease relationships.
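
The epidemiologic interpretation mentioned above rests on the fact that a logistic regression coefficient exponentiates to an odds ratio. The following minimal sketch (simulated data and plain Newton-Raphson fitting; not the authors' kernel/perceptron method) recovers an odds ratio near 2 from a known log-odds effect of 0.7:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
exposure = rng.binomial(1, 0.5, n)                 # binary risk factor
logit = -1.0 + 0.7 * exposure                      # true OR = exp(0.7) ~ 2.01
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))      # case/control outcome

X = np.column_stack([np.ones(n), exposure])
beta = np.zeros(2)
for _ in range(25):                                # Newton-Raphson (IRLS)
    mu = 1 / (1 + np.exp(-X @ beta))
    w = mu * (1 - mu)
    beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - mu))

odds_ratio = np.exp(beta[1])                       # OR for the exposure
print(round(odds_ratio, 2))
```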

  5. Flipped parameter technique applied on source localization in energy constraint sensor arrays

    Directory of Open Access Journals (Sweden)

    Pavlović Vlastimir D.

    2009-01-01

    Full Text Available In this paper a novel flipped parameter technique (FPT) for time delay estimation (TDE) in the source localization problem is described. We propose a passive source localization technique based on the development of an energy-efficient algorithm that can reduce intersensor and interarray communication. We propose a flipped parameter (FP) which can be defined for any sensor in distributed sensor subarrays during the observation period. Unlike classical TDE methods that evaluate the cross-correlation function, the FPT requires evaluation based upon a single sensor signal. The computed cross-correlation between a signal and its analytic 'flipped' pair (the flipped correlation) is a smooth function whose peak (the time delay) can be accurately detected. Flipped parameters are sufficient to determine all differential delays of the signals related to the same source. The flipped parameter technique can be used successfully in two-step methods of passive source localization with significantly less energy than classic cross-correlation. The FPT method is especially significant for energy-constrained distributed sensor subarrays. Using synthetic seismic signals, we illustrate the source localization error of the classical and proposed methods in the presence of noise. We demonstrate the performance improvement of the proposed technique in noisy environments in comparison to classic methods using real signals. The proposed technique gives accurate results for both coherent and non-coherent signals.
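
For contrast, the classical cross-correlation TDE baseline that the FPT is compared against can be sketched in a few lines (the synthetic pulse and sampling rate below are illustrative, not from the paper):

```python
import numpy as np

def tde_crosscorr(x, y, fs):
    """Classic time-delay estimation: peak of the cross-correlation."""
    r = np.correlate(y, x, mode="full")
    lag = np.argmax(r) - (len(x) - 1)
    return lag / fs

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
pulse = np.exp(-((t - 0.3) / 0.01) ** 2)   # source waveform at sensor 1
x = pulse
y = np.roll(pulse, 40)                     # same waveform, 40 samples later
print(tde_crosscorr(x, y, fs))             # 0.04 (i.e. 40 ms delay)
```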

  6. Energy saving techniques applied over a nation-wide mobile network

    DEFF Research Database (Denmark)

    Perez, Eva; Frank, Philipp; Micallef, Gilbert;

    2014-01-01

    Traffic carried over wireless networks has grown significantly in recent years and current forecasts show that this trend is expected to continue. However, the rapid mobile data explosion and the need for higher data rates come at a cost of increased complexity and energy consumption of the mobile networks. Although base station equipment is improving its energy efficiency by means of new power amplifiers and increased processing power, additional techniques are required to further reduce the energy consumption. In this paper, we evaluate different energy saving techniques and study their impact on the energy consumption based on a nation-wide network of a leading European operator. By means of an extensive analysis, we show that with the proposed techniques significant energy savings can be realized.

  7. Reformulation linearization technique based branch-and-reduce approach applied to regional water supply system planning

    Science.gov (United States)

    Lan, Fujun; Bayraksan, Güzin; Lansey, Kevin

    2016-03-01

    A regional water supply system design problem that determines pipe and pump design parameters and water flows over a multi-year planning horizon is considered. A non-convex nonlinear model is formulated and solved by a branch-and-reduce global optimization approach. The lower bounding problem is constructed via a three-pronged effort that involves transforming the space of certain decision variables, polyhedral outer approximations, and the Reformulation Linearization Technique (RLT). Range reduction techniques are employed systematically to speed up convergence. Computational results demonstrate the efficiency of the proposed algorithm; in particular, the critical role range reduction techniques could play in RLT based branch-and-bound methods. Results also indicate using reclaimed water not only saves freshwater sources but is also a cost-effective non-potable water source in arid regions. Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/0305215X.2015.1016508.
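
The linearization step at the heart of RLT-style lower bounding can be illustrated with the classic McCormick envelope for a single bilinear term w = x·y over a box; range reduction tightens the box and hence the envelope. This is a generic sketch, not the article's full formulation:

```python
# McCormick relaxation of w = x*y on the box [xl, xu] x [yl, yu]:
#   w >= xl*y + x*yl - xl*yl      w >= xu*y + x*yu - xu*yu
#   w <= xu*y + x*yl - xu*yl      w <= xl*y + x*yu - xl*yu
def mccormick_bounds(x, y, xl, xu, yl, yu):
    lo = max(xl * y + x * yl - xl * yl, xu * y + x * yu - xu * yu)
    hi = min(xu * y + x * yl - xu * yl, xl * y + x * yu - xl * yu)
    return lo, hi

lo, hi = mccormick_bounds(1.5, 2.0, 0.0, 4.0, 1.0, 3.0)
print(lo <= 1.5 * 2.0 <= hi)   # True: the true product lies inside the envelope
```

Shrinking the box via range reduction makes these linear bounds tighter, which is why the reduction techniques speed up convergence of the branch-and-bound search.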

  8. New Control Technique Applied in Dynamic Voltage Restorer for Voltage Sag Mitigation

    Directory of Open Access Journals (Sweden)

    Rosli Omar

    2010-01-01

    Full Text Available The Dynamic Voltage Restorer (DVR) is a power electronics device able to compensate voltage sags on critical loads dynamically. The DVR consists of a VSC, injection transformers, passive filters and energy storage (a lead acid battery). By injecting an appropriate voltage, the DVR restores the voltage waveform and ensures a constant load voltage. Many types of control techniques are used in DVRs for mitigating voltage sags, and the efficiency of the DVR depends on the efficiency of the control technique involved in switching the inverter. Problem statement: Simulation and experimental investigation toward new algorithm development based on SVPWM, and understanding of the nature of the DVR and performance comparisons between the various controller technologies available. The proposed controller using space vector modulation techniques obtains higher amplitude modulation indexes compared with conventional SPWM techniques. Moreover, space vector modulation techniques can be easily implemented using digital processors. Space vector PWM can produce about 15% higher output voltage than standard sinusoidal PWM. Approach: The purpose of this research was to study the implementation of SVPWM in the DVR. The proposed control algorithm was investigated through computer simulation using the PSCAD/EMTDC software. Results: Simulation and experimental results showed the effectiveness and efficiency of the proposed SVPWM-based controller in mitigating voltage sags in low voltage distribution systems. The controller also works well under both balanced and unbalanced voltage conditions. Conclusion/Recommendations: The simulation and experimental results of a DVR using PSCAD/EMTDC software based on the SVPWM technique showed clearly the performance of the DVR in mitigating voltage sags. The DVR operates without any difficulties to inject the appropriate voltage component to correct rapidly any anomaly in the supply voltage to keep the
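
The "about 15% higher output voltage" figure quoted above follows directly from the linear-region modulation limits of the two schemes: SPWM is limited to a phase-voltage amplitude of Vdc/2, while SVPWM reaches Vdc/sqrt(3). A quick numerical check:

```python
import math

v_dc = 1.0
v_max_spwm = v_dc / 2               # linear-region limit, sinusoidal PWM
v_max_svpwm = v_dc / math.sqrt(3)   # linear-region limit, space vector PWM
gain = v_max_svpwm / v_max_spwm - 1
print(f"{gain:.1%}")                # 15.5%
```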

  9. Effect of the reinforcement bar arrangement on the efficiency of electrochemical chloride removal technique applied to reinforced concrete structures

    Energy Technology Data Exchange (ETDEWEB)

    Garces, P. [Dpto. Ing. de la Construccion, Obras Publicas e Infraestructura Urbana, Universidad de Alicante. Apdo. 99, E-03080 Alicante (Spain)]. E-mail: pedro.garces@ua.es; Sanchez de Rojas, M.J. [Dpto. Ing. de la Construccion, Obras Publicas e Infraestructura Urbana, Universidad de Alicante. Apdo. 99, E-03080 Alicante (Spain); Climent, M.A. [Dpto. Ing. de la Construccion, Obras Publicas e Infraestructura Urbana, Universidad de Alicante. Apdo. 99, E-03080 Alicante (Spain)

    2006-03-15

    This paper reports on the research done to find out the effect that different bar arrangements may have on the efficiency of the electrochemical chloride removal (ECR) technique when applied to a reinforced concrete structural member. Five different types of bar arrangements were considered, corresponding to typical structural members such as columns (with single and double bar reinforcing), slabs, beams and footings. ECR was applied in several steps. We observe that the extraction efficiency depends on the reinforcing bar arrangement. A uniform layer set-up favours chloride extraction. Electrochemical techniques were also used to estimate the reinforcing bar corrosion states, as well as measure the corrosion potential, and instant corrosion rate based on the polarization resistance technique. After ECR treatment, a reduction in the corrosion levels is observed falling short of the depassivation threshold.

  10. U P1, an example for advanced techniques applied to high level activity dismantling

    International Nuclear Information System (INIS)

    The U P1 plant on the CEA Marcoule site was dedicated to the processing of spent fuels from the G1, G2 and G3 plutonium-producing reactors. The plant comprises 20,000 m2 of workshops housing about 1000 hot cells. In 1998, a major program for the dismantling and clean-up of the UP1 plant was launched. The CEA has developed new techniques to cope with the complexity of the dismantling operations, including immersive virtual reality, laser cutting, a specific manipulator arm called MAESTRO, and remote handling. (A.C.)

  11. Innovative vibration technique applied to polyurethane foam as a viable substitute for conventional fatigue testing

    Science.gov (United States)

    Peralta, Alexander; Just-Agosto, Frederick; Shafiq, Basir; Serrano, David

    2012-12-01

    Lifetime prediction using three-point bending (TPB) can at times be prohibitively time consuming and costly, whereas vibration testing at higher frequency may potentially save time and revenue. A vibration technique that obtains lifetimes that reasonably match those determined under flexural TPB fatigue is developed. The technique designs the specimen with a procedure based on shape optimization and finite element analysis. When the specimen is vibrated in resonance, a stress pattern that mimics the stress pattern observed under conventional TPB fatigue testing is obtained. The proposed approach was verified with polyurethane foam specimens, resulting in an average error of 4.5% when compared with TPB.

  12. Applied predictive analytics principles and techniques for the professional data analyst

    CERN Document Server

    Abbott, Dean

    2014-01-01

    Learn the art and science of predictive analytics - techniques that get results Predictive analytics is what translates big data into meaningful, usable business information. Written by a leading expert in the field, this guide examines the science of the underlying algorithms as well as the principles and best practices that govern the art of predictive analytics. It clearly explains the theory behind predictive analytics, teaches the methods, principles, and techniques for conducting predictive analytics projects, and offers tips and tricks that are essential for successful predictive mode

  13. Development of Characterization Techniques of Thermodynamic and Physical Properties Applied to the CO2-DMSO Mixture

    OpenAIRE

    Calvignac, Brice; Rodier, Elisabeth; Letourneau, Jean-Jacques; Fages, Jacques

    2009-01-01

    This work is focused on the development of new characterization techniques for physical and thermodynamic properties. These techniques have been validated using the binary system DMSO-CO2, for which several characterization studies are well documented. We focused on the DMSO-rich phase and carried out measurements of volumetric expansion, density, viscosity and CO2 solubility at 298.15, 308.15 and 313.15 K and pressures up to 9 MPa. The experimental procedu...

  14. Resolution enhancement for ultrasonic echographic technique in non destructive testing with an adaptive deconvolution method

    International Nuclear Information System (INIS)

    The ultrasonic echographic technique has specific advantages which makes it essential in a lot of Non Destructive Testing (NDT) investigations. However, the high acoustic power necessary to propagate through highly attenuating media can only be transmitted by resonant transducers, which induces severe limitations of the resolution on the received echograms. This resolution may be improved with deconvolution methods. But one-dimensional deconvolution methods come up against problems in non destructive testing when the investigated medium is highly anisotropic and inhomogeneous (i.e. austenitic steel). Numerous deconvolution techniques are well documented in the NDT literature. But they often come from other application fields (biomedical engineering, geophysics) and we show they do not apply well to specific NDT problems: frequency-dependent attenuation and non-minimum phase of the emitted wavelet. We therefore introduce a new time-domain approach which takes into account the wavelet features. Our method solves the deconvolution problem as an estimation one and is performed in two steps: (i) A phase correction step which takes into account the phase of the wavelet and estimates a phase-corrected echogram. The phase of the wavelet is only due to the transducer and is assumed time-invariant during the propagation. (ii) A band equalization step which restores the spectral content of the ideal reflectivity. The two steps of the method are performed using fast Kalman filters which allow a significant reduction of the computational effort. Synthetic and actual results are given to prove that this is a good approach for resolution improvement in attenuating media
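
The band-equalization idea, restoring the reflectivity spectrum that a band-limited wavelet has suppressed, can be illustrated with a simple frequency-domain Wiener deconvolution. This is a common textbook stand-in, not the paper's fast Kalman-filter implementation, and the wavelet and reflectivity below are invented:

```python
import numpy as np

def wiener_deconv(echogram, wavelet, noise_to_signal=1e-2):
    """Frequency-domain band equalization (Wiener deconvolution)."""
    n = len(echogram)
    W = np.fft.fft(wavelet, n)
    G = np.conj(W) / (np.abs(W) ** 2 + noise_to_signal)   # regularized inverse
    return np.real(np.fft.ifft(np.fft.fft(echogram) * G))

# Sparse reflectivity blurred by a band-limited wavelet (both invented here).
reflectivity = np.zeros(256)
reflectivity[60], reflectivity[130] = 1.0, 0.6
t = np.arange(64.0)
wavelet = np.exp(-0.01 * (t - 16) ** 2) * np.cos(0.8 * (t - 16))
echogram = np.convolve(reflectivity, wavelet)[:256]

est = wiener_deconv(echogram, wavelet)
print(int(np.argmax(np.abs(est))))   # 60 -> the main reflector is recovered in place
```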

  15. Test techniques: A survey paper on cryogenic tunnels, adaptive wall test sections, and magnetic suspension and balance systems

    Science.gov (United States)

    Kilgore, Robert A.; Dress, David A.; Wolf, Stephen W. D.; Britcher, Colin P.

    1989-01-01

    The ability to get good experimental data in wind tunnels is often compromised by things seemingly beyond our control. Inadequate Reynolds number, wall interference, and support interference are three of the major problems in wind tunnel testing. Techniques for solving these problems are available. Cryogenic wind tunnels solve the problem of low Reynolds number. Adaptive wall test sections can go a long way toward eliminating wall interference. A magnetic suspension and balance system (MSBS) completely eliminates support interference. Cryogenic tunnels, adaptive wall test sections, and MSBS are surveyed. A brief historical overview is given and the present state of development and application in each area is described.

  16. Wavelet Techniques Applied to Modeling Transitional/Turbulent Flows in Turbomachinery

    Science.gov (United States)

    1996-01-01

    Computer simulation is an essential part of the design and development of jet engines for the aeropropulsion industry. Engineers concerned with calculating the flow in jet engine components, such as compressors and turbines, need simple engineering models that accurately describe the complex flow of air and gases and that allow them to quickly estimate loads, losses, temperatures, and other design parameters. In this ongoing collaborative project, advanced wavelet analysis techniques are being used to gain insight into the complex flow phenomena. These insights, which cannot be achieved by commonly used methods, are being used to develop innovative new flow models and to improve existing ones. Wavelet techniques are very suitable for analyzing the complex turbulent and transitional flows pervasive in jet engines. These flows are characterized by intermittency and a multitude of scales. Wavelet analysis results in information about these scales and their locations. The distribution of scales is equivalent to the frequency spectrum provided by commonly used Fourier analysis techniques; however, no localization information is provided by Fourier analysis. In addition, wavelet techniques allow conditional sampling analyses of the individual scales, which is not possible by Fourier methods.
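
The localization property described above, which Fourier analysis lacks, is easy to see with a single level of the Haar wavelet transform: the detail coefficients pinpoint where an isolated event occurs. A minimal numpy sketch, unrelated to the actual turbomachinery data:

```python
import numpy as np

def haar_level1(signal):
    """One level of the Haar wavelet transform: averages and details."""
    s = np.asarray(signal, dtype=float).reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return approx, detail

x = np.zeros(16)
x[9] = 1.0                               # an isolated "event" in the signal
approx, detail = haar_level1(x)
print(int(np.argmax(np.abs(detail))))    # 4 -> the event sits in sample pair (8, 9)
```

A Fourier transform of the same spike spreads its energy over every frequency bin, giving the spectrum but no indication of where in time the event occurred.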

  17. X-ray micro-beam techniques and phase contrast tomography applied to biomaterials

    Science.gov (United States)

    Fratini, Michela; Campi, Gaetano; Bukreeva, Inna; Pelliccia, Daniele; Burghammer, Manfred; Tromba, Giuliana; Cancedda, Ranieri; Mastrogiacomo, Maddalena; Cedola, Alessia

    2015-12-01

    A deeper comprehension of the biomineralization (BM) process is at the basis of tissue engineering and regenerative medicine developments. Several in-vivo and in-vitro studies were dedicated to this purpose via the application of 2D and 3D diagnostic techniques. Here, we develop a new methodology, based on different complementary experimental techniques (X-ray phase contrast tomography, micro-X-ray diffraction and micro-X-ray fluorescence scanning technique) coupled to new analytical tools. A qualitative and quantitative structural investigation, from the atomic to the micrometric length scale, is obtained for engineered bone tissues. The high spatial resolution achieved by X-ray scanning techniques allows us to monitor the bone formation at the first-formed mineral deposit at the organic-mineral interface within a porous scaffold. This work aims at providing a full comprehension of the morphology and functionality of the biomineralization process, which is of key importance for developing new drugs for preventing and healing bone diseases and for the development of bio-inspired materials.

  18. Review of Intelligent Techniques Applied for Classification and Preprocessing of Medical Image Data

    Directory of Open Access Journals (Sweden)

    H S Hota

    2013-01-01

    Full Text Available Medical image data such as ECG, EEG, MRI and CT-scan images are the most important means of diagnosing human disease precisely and are widely used by physicians; problems can be clearly identified with the help of these medical images. A robust model can classify medical image data well. In this paper, intelligent techniques such as neural networks and fuzzy logic are explored for MRI medical image data to identify tumors in the human brain, and the need for preprocessing of medical image data is discussed. Classification techniques have been used extensively in the field of medical imaging. The conventional method in medical science for classifying medical image data is human inspection, which may misclassify data and is impractical for large amounts of data and for noisy data. Noisy data may be produced by technical faults of the machine or by human error, and can lead to misclassification of medical image data. We have collected a number of papers based on neural networks and fuzzy logic, along with hybrid techniques, to explore the efficiency and robustness of these models for brain MRI data. Our analysis indicates that an intelligent model combined with data preprocessing using principal component analysis (PCA) and segmentation may be the most competitive model in this domain.
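
As a minimal illustration of the PCA preprocessing step mentioned above (synthetic data, not MRI): projecting onto the leading principal components discards a nearly redundant feature while retaining the dominant variance.

```python
import numpy as np

def pca_scores(X, k):
    """Project rows of X onto the top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    variances = S ** 2 / (len(X) - 1)              # variance along each component
    return Xc @ Vt[:k].T, variances

rng = np.random.default_rng(1)
base = rng.normal(size=(50, 2))
# Three features, one of which nearly duplicates another.
X = np.column_stack([base[:, 0],
                     2 * base[:, 0] + 0.01 * rng.normal(size=50),
                     base[:, 1]])
scores, var = pca_scores(X, 2)
print(var[0] > 100 * var[2])   # True: the last component carries negligible variance
```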

  19. Study of Phase Reconstruction Techniques applied to Smith-Purcell Radiation Measurements

    CERN Document Server

    Delerue, Nicolas; Vieille-Grosjean, Mélissa; Bezshyyko, Oleg; Khodnevych, Vitalii

    2014-01-01

    Measurements of coherent radiation at accelerators typically give the absolute value of the beam profile Fourier transform but not its phase. Phase reconstruction techniques such as the Hilbert transform or Kramers-Kronig reconstruction are used to recover this phase. We report a study of the performance of these methods and of how to optimize the reconstructed profiles.
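
A minimal sketch of Hilbert-transform phase reconstruction: for a minimum-phase profile, the spectral phase equals the negative Hilbert transform of the log magnitude, which the cepstral folding below implements. The 64-point FIR example is invented; it only demonstrates the transform, not the authors' full beam-profile procedure.

```python
import numpy as np

def minimum_phase(magnitude):
    """Minimum phase from a periodic magnitude spectrum via the
    Hilbert-transform relation phi = -H[ln|F|] (cepstral method)."""
    c = np.fft.ifft(np.log(magnitude)).real   # real cepstrum
    n = len(c)
    w = np.zeros(n)
    w[0] = 1
    w[1:(n + 1) // 2] = 2                     # fold anti-causal part onto causal
    if n % 2 == 0:
        w[n // 2] = 1
    return np.fft.fft(w * c).imag             # imaginary part = recovered phase

# Build a known minimum-phase spectrum and recover its phase from |F| alone.
h = np.zeros(64)
h[:3] = [1.0, 0.5, 0.25]                      # minimum-phase FIR (roots inside unit circle)
H = np.fft.fft(h)
phase_rec = minimum_phase(np.abs(H))
print(np.allclose(phase_rec, np.angle(H), atol=1e-8))   # True
```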

  20. Study of Phase Reconstruction Techniques applied to Smith-Purcell Radiation Measurements

    CERN Document Server

    Delerue, Nicolas; Bezshyyko, Oleg; Khodnevych, Vitalii

    2015-01-01

    Measurements of coherent radiation at accelerators typically give the absolute value of the beam profile Fourier transform but not its phase. Phase reconstruction techniques such as the Hilbert transform or Kramers-Kronig reconstruction are used to recover this phase. We report a study of the performance of these methods and of how to optimize the reconstructed profiles.

  1. Applied Statistics: From Bivariate through Multivariate Techniques [with CD-ROM

    Science.gov (United States)

    Warner, Rebecca M.

    2007-01-01

    This book provides a clear introduction to widely used topics in bivariate and multivariate statistics, including multiple regression, discriminant analysis, MANOVA, factor analysis, and binary logistic regression. The approach is applied and does not require formal mathematics; equations are accompanied by verbal explanations. Students are asked…

  2. Do trained practice nurses apply motivational interviewing techniques in primary care consultations?

    NARCIS (Netherlands)

    Noordman, J.; Lee, I. van der; Nielen, M.; Vlek, H.; Weijden, T. van der; Dulmen, S. van

    2012-01-01

    Background: Reducing the prevalence of unhealthy lifestyle behaviour could positively influence health. Motivational interviewing (MI) is used to promote change in unhealthy lifestyle behaviour as part of primary or secondary prevention. Whether MI is actually applied as taught is unknown. Practice

  3. Do trained practice nurses apply motivational interviewing techniques in primary care consultations?

    NARCIS (Netherlands)

    Noordman, J.; Lee, I. van; Nielen, M.; Vlek, H.; Weijden, T. van; Dulmen, S. van

    2012-01-01

    BACKGROUND: Reducing the prevalence of unhealthy lifestyle behaviour could positively influence health. Motivational interviewing (MI) is used to promote change in unhealthy lifestyle behaviour as part of primary or secondary prevention. Whether MI is actually applied as taught is unknown. Practice

  4. Applying the Management-by-Objectives Technique in an Industrial Library

    Science.gov (United States)

    Stanton, Robert O.

    1975-01-01

    An experimental "management-by-objectives" performance system was operated by the Libraries and Information Systems Center of Bell Laboratories during 1973. It was found that, though the system was very effective for work planning and the development of people, difficulties were encountered in applying it to certain classes of employees. (Author)

  5. Time-lapse motion picture technique applied to the study of geological processes

    Science.gov (United States)

    Miller, R.D.; Crandell, D.R.

    1959-01-01

    Light-weight, battery-operated timers were built and coupled to 16-mm motion-picture cameras having apertures controlled by photoelectric cells. The cameras were placed adjacent to Emmons Glacier on Mount Rainier. The film obtained confirms the view that exterior time-lapse photography can be applied to the study of slow-acting geologic processes.

  6. Adapting desorption mass spectrometry and pattern recognition techniques to petroleum fluid correlation studies

    Energy Technology Data Exchange (ETDEWEB)

    Hickey, J.C.; Durfee, S.L.

    1987-05-01

    Petroleum explorationists are often faced with determining the relationship between the products of wells completed in lithologies that may have some spatial or communicative relationship. Conventional methods of sampling and analysis are often time-consuming and expensive. A new method for the sampling, analysis, and computerized data interpretation of the C2-C16 fraction of crude oil and natural gas is reported here. Controlled-temperature headspace sampling of crude oils and direct pressure-equilibrated natural gas exposure of carbon adsorption wires have been successfully applied to the sampling of the volatile fractions of petroleum fluids. Thermal vacuum desorption followed by mass spectrometric analysis of these volatile organic compounds is a rapid and sensitive method for obtaining detailed information on the distribution (fingerprint) of the components in a given sample; however, the resulting information is too complex for direct human interpretation. Techniques of computerized chemical pattern recognition such as principal components analysis (PCA) with graphical rotation, discriminant analysis, and similarity analysis (SIMCA) have proven useful in establishing the relationships between potentially correlated samples via the fingerprints of their volatile fractions. Studies have been conducted on multiple samples from numerous continental basins. The results of several of these studies will be presented to demonstrate the applicability of this new, rapid, cost-efficient approach to correlation studies.
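The PCA step of such a correlation study can be sketched with a plain NumPy principal components analysis on synthetic "fingerprints". The two sample families, the channel count and the noise level are invented for illustration and are not the authors' data.

```python
import numpy as np

def pca(fingerprints, n_components=2):
    """Project mass-spectral 'fingerprints' (rows = samples,
    columns = m/z channel intensities) onto their principal axes."""
    X = fingerprints - fingerprints.mean(axis=0)      # mean-center
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # SVD of centered data
    scores = U[:, :n_components] * s[:n_components]   # sample coordinates
    explained = s**2 / np.sum(s**2)                   # variance fractions
    return scores, explained

# Two synthetic 'oil families' differing in their volatile fingerprint
rng = np.random.default_rng(0)
family_a = rng.normal(0.0, 0.1, size=(10, 50)) + np.linspace(0, 1, 50)
family_b = rng.normal(0.0, 0.1, size=(10, 50)) + np.linspace(1, 0, 50)
X = np.vstack([family_a, family_b])
scores, explained = pca(X)
```

In a scores plot of the first two components, correlated samples cluster together — the graphical-rotation and SIMCA steps of the paper then refine such groupings.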

  7. Magnetic resonance techniques applied to the diagnosis and treatment of Parkinson’s disease

    Directory of Open Access Journals (Sweden)

    Benito eDe Celis Alonso

    2015-07-01

    Parkinson’s disease affects at least 10 million people worldwide. It is a neurodegenerative disease which is currently diagnosed by neurological examination. No neuroimaging investigation or blood biomarker is available to aid diagnosis and prognosis. Most effort toward diagnosis using magnetic resonance has been focused on the use of structural/anatomical neuroimaging and diffusion tensor imaging. However, deep brain stimulation, a current strategy for treating Parkinson’s disease, is guided by magnetic resonance imaging. For clinical prognosis, diagnosis and follow-up investigations, blood oxygen level–dependent magnetic resonance imaging, diffusion tensor imaging, spectroscopy and transcranial magnetic stimulation have been used. These techniques represent the state of the art in the last five years. Here, we focus on magnetic resonance techniques for the diagnosis and treatment of Parkinson’s disease.

  8. Modelling laser speckle photographs of decayed teeth by applying a digital image information technique

    Science.gov (United States)

    Ansari, M. Z.; da Silva, L. C.; da Silva, J. V. P.; Deana, A. M.

    2016-09-01

    We report on the application of a digital image model to assess early carious lesions on teeth. Lesions in the early stages of decay were illuminated with a laser and the laser speckle images were obtained. Due to the differences in optical properties between healthy and carious tissue, the two regions produced different scatter patterns. The digital image information technique allowed us to produce colour-coded 3D surface plots of the intensity information in the speckle images, where the height (on the z-axis) and the colour in the rendering correlate with the intensity of a pixel in the image. The quantitative changes in colour component density enhance the contrast between decayed and sound tissue, and visualization of the carious lesions becomes significantly more evident. Therefore, the proposed technique may be adopted in the early diagnosis of carious lesions.
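The colour-coded 3D surface representation described above can be sketched by mapping pixel intensity to both height and colour. The speckle frame below is synthetic (Rayleigh-distributed, as fully developed speckle roughly is), and the low-scatter "carious" region is a hypothetical placement, not the paper's data.

```python
import numpy as np

def intensity_surface(image):
    """Build colour-coded 3D surface data from a speckle image:
    height (z) and colour both encode pixel intensity."""
    h, w = image.shape
    X, Y = np.meshgrid(np.arange(w), np.arange(h))
    Z = image.astype(float)
    colors = (Z - Z.min()) / (np.ptp(Z) + 1e-12)  # normalised colours in [0, 1]
    return X, Y, Z, colors

# Synthetic speckle frame: 'sound' tissue scatters more than a carious spot
rng = np.random.default_rng(1)
img = rng.rayleigh(scale=1.0, size=(64, 64))
img[20:40, 20:40] *= 0.3          # hypothetical low-scatter (carious) region
X, Y, Z, C = intensity_surface(img)
lesion = Z[20:40, 20:40].mean()   # mean surface height inside the lesion
sound = Z[:10, :10].mean()        # mean surface height in sound tissue
```

The returned arrays are ready for `Axes3D.plot_surface(X, Y, Z, facecolors=cmap(C))` in matplotlib; the lesion appears as a visibly lower, differently coloured plateau.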

  9. Magnetic Resonance Techniques Applied to the Diagnosis and Treatment of Parkinson’s Disease

    Science.gov (United States)

    de Celis Alonso, Benito; Hidalgo-Tobón, Silvia S.; Menéndez-González, Manuel; Salas-Pacheco, José; Arias-Carrión, Oscar

    2015-01-01

    Parkinson’s disease (PD) affects at least 10 million people worldwide. It is a neurodegenerative disease, which is currently diagnosed by neurological examination. No neuroimaging investigation or blood biomarker is available to aid diagnosis and prognosis. Most effort toward diagnosis using magnetic resonance (MR) has been focused on the use of structural/anatomical neuroimaging and diffusion tensor imaging (DTI). However, deep brain stimulation, a current strategy for treating PD, is guided by MR imaging (MRI). For clinical prognosis, diagnosis, and follow-up investigations, blood oxygen level-dependent MRI, DTI, spectroscopy, and transcranial magnetic stimulation have been used. These techniques represent the state of the art in the last 5 years. Here, we focus on MR techniques for the diagnosis and treatment of Parkinson’s disease. PMID:26191037

  10. Applying Data Mining Technique For The Optimal Usage Of Neonatal Incubator

    Directory of Open Access Journals (Sweden)

    Hagar Fady

    2012-07-01

    This research aims to provide an intelligent tool to predict the incubator Length of Stay (LOS) of infants, which shall increase the utilization and management of infant incubators. The data sets of the Egyptian Neonatal Network (EGNN) were employed and the Oracle Data Miner (ODM) tool was used for the analysis and prediction of the data. The obtained results indicated that data mining is an appropriate and sufficiently sensitive method to predict the required LOS of premature and ill infants.
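The abstract does not state which model ODM applied, so as a hedged stand-in here is a minimal k-nearest-neighbour LOS predictor. The features (gestational age, birth weight) and the LOS values are invented for illustration and are not EGNN data.

```python
import numpy as np

def knn_predict_los(train_X, train_y, query, k=3):
    """Predict incubator length of stay as the mean LOS of the
    k most similar historical infants (Euclidean distance)."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(d)[:k]
    return train_y[nearest].mean()

# Hypothetical features: [gestational age (weeks), birth weight (kg)]
train_X = np.array([[28, 1.1], [30, 1.4], [32, 1.8], [36, 2.6], [38, 3.1]])
train_y = np.array([45.0, 35.0, 25.0, 8.0, 3.0])  # LOS in days (illustrative)
pred = knn_predict_los(train_X, train_y, np.array([29, 1.2]))
```

In practice the features would be standardised first, since weeks and kilograms live on very different scales.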

  11. The radiation techniques of tomotherapy & intensity-modulated radiation therapy applied to lung cancer

    OpenAIRE

    Zhu, Zhengfei; Fu, Xiaolong

    2015-01-01

    Radiotherapy (RT) plays an important role in the management of lung cancer. Development of radiation techniques is a possible way to improve the effect of RT by reducing toxicities through better sparing the surrounding normal tissues. This article will review the application of two forms of intensity-modulated radiation therapy (IMRT), fixed-field IMRT and helical tomotherapy (HT) in lung cancer, including dosimetric and clinical studies. The advantages and potential disadvantages of these t...

  12. Vibrational techniques applied to photosynthesis: Resonance Raman and fluorescence line-narrowing.

    Science.gov (United States)

    Gall, Andrew; Pascal, Andrew A; Robert, Bruno

    2015-01-01

    Resonance Raman spectroscopy may yield precise information on the conformation of, and the interactions assumed by, the chromophores involved in the first steps of the photosynthetic process. Selectivity is achieved via resonance with the absorption transition of the chromophore of interest. Fluorescence line-narrowing spectroscopy is a complementary technique, in that it provides the same level of information (structure, conformation, interactions), but in this case for the emitting pigment(s) only (whether isolated or in an ensemble of interacting chromophores). The selectivity provided by these vibrational techniques allows for the analysis of pigment molecules not only when they are isolated in solvents, but also when embedded in soluble or membrane proteins and even, as shown recently, in vivo. They can be used, for instance, to relate the electronic properties of these pigment molecules to their structure and/or the physical properties of their environment. These techniques are even able to follow subtle changes in chromophore conformation associated with regulatory processes. After a short introduction to the physical principles that govern resonance Raman and fluorescence line-narrowing spectroscopies, the information content of the vibrational spectra of chlorophyll and carotenoid molecules is described in this article, together with the experiments which helped in determining which structural parameter(s) each vibrational band is sensitive to. A selection of applications is then presented, in order to illustrate how these techniques have been used in the field of photosynthesis, and what type of information has been obtained. This article is part of a Special Issue entitled: Vibrational spectroscopies and bioenergetic systems. PMID:25268562

  13. Geostatistical techniques applied to mapping limnological variables and quantify the uncertainty associated with estimates

    Directory of Open Access Journals (Sweden)

    Cristiano Cigagna

    2015-12-01

    Aim: This study aimed to map the concentrations of limnological variables in a reservoir using semivariogram geostatistical techniques and Kriging estimates for unsampled locations, together with the uncertainty associated with those estimates. Methods: Twenty-seven sampling points were established in a regular mesh, and the concentrations of chlorophyll-a, total nitrogen and total phosphorus were determined. A spatial variability analysis was then performed, the semivariogram function was modeled for all variables, and the variographic mathematical models were established. The main geostatistical estimation technique was ordinary Kriging. A dense grid of points was estimated for each variable, forming the basis of the interpolated maps. Results: Semivariogram analysis identified the random component as not significant for the estimation of chlorophyll-a, but significant for total nitrogen and total phosphorus. Geostatistical maps were produced by Kriging for each variable and the respective standard deviations of the estimates were calculated. These measurements allowed us to map the concentrations of limnological variables throughout the reservoir, while the standard deviations indicated the quality of the estimates and, consequently, the reliability of the final product. Conclusions: Using Kriging to estimate a dense mesh of points, together with the error dispersion (standard deviation of the estimate), made it possible to produce quality, reliable maps of the estimated variables. Concentrations of limnological variables were in general higher in the lacustrine zone and decreased towards the riverine zone. Chlorophyll-a and total nitrogen were correlated when comparing the grids generated by Kriging. Although the use of Kriging is more laborious compared to other interpolation methods, this
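The first step of the workflow above, the experimental semivariogram, can be sketched in plain NumPy. The 27-point regular mesh, the spatial trend and the lag bins below are invented stand-ins for the reservoir data.

```python
import numpy as np

def experimental_semivariogram(coords, values, lag_edges):
    """Experimental semivariogram: gamma(h) = mean of 0.5*(z_i - z_j)^2
    over point pairs whose separation falls in each lag bin."""
    i, j = np.triu_indices(len(values), k=1)        # all distinct pairs
    d = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq = 0.5 * (values[i] - values[j]) ** 2
    gamma = np.full(len(lag_edges) - 1, np.nan)
    for b in range(len(lag_edges) - 1):
        mask = (d >= lag_edges[b]) & (d < lag_edges[b + 1])
        if mask.any():
            gamma[b] = sq[mask].mean()
    return gamma

# Hypothetical 27-point regular mesh with a smooth spatial trend
gx, gy = np.meshgrid(np.linspace(0, 100, 9), np.linspace(0, 50, 3))
coords = np.column_stack([gx.ravel(), gy.ravel()])
rng = np.random.default_rng(3)
values = 0.05 * coords[:, 0] + rng.normal(0, 0.2, 27)  # trend + nugget noise
gamma = experimental_semivariogram(coords, values,
                                   np.array([0.0, 25.0, 50.0, 75.0, 100.0]))
```

A variogram model (spherical, exponential, ...) fitted to `gamma` then supplies the weights for ordinary Kriging and its estimation standard deviation.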

  14. Improving throughput and user experience for information intensive websites by applying HTTP compression technique.

    Science.gov (United States)

    Malla, Ratnakar

    2008-11-06

    HTTP compression is a technique specified as part of the W3C HTTP 1.0 standard. It allows HTTP servers to take advantage of GZIP compression technology that is built into the latest browsers. A brief survey of medical informatics websites shows that compression is not enabled. With compression enabled, downloaded file sizes are reduced by more than 50% and typical transaction time is also reduced from 20 to 8 minutes, thus providing a better user experience.
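The size reduction claimed above is easy to reproduce offline with Python's standard gzip module on a repetitive, information-intensive HTML payload (the markup below is an invented sample):

```python
import gzip

# A repetitive HTML payload, typical of information-intensive result pages
html = ("<tr><td class='result'>record</td><td class='link'>"
        "<a href='/doc'>view</a></td></tr>\n" * 500).encode("utf-8")

# What the server would send alongside 'Content-Encoding: gzip'
compressed = gzip.compress(html)
ratio = len(compressed) / len(html)   # well under 0.5 for markup like this
```

A real server must first check the request's `Accept-Encoding` header before compressing, and the decompression on the client is transparent to the user.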

  15. Impact of adaptive proactive reconfiguration technique on Vmin and lifetime of SRAM caches

    OpenAIRE

    Pouyan, Peyman; Amat Bertran, Esteve; Barajas Ojeda, Enrique; Rubio Sola, Jose Antonio

    2014-01-01

    This work presents a test and measurement technique to monitor the aging and process-variation status of SRAM cells as an aging-aware design technique. We have then verified our technique with an implemented chip. The obtained aging information is utilized to guide our proactive strategies, and to track the impact of aging in new reconfiguration techniques for cache memory structures. Our proactive techniques improve the reliability, extend the SRAMs lifetime, and reduce the Vmin drift in presen...

  16. Applying the sterile insect technique to the control of insect pests

    International Nuclear Information System (INIS)

    The sterile insect technique involves the mass-rearing of insects, which are sterilized by gamma rays from a 60Co source before being released in a controlled fashion into nature. Matings between the sterile insects released and native insects produce no progeny, and so if enough of these matings occur the pest population can be controlled or even eradicated. A modification of the technique, especially suitable for the suppression of moths and butterflies, is called the F1, or inherited sterility, method. In this, lower radiation doses are used such that the released males are only partially sterile (30-60%) and the females are fully sterile. When released males mate with native females some progeny are produced, but they are completely sterile. Thus, full expression of the sterility is delayed by one generation. This article describes the use of the sterile insect technique in controlling the screwworm fly, the tsetse fly, the medfly, the pink bollworm and the melon fly, and of the F1 sterility method in the eradication of local gypsy moth infestations. 18 refs, 5 figs, 1 tab

  17. Quantification of material slippage in the iliotibial tract when applying the partial plastination clamping technique.

    Science.gov (United States)

    Sichting, Freddy; Steinke, Hanno; Wagner, Martin F-X; Fritsch, Sebastian; Hädrich, Carsten; Hammer, Niels

    2015-09-01

    The objective of this study was to evaluate the potential of the partial plastination technique in minimizing material slippage and to discuss the effects on the tensile properties of thin dense connective tissue. The ends of twelve iliotibial tract samples were primed with polyurethane resin and covered by plastic plates to provide sufficient grip between the clamps. The central part of the samples remained in an anatomically unfixed condition. Strain data of twelve partially plastinated samples and ten samples in a completely anatomically unfixed state were obtained using uniaxial crosshead displacement and an optical image tracking technique. Testing of agreement between the strain data revealed ongoing but markedly reduced material slippage in partially plastinated samples compared to the unfixed samples. The mean measurement error introduced by material slippage was up to 18.0% in partially plastinated samples. These findings might complement existing data on measurement errors during material testing and highlight the importance of individual quantitative evaluation of errors that come along with self-made clamping techniques. PMID:26005842

  18. IPR techniques applied to a multimedia environment in the HYPERMEDIA project

    Science.gov (United States)

    Munoz, Alberto; Ribagorda, Arturo; Sierra, Jose M.

    1999-04-01

    Watermarking techniques have proven to be a good method for protecting intellectual property rights in digital formats. However, the ease of processing information on digital platforms also offers many opportunities for eliminating marks embedded in the data, owing to the wide variety of techniques for modifying information in digital formats. This paper analyzes a selection of the most interesting methods for image watermarking in order to test their qualities. The comparison of these watermarking techniques has revealed interesting new lines of work. Some changes and extensions to these methods are proposed to increase their robustness against common attacks and specific watermark attacks. This work was carried out to provide the HYPERMEDIA project with an efficient tool for protecting IPR. The objective of this project is to establish an experimental stage for handling and delivering continuous multimedia material (audiovisuals) in a multimedia service environment, allowing the user to navigate the hyperspace through databases belonging to actors of the service chain while protecting the IPR of authors and owners.
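The paper does not name its candidate methods in this abstract, so as a toy illustration of embedding a mark in image data, here is a least-significant-bit (LSB) scheme — famously fragile to exactly the attacks the paper studies, which is why robust methods are needed:

```python
import numpy as np

def embed_lsb(image, bits):
    """Embed watermark bits in the least-significant bit of the
    first len(bits) pixels (raster order)."""
    wm = image.flatten().copy()
    wm[:len(bits)] = (wm[:len(bits)] & 0xFE) | bits   # clear LSB, set bit
    return wm.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Read the watermark back out of the pixel LSBs."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(42)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
marked = embed_lsb(img, mark)
recovered = extract_lsb(marked, len(mark))
```

Each pixel changes by at most one grey level, so the mark is invisible; but any re-quantisation, compression or filtering destroys it, motivating the transform-domain methods compared in the paper.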

  19. Applying Data-mining techniques to study drought periods in Spain

    Science.gov (United States)

    Belda, F.; Penades, M. C.

    2010-09-01

    Data-mining is a technique that can be used to interact with large databases and to help discover relations between parameters by extracting information from massive and multiple data archives. Drought affects many economic and social sectors, from agriculture to transportation, including urban water deficits and the development of modern industries. Given these impacts and the geographical and temporal distribution of drought, it is difficult to find a single definition of drought. A better understanding of climatic indices is necessary to reduce the impacts of drought and to facilitate quick decisions regarding this problem. The main objective is to analyze drought periods from 1950 to 2009 in Spain. We use several kinds of information, in different formats, from different sources and transmission modes. We use satellite-based vegetation indices and dryness indices for several temporal periods, daily and monthly precipitation and temperature data, and soil moisture data from a numerical weather model. We mainly calculate the Standardized Precipitation Index (SPI), which has been used widely in the literature. We use OLAP-Mining techniques to discover association rules between remote-sensing, numerical-weather-model and climatic-index data. Time-series data-mining techniques organize data as a sequence of events, with each event having a time of recurrence, to cluster the data into groups of records or clusters with similar characteristics. A prior climatological classification is necessary if we want to study drought periods across all of Spain.
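The SPI calculation at the core of the study can be sketched as follows: fit a gamma distribution to the precipitation record, then map its cumulative probabilities to standard-normal quantiles. The monthly record below is synthetic, not the Spanish data.

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Standardized Precipitation Index: fit a gamma distribution to
    the precipitation record, then map its CDF to standard-normal
    quantiles, so 0 = median conditions and negative values = drought."""
    shape, loc, scale = stats.gamma.fit(precip, floc=0)  # location fixed at 0
    cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    cdf = np.clip(cdf, 1e-6, 1 - 1e-6)                   # keep ppf finite
    return stats.norm.ppf(cdf)

rng = np.random.default_rng(7)
monthly_precip = rng.gamma(shape=2.0, scale=30.0, size=360)  # 30 years, mm
index = spi(monthly_precip)
```

Operationally the fit is done per calendar month and per accumulation period (SPI-3, SPI-6, ...), and months with zero rainfall need a mixed-distribution correction; both refinements are omitted here.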

  20. An efficient permeability scaling-up technique applied to the discretized flow equations

    Energy Technology Data Exchange (ETDEWEB)

    Urgelli, D.; Ding, Yu [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

    Grid-block permeability scaling-up for numerical reservoir simulations has been discussed for a long time in the literature. It is now recognized that a full permeability tensor is needed to get an accurate reservoir description at large scale. However, two major difficulties are encountered: (1) grid-block permeability cannot be properly defined because it depends on boundary conditions; (2) discretization of flow equations with a full permeability tensor is not straightforward and little work has been done on this subject. In this paper, we propose a new method, which allows us to get around both difficulties. As the two major problems are closely related, a global approach will preserve the accuracy. So, in the proposed method, the permeability up-scaling technique is integrated in the discretized numerical scheme for flow simulation. The permeability is scaled-up via the transmissibility term, in accordance with the fluid flow calculation in the numerical scheme. A finite-volume scheme is particularly studied, and the transmissibility scaling-up technique for this scheme is presented. Some numerical examples are tested for flow simulation. This new method is compared with some published numerical schemes for full permeability tensor discretization where the full permeability tensor is scaled-up through various techniques. Comparing the results with fine grid simulations shows that the new method is more accurate and more efficient.
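The transmissibility-based averaging discussed above reduces, in the scalar one-dimensional case, to harmonic averaging; the paper's full-tensor finite-volume treatment generalises this. A minimal sketch of the scalar case (illustrative values, not the paper's test cases):

```python
import numpy as np

def upscale_series(perms, lengths):
    """Effective permeability of layers in series (flow normal to the
    layering): the length-weighted harmonic average."""
    perms = np.asarray(perms, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    return lengths.sum() / np.sum(lengths / perms)

def interblock_transmissibility(k1, k2, dx1, dx2, area=1.0):
    """Two-point flux transmissibility between neighbouring grid blocks:
    harmonic combination of the two half-block resistances."""
    return area / (dx1 / (2.0 * k1) + dx2 / (2.0 * k2))

# A high-permeability and a low-permeability layer of equal thickness
k_eff = upscale_series([100.0, 10.0], [1.0, 1.0])   # dominated by the low k
```

The harmonic mean (here about 18.2, far below the arithmetic mean of 55) reflects that flow across layers is throttled by the least permeable one; the same structure appears in the transmissibility term the authors scale up.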

  1. A Reinforcement Plate for Partially Thinned Pressure Vessel Designed to Measure the Thickness of Vessel Wall Applying Ultrasonic Technique

    International Nuclear Information System (INIS)

    It is very hard to preserve the wall thickness of a vessel because of erosion or corrosion over time. Therefore, the wall thicknesses of heaters in power plants are periodically measured by ultrasonic testing. If the wall thickness is judged insufficient to ensure integrity, a reinforcement plate is welded onto the thinned area of the vessel. The overlay weld of the reinforcement plate on the thinned vessel is normally a fillet weld. As shown in the references, a reinforcement plate of adequate thickness performs its role well before the vessel wall is perforated by thinning. However, the integrity of the shell cannot be ensured after the vessel wall is perforated, because the weldment is then directly exposed to the shell-side pressure. Therefore, the thickness of the thinned area under the reinforcement plate must be measured continuously to preserve integrity and to plan the fabrication of a replacement vessel. It is impossible to apply the conventional ultrasonic thickness measurement technique after the reinforcement plate is welded onto the shell. In this paper, a new reinforcement plate is introduced that makes it possible to measure the wall thickness under the reinforcement plate using the ultrasonic technique, together with a method to evaluate the structural integrity of the fillet weldment for a reinforcement plate welded on a pressure vessel.
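The underlying pulse-echo measurement is simple in principle: the ultrasonic pulse crosses the wall twice, so thickness is half the round-trip time multiplied by the sound velocity. A minimal sketch (the steel velocity is a typical assumed value, not a figure from the paper):

```python
def wall_thickness(round_trip_time_s, velocity_m_s=5900.0):
    """Pulse-echo ultrasonic thickness: the pulse crosses the wall
    twice, so thickness = v * t / 2. 5900 m/s is a typical
    longitudinal-wave velocity in carbon steel (assumed value)."""
    return velocity_m_s * round_trip_time_s / 2.0

# A 2-microsecond round trip corresponds to a 5.9 mm wall
t_mm = wall_thickness(2.0e-6) * 1000.0
```

The paper's contribution is geometric — shaping the reinforcement plate so a probe can still reach the thinned wall — while the time-of-flight relation itself is unchanged.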

  2. Towards Applying Text Mining Techniques on Software Quality Standards and Models

    OpenAIRE

    Kelemen, Zádor Dániel; Kusters, Rob; Trienekens, Jos; Balla, Katalin

    2013-01-01

    Many quality approaches are described in hundreds of pages of text. Manual processing of this information consumes plenty of resources. In this report we present a text mining approach applied to CMMI, a well-known and widely used quality approach. The text mining analysis can provide a quick overview of the scope of a quality approach. The results of the analysis could accelerate the understanding and the selection of quality approaches.
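A first text-mining pass of the kind described — tokenise, drop stopwords, count terms — can be sketched with the standard library. The excerpt and stopword list below are invented stand-ins for the CMMI text.

```python
import re
from collections import Counter

STOPWORDS = frozenset({"the", "of", "and", "a", "to", "is"})

def term_frequencies(text, stopwords=STOPWORDS):
    """Crude text-mining pass: lowercase, tokenise, drop stopwords,
    count terms. A real analysis would run over hundreds of pages."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t not in stopwords)

# Tiny stand-in for a quality-standard excerpt (illustrative text)
excerpt = ("The process area requires that the process is defined. "
           "Process improvement depends on measurement of the process.")
freq = term_frequencies(excerpt)
top_term, top_count = freq.most_common(1)[0]
```

The dominant terms of each chapter give exactly the kind of quick scope overview the report aims for; stemming and TF-IDF weighting would be natural next steps.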

  3. An Adaptive Clutter Suppression Technique for Moving Target Detector in Pulse Doppler Radar

    Directory of Open Access Journals (Sweden)

    A. Mandal

    2014-04-01

    An adaptive system processes the received signals, which are accompanied by clutter, using an architecture with time-varying parameters. In this paper, an adaptive moving target detector has been designed to meet the challenges of target detection amidst various levels of clutter. The approach is able to overcome the inherent limitations of conventional systems (e.g., Moving Target Indicator, Fast Fourier Transform) that have predefined coefficients. For this purpose, an optimal design of a transversal filter is proposed along with various weight-selection maps to improve the probability of detection in ground-based surveillance radar. A modified-LMS-algorithm-based adaptive FIR filter has been implemented, utilizing a modular CORDIC unit as the main processing element for filtering as well as weight updating, to suppress clutter of various intensities. Extensive MATLAB simulations have been performed using various levels of clutter input to show the effectiveness of the adaptive moving target detector (AMTD).
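The adaptive-filter core of such a detector can be sketched with a normalised LMS (NLMS) canceller; the paper's modified LMS and its CORDIC hardware realisation are not reproduced here. The sinusoidal "clutter" and the phase-shifted reference channel are synthetic assumptions.

```python
import numpy as np

def nlms_cancel(reference, received, n_taps=4, mu=0.5, eps=1e-8):
    """Normalised-LMS adaptive transversal filter: estimate the clutter
    from a correlated reference channel and subtract it; any target
    echo would remain in the error signal."""
    w = np.zeros(n_taps)
    error = np.zeros(len(received))
    for n in range(n_taps, len(received)):
        x = reference[n - n_taps:n][::-1]       # tap-delay line
        y = w @ x                               # clutter estimate
        e = received[n] - y                     # clutter-suppressed output
        w += mu * e * x / (x @ x + eps)         # normalised weight update
        error[n] = e
    return error

n = 4000
t = np.arange(n)
clutter = np.sin(0.1 * t)             # strong stationary clutter
reference = np.sin(0.1 * t + 0.5)     # phase-shifted reference channel
received = clutter                    # clutter-only dwell, for clarity
residual = nlms_cancel(reference, received)
```

After convergence the residual clutter power drops by orders of magnitude; in a full detector the error signal would then feed the threshold/detection stage.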

  4. Assessment of ground-based monitoring techniques applied to landslide investigations

    Science.gov (United States)

    Uhlemann, S.; Smith, A.; Chambers, J.; Dixon, N.; Dijkstra, T.; Haslam, E.; Meldrum, P.; Merritt, A.; Gunn, D.; Mackay, J.

    2016-01-01

    A landslide complex in the Whitby Mudstone Formation at Hollin Hill, North Yorkshire, UK is periodically re-activated in response to rainfall-induced pore-water pressure fluctuations. This paper compares long-term measurements (i.e., 2009-2014) obtained from a combination of monitoring techniques that have been employed together for the first time on an active landslide. The results highlight the relative performance of the different techniques, and can provide guidance for researchers and practitioners for selecting and installing appropriate monitoring techniques to assess unstable slopes. Particular attention is given to the spatial and temporal resolutions offered by the different approaches that include: Real Time Kinematic-GPS (RTK-GPS) monitoring of a ground surface marker array, conventional inclinometers, Shape Acceleration Arrays (SAA), tilt meters, active waveguides with Acoustic Emission (AE) monitoring, and piezometers. High spatial resolution information has allowed locating areas of stability and instability across a large slope. This has enabled identification of areas where further monitoring efforts should be focused. High temporal resolution information allowed the capture of 'S'-shaped slope displacement-time behaviour (i.e. phases of slope acceleration, deceleration and stability) in response to elevations in pore-water pressures. This study shows that a well-balanced suite of monitoring techniques that provides high temporal and spatial resolutions on both measurement and slope scale is necessary to fully understand failure and movement mechanisms of slopes. In the case of the Hollin Hill landslide it enabled detailed interpretation of the geomorphological processes governing landslide activity. It highlights the benefit of regularly surveying a network of GPS markers to determine areas for installation of movement monitoring techniques that offer higher resolution both temporally and spatially. The small sensitivity of tilt meter measurements

  5. ADAPTING E-COURSES USING DATA MINING TECHNIQUES - PDCA APPROACH AND QUALITY SPIRAL

    OpenAIRE

    Marija Blagojevic; Zivadin Micic

    2013-01-01

    This paper presents an approach to adapting e-courses based on an original PDCA (Plan, Do, Check, Act) platform and the quality spiral. An algorithm for the adaptation of e-courses was proposed and implemented in the Moodle Learning Management System at the Faculty of Technical Sciences, Cacak. The approach is primarily based on improving LMS (Learning Management Systems) or e-learning systems by modifying the electronic structure of the courses by predicting the behaviour patterns of the us...

  6. Electrical hand tools and techniques: A compilation. [utilization of space technology for tools and adapters

    Science.gov (United States)

    1974-01-01

    The utilization of space technology for developing tools, adapters, fixtures, and procedures for assembling, installing, and servicing electrical components and equipment is discussed. Some of the items considered are: (1) pivotal screwdriver, (2) termination locator tool for shielded cables, (3) solder application tools, (4) insulation and shield removing tool, and (5) torque wrench adapter for cable connector engaging ring. Diagrams of the various tools and devices are provided.

  7. Applying Squeezing Technique to Clayrocks: Lessons Learned from Experiments at Mont Terri Rock Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, A. M.; Sanchez-Ledesma, D. M.; Tournassat, C.; Melon, A.; Gaucher, E.; Astudillo, E.; Vinsot, A.

    2013-07-01

    Knowledge of the pore water chemistry in clay rock formations plays an important role in determining radionuclide migration in the context of nuclear waste disposal. Among the different in situ and ex situ techniques for pore water sampling in clay sediments and soils, the squeezing technique dates back 115 years. Although various studies have examined the reliability and representativeness of squeezed pore waters, most of them were performed on high-porosity, high-water-content, unconsolidated clay sediments. Very few tackled the analysis of squeezed pore water from low-porosity, low-water-content, highly consolidated clay rocks. In this work, a specially designed and fabricated one-dimensional compression cell with two-directional fluid flow was used to extract and analyse the pore water composition of Opalinus Clay core samples from Mont Terri (Switzerland). The reproducibility of the technique is good, and no ionic ultrafiltration, chemical fractionation or anion exclusion was found in the range of pressures analysed (70-200 MPa). Pore waters extracted in this range of pressures do not decrease in concentration, which would otherwise indicate dilution by mixing of the free pore water with the outer layers of double-layer water (Donnan water). A threshold (safety) squeezing pressure of 175 MPa was established to avoid membrane effects (ion filtering, anion exclusion, etc.) from clay particles induced by increasing pressures. Moreover, a direct comparison against borehole waters collected in situ shows that the pore waters extracted at these pressures are representative of the Opalinus Clay formation. (Author)

  8. Enhanced nonlinear iterative techniques applied to a non-equilibrium plasma flow

    Energy Technology Data Exchange (ETDEWEB)

    Knoll, D.A.; McHugh, P.R. [Idaho National Engineering Lab., Idaho Falls, ID (United States)

    1996-12-31

    We study the application of enhanced nonlinear iterative methods to the steady-state solution of a system of two-dimensional convection-diffusion-reaction partial differential equations that describe the partially-ionized plasma flow in the boundary layer of a tokamak fusion reactor. This system of equations is characterized by multiple time and spatial scales, and contains highly anisotropic transport coefficients due to a strong imposed magnetic field. We use Newton's method to linearize the nonlinear system of equations resulting from an implicit, finite volume discretization of the governing partial differential equations, on a staggered Cartesian mesh. The resulting linear systems are neither symmetric nor positive definite, and are poorly conditioned. Preconditioned Krylov iterative techniques are employed to solve these linear systems. We investigate both a modified and a matrix-free Newton-Krylov implementation, with the goal of reducing CPU cost associated with the numerical formation of the Jacobian. A combination of a damped iteration, one-way multigrid and a pseudo-transient continuation technique are used to enhance global nonlinear convergence and CPU efficiency. GMRES is employed as the Krylov method with Incomplete Lower-Upper (ILU) factorization preconditioning. The goal is to construct a combination of nonlinear and linear iterative techniques for this complex physical problem that optimizes trade-offs between robustness, CPU time, memory requirements, and code complexity. It is shown that a one-way multigrid implementation provides significant CPU savings for fine grid calculations. Performance comparisons of the modified Newton-Krylov and matrix-free Newton-Krylov algorithms will be presented.
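The matrix-free Newton-Krylov idea — never forming the Jacobian, with GMRES-type inner iterations driven only by residual evaluations — can be illustrated with SciPy's `newton_krylov` on a toy 1D convection-diffusion-reaction analogue; the plasma equations, preconditioning and multigrid of the paper are far beyond this sketch.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 64
h = 1.0 / (n + 1)

def residual(u):
    """Steady 1D convection-diffusion-reaction analogue:
    -u'' + u*u' + u**2 = 1 on (0,1), with u(0) = u(1) = 0."""
    up = np.concatenate(([0.0], u, [0.0]))            # Dirichlet BCs
    d2 = (up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2   # diffusion
    d1 = (up[2:] - up[:-2]) / (2.0 * h)               # convection
    return -d2 + u * d1 + u**2 - 1.0

# Matrix-free Newton-Krylov: the Jacobian is only ever applied through
# finite-difference directional derivatives of the residual.
sol = newton_krylov(residual, np.zeros(n), method="lgmres", f_tol=1e-9)
```

For stiff, poorly conditioned systems like the paper's, an ILU-type preconditioner (passable via `inner_M` in SciPy) becomes essential for the inner Krylov solves.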

  9. Applying Squeezing Technique to Clayrocks: Lessons Learned from Experiments at Mont Terri Rock Laboratory

    International Nuclear Information System (INIS)

    Knowledge of the pore water chemistry in clay rock formations plays an important role in determining radionuclide migration in the context of nuclear waste disposal. Among the different in situ and ex situ techniques for pore water sampling in clay sediments and soils, the squeezing technique dates back 115 years. Although various studies have examined the reliability and representativeness of squeezed pore waters, most of them were performed on high-porosity, high-water-content, unconsolidated clay sediments. Very few tackled the analysis of squeezed pore water from low-porosity, low-water-content, highly consolidated clay rocks. In this work, a specially designed and fabricated one-dimensional compression cell with two-directional fluid flow was used to extract and analyse the pore water composition of Opalinus Clay core samples from Mont Terri (Switzerland). The reproducibility of the technique is good, and no ionic ultrafiltration, chemical fractionation or anion exclusion was found in the range of pressures analysed (70-200 MPa). Pore waters extracted in this range of pressures do not decrease in concentration, which would otherwise indicate dilution by mixing of the free pore water with the outer layers of double-layer water (Donnan water). A threshold (safety) squeezing pressure of 175 MPa was established to avoid membrane effects (ion filtering, anion exclusion, etc.) from clay particles induced by increasing pressures. Moreover, a direct comparison against borehole waters collected in situ shows that the pore waters extracted at these pressures are representative of the Opalinus Clay formation. (Author)

  10. Zero order and signal processing spectrophotometric techniques applied for resolving interference of metronidazole with ciprofloxacin in their pharmaceutical dosage form.

    Science.gov (United States)

    Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed

    2016-02-01

Four rapid, simple, accurate and precise spectrophotometric methods were used for the determination of ciprofloxacin in the presence of metronidazole as interference. The methods under study are area under the curve and simultaneous equation, in addition to smart signal processing techniques for manipulating ratio spectra, namely Savitzky-Golay filters and the continuous wavelet transform. All the methods were validated according to the ICH guidelines, where accuracy, precision and repeatability were found to be within the acceptable limits. The selectivity of the proposed methods was tested using laboratory-prepared mixtures and assessed by applying the standard addition technique. They can therefore be used for the routine analysis of ciprofloxacin in quality-control laboratories.
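As an illustration of the signal-processing step, a Savitzky-Golay filter can smooth or differentiate a ratio spectrum; the Gaussian bands and the divisor offset below are synthetic assumptions for illustration, not the actual drug spectra:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic absorbance bands (Gaussian, arbitrary units) -- not the real spectra
wavelengths = np.arange(220.0, 320.0)                        # nm, 1 nm step
analyte = np.exp(-((wavelengths - 270.0) / 12.0) ** 2)       # analyte band
interferent = 0.6 * np.exp(-((wavelengths - 255.0) / 9.0) ** 2)  # interferent band

# Ratio spectrum: mixture divided by a divisor spectrum of the interferent
ratio_clean = (analyte + interferent) / (interferent + 0.1)
rng = np.random.default_rng(0)
ratio_noisy = ratio_clean + rng.normal(0.0, 0.01, ratio_clean.size)

# Savitzky-Golay smoothing, and the first derivative of the ratio spectrum,
# which removes the constant contribution of the divisor component
ratio_smooth = savgol_filter(ratio_noisy, window_length=11, polyorder=3)
ratio_deriv = savgol_filter(ratio_noisy, window_length=11, polyorder=3, deriv=1)
```

The derivative of the ratio spectrum is then read at wavelengths where the interferent contribution cancels, which is the essence of ratio-spectra manipulation methods.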

  12. Evaluation of Bending Strength in Friction Welded Alumina/mild Steel Joints by Applying Factorial Technique

    Science.gov (United States)

    Jesudoss Hynes, N. Rajesh; Nagaraj, P.; Vivek Prabhu, M.

Joining of metals with ceramics has become significant in many applications, because such joints combine properties like ductility with high hardness and wear resistance. By the friction welding technique, alumina can be joined to mild steel using a 1 mm thick AA1100 sheet as an interlayer. In the present work, the effect of friction time on interlayer thickness reduction and bending strength is investigated by factorial design. Regression modeling is carried out using ANOVA, a statistical tool. The regression model predicts the bending strength of welded ceramic/metal joints accurately, within ± 2% deviation from the experimental values.

  13. Full-field speckle correlation technique as applied to blood flow monitoring

    Science.gov (United States)

    Vilensky, M. A.; Agafonov, D. N.; Timoshina, P. A.; Shipovskaya, O. V.; Zimnyakov, D. A.; Tuchin, V. V.; Novikov, P. A.

    2011-03-01

The results of an experimental study of monitoring the microcirculation in superficial tissue layers of the internal organs at gastro-duodenal hemorrhage, using the laser speckle contrast analysis technique, are presented. The microcirculation monitoring was performed in real time in the course of laparotomy of the rat abdominal cavity. Microscopic hemodynamics was analyzed for the small intestine and stomach under different conditions (normal state, provoked ischemia, administration of vasodilative agents such as papaverine and lidocaine). The prospects and problems of internal monitoring of micro-vascular flow in clinical conditions are discussed.
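At its core, laser speckle contrast analysis reduces to a local sigma-over-mean statistic computed over a sliding window; a minimal sketch on synthetic speckle (not the experimental data):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast_map(image, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window; lower K
    means more blurring of the speckle pattern, i.e. faster flow."""
    patches = sliding_window_view(image, (window, window))
    return patches.std(axis=(-2, -1)) / (patches.mean(axis=(-2, -1)) + 1e-12)

# Synthetic frames: fully developed static speckle has K close to 1, while
# averaging over decorrelating frames (moving scatterers) lowers K
rng = np.random.default_rng(1)
static = rng.exponential(1.0, (64, 64))
flowing = 0.5 * (static + rng.exponential(1.0, (64, 64)))
K_static = float(speckle_contrast_map(static).mean())
K_flowing = float(speckle_contrast_map(flowing).mean())
```

Mapping K to an absolute perfusion value requires a flow model and calibration, which this sketch does not attempt.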

  14. Photon Counting Optical Time Domain Reflectometry Applying a Single Photon Modulation Technique

    Institute of Scientific and Technical Information of China (English)

    WANG Xiao-Bo; WANG Jing-Jing; HE Bo; XIAO Lian-Tuan; JIA Suo-Tang

    2011-01-01

Photon-counting optical time domain reflectometry (ν-OTDR) is typically used in a mode with spatial resolution in the centimeter range. Here we demonstrate a 1550 nm ν-OTDR system that optimizes the discriminator voltage of a single photon avalanche detector using a single photon modulation and demodulation technique, which shows an obvious improvement in signal intensity. The signal intensity is doubled when the discriminator voltage is optimized from 184 mV to 162 mV.

  15. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    Science.gov (United States)

    Mog, Robert A.; Helba, Michael J.; Hill, Janeil B.

    1992-01-01

    The purpose of this research is to provide Space Station Freedom protective structures design insight through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. The goals of the research are: (1) to develop a Monte Carlo simulation tool which will provide top level insight for Space Station protective structures designers; (2) to develop advanced shielding concepts relevant to Space Station Freedom using unique multiple bumper approaches; and (3) to investigate projectile shape effects on protective structures design.

  16. Nuclear analytical techniques applied to characterization of atmospheric aerosols in Amazon Region

    International Nuclear Information System (INIS)

This work presents the characterization of atmospheric aerosols from different regions of the Amazon basin. The biogenic aerosol emission by the forest, as well as the atmospheric emissions of particulate materials due to biomass burning, were analyzed. Samples of aerosol particles were collected over three years at two different locations in the Amazon region using stacked filter units. These samples were analyzed using nuclear analytical techniques. High concentrations of aerosols resulting from the biomass burning process were observed in the period June-September

  17. Applying Intelligent Computing Techniques to Modeling Biological Networks from Expression Data

    Institute of Scientific and Technical Information of China (English)

    Wei-Po Lee; Kung-Cheng Yang

    2008-01-01

Constructing biological networks is one of the most important issues in systems biology. However, constructing a network from data manually takes a considerable amount of time; therefore an automated procedure is advocated. To automate the procedure of network construction, in this work we use two intelligent computing techniques, genetic programming and neural computation, to infer two kinds of network models that use continuous variables. To verify the presented approaches, experiments have been conducted and the preliminary results show that both approaches can be used to infer networks successfully.

  18. Fragrance composition of Dendrophylax lindenii (Orchidaceae) using a novel technique applied in situ

    Directory of Open Access Journals (Sweden)

    James J. Sadler

    2012-02-01

Full Text Available The ghost orchid, Dendrophylax lindenii (Lindley) Bentham ex Rolfe (Orchidaceae), is one of North America's rarest and best-known orchids. Native to Cuba and SW Florida, where it frequents shaded swamps as an epiphyte, the species has experienced steady decline. Little information exists on D. lindenii's biology in situ, raising conservation concerns. During the summer of 2009, at an undisclosed population in Collier County, FL, a substantial number (ca. 13) of plants initiated anthesis, offering a unique opportunity to study this species in situ. We report a new technique aimed at capturing the floral headspace of D. lindenii in situ, and identified volatile compounds using gas chromatography mass spectrometry (GC/MS). All components of the floral scent were identified as terpenoids with the exception of methyl salicylate. The most abundant compound was the sesquiterpene (E,E)-α-farnesene (71%), followed by (E)-β-ocimene (9%) and methyl salicylate (8%). Other compounds were: linalool (5%), sabinene (4%), (E)-α-bergamotene (2%), α-pinene (1%), and 3-carene (1%). Interestingly, (E,E)-α-farnesene has previously been associated with pestiferous insects (e.g., Hemiptera). The other compounds are common floral scent constituents in other angiosperms, suggesting that our in situ technique was effective. Volatile capture was, therefore, possible without imposing physical harm (e.g., inflorescence detachment) to this rare orchid.

  19. A New Astrometric Technique Applied to the Likely Tidal Disruption Event, Swift J1644+57

    Science.gov (United States)

    Alianora Hounsell, Rebekah; Fruchter, Andrew S.; Levan, Andrew J.

    2015-01-01

We have developed a new technique to align Hubble Space Telescope (HST) data using background galaxies as astrometric markers. This technique involves the cross correlation of cutouts of regions about individual galaxies from different epochs, enabling the determination of an astrometric solution. The method avoids errors introduced by proper motion when the locations of stars are used to transform the images. We have used this approach to investigate the nature of the unusual gamma-ray source Sw J1644+57, which was initially classified as a long gamma-ray burst (LGRB). However, due to the object's atypical behavior in the X-ray and optical, along with its location within the host (150 ± 150 pc; see Levan et al. 2011), it has been suggested that the transient may be caused by a tidal disruption event (TDE). Other theories for its origin remain based on the collapsar model for a long burst, such as the collapse of a red giant, rather than a stripped star as is typical in LGRBs, or the creation of a magnetar. Precise astrometry of the transient with respect to the galaxy can potentially distinguish between these scenarios. Here we show that our method of alignment dramatically reduces the astrometric error of the position of the transient with respect to the nucleus of the host. We therefore discuss the implications of our result for the astrophysical nature of the object.
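The cutout cross-correlation step can be sketched as follows; this is a minimal FFT-based illustration on a synthetic blob standing in for a galaxy cutout, not the authors' HST pipeline:

```python
import numpy as np

def cutout_shift(ref, img):
    """Shift (dy, dx) of `img` relative to `ref` from the peak of their
    FFT-based cross-correlation (integer-pixel precision)."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold peaks past the array midpoint back to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Hypothetical galaxy cutouts: a Gaussian blob, and the same blob shifted
yy, xx = np.mgrid[0:64, 0:64]
galaxy = np.exp(-((yy - 30) ** 2 + (xx - 34) ** 2) / 20.0)
shifted = np.roll(galaxy, (5, -3), axis=(0, 1))
```

Sub-pixel astrometry would additionally fit the correlation peak (e.g. with a centroid or paraboloid), and a full solution averages the shifts measured over many galaxy cutouts.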

  20. Therapeutic techniques applied in the heavy-ion therapy at IMP

    Science.gov (United States)

    Li, Qiang; Sihver, Lembit

    2011-04-01

    Superficially-placed tumors have been treated with carbon ions at the Institute of Modern Physics (IMP), Chinese Academy of Sciences (CAS), since November 2006. Up to now, 103 patients have been irradiated in the therapy terminal of the heavy ion research facility in Lanzhou (HIRFL) at IMP, where carbon-ion beams with energies up to 100 MeV/u can be supplied and a passive beam delivery system has been developed and commissioned. A number of therapeutic and clinical experiences concerning heavy-ion therapy have been acquired at IMP. To extend the heavy-ion therapy project to deep-seated tumor treatment, a horizontal beam line dedicated to this has been constructed in the cooling storage ring (CSR), which is a synchrotron connected to the HIRFL as an injector, and is now in operation. Therapeutic high-energy carbon-ion beams, extracted from the HIRFL-CSR through slow extraction techniques, have been supplied in the deep-seated tumor therapy terminal. After the beam delivery, shaping and monitoring devices installed in the therapy terminal at HIRFL-CSR were validated through therapeutic beam tests, deep-seated tumor treatment with high-energy carbon ions started in March 2009. The therapeutic techniques in terms of beam delivery system, conformal irradiation method and treatment planning used at IMP are introduced in this paper.

  1. Hyphenated GC-FTIR and GC-MS techniques applied in the analysis of bioactive compounds

    Science.gov (United States)

    Gosav, Steluta; Paduraru, Nicoleta; Praisler, Mirela

    2014-08-01

The drugs of abuse, which affect human nature and cause numerous crimes, have become a serious problem throughout the world. There are hundreds of amphetamine analogues on the black market. They consist of various alterations of the basic amphetamine molecular structure, which are not yet included in the lists of forbidden compounds although they retain or slightly modify the hallucinogenic effects of their parent compound. It is this great variety that makes their identification quite a challenge. A number of analytical procedures for the identification of amphetamines and their analogues have recently been reported. We present the profile of the main hallucinogenic amphetamines obtained with the hyphenated techniques that are recommended for the identification of illicit amphetamines, i.e. gas chromatography combined with mass spectrometry (GC-MS) and gas chromatography coupled with Fourier transform infrared spectrometry (GC-FTIR). The infrared spectra of the analyzed hallucinogenic amphetamines present some absorption bands (1490 cm-1, 1440 cm-1, 1245 cm-1, 1050 cm-1 and 940 cm-1) that are very stable in position and shape, while their intensity depends on the side-chain substitution. The specific ionic fragment of the studied hallucinogenic compounds is the 3,4-methylenedioxybenzyl cation (m/e = 135), which has a small relative abundance (less than 20%). The complementarity of the above mentioned techniques for the identification of hallucinogenic compounds is discussed.

  2. Predicting Performance of Schools by Applying Data Mining Techniques on Public Examination Results

    Directory of Open Access Journals (Sweden)

    J. Macklin Abraham Navamani

    2015-02-01

Full Text Available This study presents a systematic analysis of various features of the higher grade school public examination results data in the state of Tamil Nadu, India, using different data mining classification algorithms to predict the performance of schools. Nowadays, parents aim to select the right city and school, and the factors which contribute to the success of their children's results. Factors such as ethnic mix, medium of study, and geography could make a difference in results. The proposed work focuses on two aspects: machine learning algorithms to predict school performance with satisfactory accuracy, and an evaluation of which data mining technique gives the better accuracy among the learning algorithms. It was found that there exist some apparent and some less noticeable attributes that demonstrate a strong correlation with student performance. Data were collected from a credible source, followed by data preparation and correlation analysis. The findings revealed that the public examination results data was a very helpful predictor of the performance of a school, and the overall accuracy was improved with the help of the AdaBoost technique.
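The AdaBoost step mentioned in the conclusion can be illustrated with a self-contained decision-stump booster; this is a generic sketch on toy one-dimensional data, since the study's actual features and base learners are not reproduced here:

```python
import numpy as np

def adaboost_fit(X, y, n_rounds=3):
    """AdaBoost over decision stumps; labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                      # best weighted stump, exhaustive search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        pred = sign * np.where(X[:, j] <= thr, 1, -1)
        w = w * np.exp(-alpha * y * pred)       # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def adaboost_predict(ensemble, X):
    score = np.zeros(X.shape[0])
    for alpha, j, thr, sign in ensemble:
        score += alpha * sign * np.where(X[:, j] <= thr, 1, -1)
    return np.sign(score)

# Toy 1-D data: label +1 inside the interval (2, 6), -1 outside; no single
# stump separates this, but three boosted stumps do
X_toy = np.array([[0.0], [1.0], [2.5], [3.0], [5.0], [5.5], [7.0], [8.0]])
y_toy = np.array([-1, -1, 1, 1, 1, 1, -1, -1])
model = adaboost_fit(X_toy, y_toy, n_rounds=3)
```

In practice one would use a library implementation (e.g. scikit-learn's `AdaBoostClassifier`) with proper train/test splits rather than this minimal version.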

  3. Applying Tiab’s direct synthesis technique to dilatant non-Newtonian/Newtonian fluids

    Directory of Open Access Journals (Sweden)

    Javier Andrés Martínez

    2011-08-01

Full Text Available Non-Newtonian fluids, such as polymer solutions, have been used by the oil industry for many years as fracturing agents and drilling muds. These solutions, which normally include thickened water and gelled fluids, are injected into the formation to enhance oil recovery by improving sweep efficiency. It is worth noting that some heavy oils behave in a non-Newtonian manner. Non-Newtonian fluids do not exhibit direct proportionality between applied shear stress and shear rate, and viscosity varies with shear rate depending on whether the fluid is pseudoplastic or dilatant. Viscosity decreases as shear rate increases for the former, whilst the reverse takes place for dilatant fluids. Mathematical models for conventional fluids thus fail when applied to non-Newtonian fluids. The pressure derivative curve is introduced in this descriptive work for a dilatant fluid and its pattern observed. Tiab's direct synthesis (TDS) methodology was used as a tool for interpreting pressure transient data to estimate effective permeability, skin factors and non-Newtonian bank radius. The methodology was successfully verified by application to synthetic examples. Also, compared to pseudoplastic behavior, it was found that the radial flow regime in the Newtonian zone of dilatant fluids takes longer to form, with respect to both the flow behavior index and the consistency factor.
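The pseudoplastic/dilatant distinction above follows from the power-law (Ostwald-de Waele) model; a small sketch with illustrative K and n values (not fitted to any field data):

```python
import numpy as np

# Ostwald-de Waele power-law model: shear stress tau = K * gamma_dot**n, so the
# apparent viscosity is mu_app = K * gamma_dot**(n - 1).
# n > 1: dilatant (shear-thickening); n < 1: pseudoplastic (shear-thinning).
def apparent_viscosity(gamma_dot, K, n):
    return K * gamma_dot ** (n - 1.0)

shear_rates = np.logspace(0, 3, 50)                      # 1/s
mu_dilatant = apparent_viscosity(shear_rates, K=0.05, n=1.4)
mu_pseudoplastic = apparent_viscosity(shear_rates, K=0.05, n=0.6)
```

The consistency factor K and flow behavior index n are exactly the two parameters whose influence on the Newtonian-zone radial flow regime the abstract discusses.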

  4. Background Noise Reduction in Wind Tunnels using Adaptive Noise Cancellation and Cepstral Echo Removal Techniques for Microphone Array Applications

    OpenAIRE

    Spalt, Taylor B

    2010-01-01

    Two experiments were conducted to investigate Adaptive Noise Cancelling and Cepstrum echo removal post-processing techniques on acoustic data from a linear microphone array in an anechoic chamber. A point source speaker driven with white noise was used as the primary signal. The first experiment included a background speaker to provide interference noise at three different Signal-to-Noise Ratios to simulate noise propagating down a wind tunnel circuit. The second experiment contained only the...

  5. Radiation treatment for the right naris in a pediatric anesthesia patient using an adaptive oral airway technique

    Energy Technology Data Exchange (ETDEWEB)

    Sponseller, Patricia, E-mail: sponselp@uw.edu; Pelly, Nicole; Trister, Andrew; Ford, Eric; Ermoian, Ralph

    2015-10-01

    Radiation therapy for pediatric patients often includes the use of intravenous anesthesia with supplemental oxygen delivered via the nasal cannula. Here, we describe the use of an adaptive anesthesia technique for electron irradiation of the right naris in a preschool-aged patient treated under anesthesia. The need for an intranasal bolus plug precluded the use of standard oxygen supplementation. This novel technique required the multidisciplinary expertise of anesthesiologists, radiation therapists, medical dosimetrists, medical physicists, and radiation oncologists to ensure a safe and reproducible treatment course.

  6. Mass Movement Hazards in the Mediterranean; A review on applied techniques and methodologies

    Science.gov (United States)

    Ziade, R.; Abdallah, C.; Baghdadi, N.

    2012-04-01

Growing populations and the expansion of settlements and life-lines over hazardous areas in the Mediterranean region have largely increased the impact of Mass Movements (MM) in both industrialized and developing countries. This trend is expected to continue in the next decades due to increased urbanization and development, continued deforestation and increased regional precipitation in MM-prone areas due to changing climatic patterns. Consequently, over the past few years, monitoring of MM has acquired great importance for the scientific community as well as the civilian one. This article begins with a discussion of MM classification and the different topographic, geologic, hydrologic and environmental impacting factors. The intrinsic (preconditioning) variables determine the susceptibility of MM, and extrinsic (triggering) factors can induce the probability of MM occurrence. The evolution of slope instability studies is charted from geodetic or observational techniques, through geotechnical field-based origins, to recent higher levels of data acquisition through Remote Sensing (RS) and Geographic Information System (GIS) techniques. Since MM detection and zoning are difficult in remote areas, RS and GIS have enabled regional studies to predominate over site-based ones, as they provide multi-temporal images and hence greatly facilitate MM monitoring. The unusual extent of the spectrum of MM makes it difficult to define a single methodology to establish MM hazard. Since the probability of occurrence of MM is one of the key components in making rational decisions for the management of MM risk, scientists and engineers have developed physical parameters, equations and environmental process models that can be used as assessment tools for management, education, planning and legislative purposes. Assessment of MM is attained through various modeling approaches mainly divided into three main sections: quantitative/Heuristic (1:2.000-1:10.000), semi-quantitative/Statistical (1

  7. A comparison of new, old and future densitometric techniques as applied to volcanologic study.

    Science.gov (United States)

    Pankhurst, Matthew; Moreland, William; Dobson, Kate; Þórðarson, Þorvaldur; Fitton, Godfrey; Lee, Peter

    2015-04-01

The density of any material imposes a primary control upon its potential or actual physical behaviour in relation to its surrounds. It follows that a thorough understanding of the physical behaviour of dynamic, multi-component systems, such as active volcanoes, requires knowledge of the density of each component. If we are to accurately predict the physical behaviour of synthesized or natural volcanic systems, quantitative densitometric measurements are vital. The theoretical density of melt, crystals and bubble phases may be calculated using composition, structure, temperature and pressure inputs. However, measuring the density of natural, non-ideal, poly-phase materials remains problematic, especially if phase-specific measurement is important. Here we compare three methods: Archimedes principle, He-displacement pycnometry and X-ray micro computed tomography (XMT), and discuss the utility and drawbacks of each in the context of modern volcanologic study. We have measured tephra, ash and lava from the 934 AD Eldgjá eruption (Iceland) and the 2010 AD Eyjafjallajökull eruption (Iceland), using each technique. These samples exhibit a range of particle sizes, phases and textures. We find that while the Archimedes method remains a useful, low-cost technique to generate whole-rock density data, relative precision is problematic at small particle sizes. Pycnometry offers a more precise whole-rock density value, at a comparable cost-per-sample. However, this technique is based upon the assumption that pore spaces within the sample are equally available for gas exchange, which may or may not be the case. XMT produces 3D images, at resolutions from nm to tens of µm per voxel, where X-ray attenuation is a qualitative measure of relative electron density, expressed as greyscale number/brightness (usually 16-bit). Phases and individual particles can be digitally segmented according to their greyscale and other characteristics. This represents a distinct advantage over both
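The Archimedes method reduces to a one-line density estimate from the dry and fluid-submerged masses; a sketch with hypothetical masses (not the measured Eldgjá or Eyjafjallajökull values):

```python
# Archimedes' principle: the buoyant mass deficit equals the mass of fluid
# displaced, so volume = (m_dry - m_submerged) / rho_fluid and
# rho_sample = m_dry / volume. Masses in g, densities in g/cm^3.
def archimedes_density(m_dry, m_submerged, rho_fluid=0.9982):  # water at 20 C
    volume = (m_dry - m_submerged) / rho_fluid
    return m_dry / volume

# Hypothetical clast: 27.0 g dry, 17.0 g suspended in water taken as 1.0 g/cm^3
rho = archimedes_density(27.0, 17.0, rho_fluid=1.0)
```

The precision problem noted above for small particles follows directly from this formula: the mass difference in the denominator shrinks with sample size, so its relative error grows.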

  8. A Comparative Study between Moravec and Harris Corner Detection of Noisy Images Using Adaptive Wavelet Thresholding Technique

    OpenAIRE

    Dey, Nilanjan; Nandi, Pradipti; Barman, Nilanjana; Das, Debolina; Chakraborty, Subhabrata

    2012-01-01

In this paper a comparative study between Moravec and Harris corner detection has been carried out to obtain the features required to track and recognize objects within a noisy image. Corner detection in noisy images is a challenging task in image processing. Natural images often get corrupted by noise during acquisition and transmission. Since corner detection of these noisy images does not provide the desired results, de-noising is required. An adaptive wavelet thresholding approach is applied for the...
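As a sketch of the Harris detector named in the title, the corner response can be computed from image gradients and a windowed structure tensor; this is a minimal NumPy illustration on a synthetic image, and the paper's wavelet de-noising stage is omitted:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def harris_response(img, k=0.04, window=3):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor summed over a local window."""
    Iy, Ix = np.gradient(img.astype(float))      # central-difference gradients
    def box_sum(a):
        return sliding_window_view(a, (window, window)).sum(axis=(-2, -1))
    Sxx, Syy, Sxy = box_sum(Ix * Ix), box_sum(Iy * Iy), box_sum(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2                  # positive at corners, negative on edges

# Synthetic test image: a bright square whose top-left corner is at (8, 8)
img = np.zeros((20, 20))
img[8:, 8:] = 1.0
R = harris_response(img)
```

A practical detector would additionally use Gaussian weighting of the window and non-maximum suppression of the response map; the box window here keeps the sketch short.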

  9. An Encoding Technique for Multiobjective Evolutionary Algorithms Applied to Power Distribution System Reconfiguration

    Directory of Open Access Journals (Sweden)

    J. L. Guardado

    2014-01-01

    Full Text Available Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2 and the Nondominated Sorting Genetic Algorithm II (NSGA-II. The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.

  10. Nuclear analytical techniques applied to forensic chemistry; Aplicacion de tecnicas analiticas nucleares en quimica forense

    Energy Technology Data Exchange (ETDEWEB)

    Nicolau, Veronica; Montoro, Silvia [Universidad Nacional del Litoral, Santa Fe (Argentina). Facultad de Ingenieria Quimica. Dept. de Quimica Analitica; Pratta, Nora; Giandomenico, Angel Di [Consejo Nacional de Investigaciones Cientificas y Tecnicas, Santa Fe (Argentina). Centro Regional de Investigaciones y Desarrollo de Santa Fe

    1999-11-01

Gun shot residues produced by firing guns are mainly composed of visible particles. The individual characterization of these particles allows distinguishing those containing heavy metals, originating from gun shot residues, from those having a different origin or history. In this work, the results obtained from the study of gun shot residue particles collected from hands are presented. The aim of the analysis is to establish whether a person has fired a gun, or has been in contact with one after a shot was fired. As reference samples, particles collected from the hands of persons engaged in different activities were studied for comparison. The complete study was based on the application of nuclear analytical techniques such as Scanning Electron Microscopy, Energy Dispersive X Ray Electron Probe Microanalysis and Graphite Furnace Atomic Absorption Spectrometry. The assays can be completed within a time compatible with forensic requirements. (author) 5 refs., 3 figs., 1 tab.; e-mail: csedax e adigian at arcride.edu.ar

  11. Automatic diameter control system applied to the laser heated pedestal growth technique

    Directory of Open Access Journals (Sweden)

    Andreeta M.R.B.

    2003-01-01

Full Text Available We describe an automatic diameter control system (ADC) for the laser heated pedestal growth technique that reduces the diameter fluctuations in oxide fibers grown from unreacted and non-sinterized pedestals to less than 2% of the average fiber diameter, and diminishes the average diameter fluctuation, over the entire length of the fiber, to less than 1%. The ADC apparatus is based on an artificial vision system that controls the pulling speed and the height of the molten zone within a precision of 30 µm. We also show that this system can be used for periodic in situ axial doping of the fiber. Pure and Cr3+-doped LaAlO3 and pure LiNbO3 were used as model materials.

  12. Models of signal validation using artificial intelligence techniques applied to a nuclear reactor

    International Nuclear Information System (INIS)

This work presents two models of signal validation in which the analytical redundancy of the monitored signals from a nuclear plant is provided by neural networks. In one model the analytical redundancy is provided by a single neural network, while in the other it is provided by several neural networks, each one working in a specific part of the entire operating region of the plant. Four clustering techniques were tested to separate the entire operating region into several specific regions. Additional information on signal reliability is supplied by a fuzzy inference system. The models were implemented in C and tested with signals acquired from the Angra I nuclear power plant, from start-up to 100% power. (author)

  13. Acoustic emission partial discharge detection technique applied to fault diagnosis: Case studies of generator transformers

    Directory of Open Access Journals (Sweden)

    Shanker Tangella Bhavani

    2016-01-01

Full Text Available In power transformers, locating the partial discharge (PD) source is as important as identifying it. Acoustic Emission (AE) sensing offers a good solution for both PD detection and PD source location identification. In this paper the principle of the AE technique is discussed, along with in-situ findings of online acoustic emission signals captured from partial discharges on a number of Generator Transformers (GT). Of the two cases discussed, the first deals with Acoustic Emission Partial Discharge (AEPD) tests on two identical transformers, and the second deals with the AEPD measurements of a transformer carried out on different occasions (years). These transformers are from a hydropower station and a thermal power station in India. Tests conducted on identical transformers provide a basis for comparing AE signal amplitudes from the two transformers. These case studies also help in assessing the efficacy of integrating Dissolved Gas Analysis (DGA) data with AEPD test results in detecting and locating the PD source.

  14. Vibroacoustic Modeling of Mechanically Coupled Structures: Artificial Spring Technique Applied to Light and Heavy Mediums

    Directory of Open Access Journals (Sweden)

    L. Cheng

    1996-01-01

    Full Text Available This article deals with the modeling of vibrating structures immersed in both light and heavy fluids, and possible applications to noise control problems and industrial vessels containing fluids. A theoretical approach, using artificial spring systems to characterize the mechanical coupling between substructures, is extended to include fluid loading. A structure consisting of a plate-ended cylindrical shell and its enclosed acoustic cavity is analyzed. After a brief description of the proposed technique, a number of numerical results are presented. The analysis addresses the following specific issues: the coupling between the plate and the shell; the coupling between the structure and the enclosure; the possibilities and difficulties regarding internal soundproofing through modifications of the joint connections; and the effects of fluid loading on the vibration of the structure.

  15. Emerging and Innovative Techniques for Arsenic Removal Applied to a Small Water Supply System

    Directory of Open Access Journals (Sweden)

    António J. Alçada

    2009-12-01

Full Text Available The impact of arsenic on human health has led its drinking water MCL to be drastically reduced from 50 to 10 ppb. Consequently, arsenic levels in many water supply sources have become critical. This has resulted in technical and operational impacts on many drinking water treatment plants, which have required onerous upgrading to meet the new standard. This becomes a very sensitive issue in the context of water scarcity and climate change, given the expected increasing demand on groundwater sources. This work presents a case study that describes the development of low-cost techniques for efficient arsenic control in drinking water. The results obtained at the Manteigas WTP (Portugal) demonstrate the successful implementation of an effective and flexible process of reactive filtration using iron oxide. At full scale, very high removal efficiencies of over 95% were obtained.

  16. Applying the sterile insect technique to the control of insect pests

    International Nuclear Information System (INIS)

The sterile insect technique (SIT) is basically a novel twentieth century approach to insect birth control. It is species specific and exploits the mate seeking behaviour of the insect. The basic principle is simple. Insects are mass reared in 'factories' and sexually sterilized by gamma rays from a 60Co source. The sterile insects are then released in a controlled fashion into nature. Matings between the released sterile insects and native insects produce no progeny. If enough of these matings take place, reproduction of the pest population decreases. With continued release, the pest population can be controlled and in some cases eradicated. In the light of the many important applications of the SIT worldwide and the great potential that SIT concepts hold for insect and pest control in developing countries, two special benefits should be stressed. Of greatest significance is the fact that the SIT permits suppression and eradication of insect pests in an environmentally harmless manner. It combines nuclear techniques with genetic approaches and, in effect, replaces intensive use of chemicals in pest control. Although chemicals are used sparingly at the outset in some SIT programmes to reduce the size of the pest population before releases of sterilized insects are started, the total amount of chemicals used in an SIT programme is a mere fraction of what would be used without the SIT. It is also of great importance that the SIT is not designed strictly for the eradication of pest species but can readily be used in the suppression of insect populations. In fact, the SIT is ideally suited for use in conjunction with other agricultural pest control practices such as the use of parasites and predators, attractants and cultural controls (e.g. ploughing under or destruction of crop residues) in integrated pest management programmes to achieve control at the lowest possible price and with a minimum of chemical contamination of the environment

  17. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    Science.gov (United States)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
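The spline-based reduction above can be illustrated with a minimal sketch (synthetic T-delta data, noise level, and smoothing parameters are assumptions, not the paper's lunar data): a smoothing spline is fitted to a noisy travel-time curve, and the ray parameter p = dT/d(delta) and delay time tau = T - p*delta are read off from the spline and its derivative.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical illustration: fit a smoothing spline to a noisy synthetic
# T-delta curve, then derive p = dT/d(delta) and tau = T - p*delta.
rng = np.random.default_rng(0)
delta = np.linspace(1.0, 90.0, 60)                 # epicentral distance (deg)
t_true = 10.0 * np.sqrt(delta)                     # smooth synthetic travel times (s)
t_obs = t_true + rng.normal(0.0, 0.2, delta.size)  # add picking noise

# The smoothing factor s bounds the misfit, suppressing the oscillations an
# exact (unconstrained) interpolant would introduce near a triplication.
spl = UnivariateSpline(delta, t_obs, k=3, s=delta.size * 0.2 ** 2)

p = spl.derivative()(delta)       # ray parameter p = dT/d(delta)
tau = spl(delta) - p * delta      # delay time tau(p) = T - p*delta
```

The smoothing constraint is what makes the fitted model "lie approximately in the middle" of the data scatter rather than chasing every noisy pick.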

  18. Contrast cancellation technique applied to digital x-ray imaging using silicon strip detectors

    International Nuclear Information System (INIS)

Dual-energy mammographic imaging experimental tests have been performed using a compact dichromatic imaging system based on a conventional x-ray tube, a mosaic crystal, and a 384-strip silicon detector equipped with full-custom electronics with single photon counting capability. For simulating breast tissue, a three-component phantom, made of Plexiglass, polyethylene, and water, has been used. Images have been collected with three different pairs of x-ray energies: 16-32 keV, 18-36 keV, and 20-40 keV. A Monte Carlo simulation of the experiment has also been carried out using the MCNP-4C transport code. The Alvarez-Macovski algorithm has been applied both to experimental and simulated data to remove the contrast between two of the phantom materials so as to enhance the visibility of the third one.
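The basis-decomposition step behind this kind of contrast cancellation can be sketched as follows. This is a simplified stand-in for the Alvarez-Macovski approach, not the experiment's calibration: the attenuation coefficients and thickness maps are invented, and each pixel's two-energy log-attenuation is inverted as a 2x2 linear system so that one material's contrast can be displayed with the others removed.

```python
import numpy as np

# Invented attenuation coefficients: rows = energy (low, high),
# columns = basis material (A, B). Values are illustrative only.
mu = np.array([[0.50, 0.30],
               [0.25, 0.20]])

# Synthetic phantom: thickness maps (cm) of the two basis materials.
rng = np.random.default_rng(1)
tA = rng.uniform(0.0, 2.0, (8, 8))
tB = rng.uniform(0.0, 2.0, (8, 8))

# Log-attenuation images at the two energies: L_E = muA(E)*tA + muB(E)*tB
L = np.einsum('em,mij->eij', mu, np.stack([tA, tB]))

# Per-pixel inversion recovers the basis thicknesses; an image of tA alone
# is the dual-energy image with the contrast of material B cancelled.
t_hat = np.einsum('me,eij->mij', np.linalg.inv(mu), L)
```

In practice the coefficients come from calibration measurements and the log-attenuations from photon counts, but the per-pixel linear inversion is the core of the cancellation.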

  19. Research on Key Techniques for Video Surveillance System Applied to Shipping Channel Management

    Institute of Scientific and Technical Information of China (English)

    WANG Lin; ZHUANG Yan-bin; ZHENG Cheng-zeng

    2007-01-01

A video patrol and inspection system is an important part of the government's shipping channel information management. This system is mainly applied to video information gathering and processing as a patrol is carried out. The system described in this paper can preview, edit, and add essential explanation messages to the collected video data. It then transfers these data and messages to a video server for the leaders and engineering and technical personnel to retrieve, play, chart, download or print. Each department of the government will use the system's functions according to that department's mission. The system can provide an effective means for managing the shipping enterprise. It also provides a valuable reference for the modernization of waterborne shipping.

  20. Linear and Non-Linear Control Techniques Applied to Actively Lubricated Journal Bearings

    DEFF Research Database (Denmark)

    Nicoletti, Rodrigo; Santos, Ilmar

    2003-01-01

The main objectives of actively lubricated bearings are the simultaneous reduction of wear and vibration between rotating and stationary machinery parts. For reducing wear and dissipating vibration energy up to certain limits, one can count on conventional hydrodynamic lubrication. For further reduction of shaft vibrations one can count on the active lubrication action, which is based on injecting pressurised oil into the bearing gap through orifices machined in the bearing sliding surface. The design and efficiency of some linear (PD, PI and PID) and non-linear controllers, applied to a tilting-pad journal bearing, are analysed and discussed. Important conclusions about the application of integral controllers, responsible for changing the rotor-bearing equilibrium position and consequently the "passive" oil film damping coefficients, are achieved. Numerical results show an effective...

  1. Periodic Noise Suppression from ECG Signal using Novel Adaptive Filtering Techniques

    Directory of Open Access Journals (Sweden)

    Yogesh Sharma

    2012-03-01

Full Text Available The electrocardiogram (ECG) is the most commonly recognized and used biomedical signal for medical examination of the heart. The ECG signal is very sensitive in nature: even if a small amount of noise mixes with the original signal, the various characteristics of the signal change. Data corrupted with noise must be either filtered or discarded, so filtering is an important design consideration for real-time heart monitoring systems. Various filters are used for removing noise from ECG signals; the most commonly used are notch filters, FIR filters, IIR filters, Wiener filters and adaptive filters. Performance analysis shows that the best results are obtained by using an adaptive filter to remove various noises from the ECG signal, with significant SNR and MSE improvements. In this paper a novel adaptive approach using the LMS algorithm and a delay is shown, which can be used for pre-processing of the ECG signal and gives appreciable results.
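The LMS-based adaptive noise cancellation described above can be sketched as follows. The signal shapes, filter order and step size are assumptions for illustration: a toy "ECG" is corrupted by a delayed copy of a periodic interference, and an LMS filter driven by the interference reference adapts so that its error output approximates the clean signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
t = np.arange(n)
ecg = np.sin(2 * np.pi * t / 100.0)            # toy stand-in for the ECG
noise_ref = np.sin(2 * np.pi * t / 12.0)       # reference periodic interference
primary = ecg + 0.8 * np.roll(noise_ref, 3)    # ECG plus delayed interference

def lms_cancel(primary, reference, order=8, mu=0.01):
    """Adaptive noise canceller: the error output approximates the clean signal."""
    w = np.zeros(order)
    out = np.zeros_like(primary)
    for i in range(order, len(primary)):
        x = reference[i - order:i][::-1]       # current tap vector
        y = w @ x                              # estimate of the interference
        e = primary[i] - y                     # error = cleaned sample
        w += 2 * mu * e * x                    # LMS weight update
        out[i] = e
    return out

clean = lms_cancel(primary, noise_ref)
```

After the weights converge, the residual between `clean` and the underlying ECG is much smaller than the original interference power.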

  2. Prediction of Quality Features in Iberian Ham by Applying Data Mining on Data From MRI and Computer Vision Techniques

    Directory of Open Access Journals (Sweden)

    Daniel Caballero

    2014-03-01

Full Text Available This paper aims to predict quality features of Iberian hams by using non-destructive methods of analysis and data mining. Iberian hams were analyzed by Magnetic Resonance Imaging (MRI) and Computer Vision Techniques (CVT) throughout their ripening process, and physico-chemical parameters from them were also measured. The obtained data were used to create an initial database. Deductive techniques of data mining (multiple linear regression) were used to estimate new data, allowing the insertion of new records in the database. Predictive techniques of data mining (multiple linear regression) were applied to MRI-CVT data, achieving prediction equations of weight, moisture and lipid content. Finally, data from the prediction equations were compared to data determined by physical-chemical analysis, obtaining high correlation coefficients in most cases. Therefore, data mining, MRI and CVT are suitable tools to estimate quality traits of Iberian hams. This would improve the control of ham processing in a non-destructive way.
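The multiple-linear-regression step can be sketched in a few lines. The features and coefficients below are invented stand-ins for the MRI-CVT data, not the study's dataset: a trait is regressed on two image-derived features and the fit is summarized by the correlation between measured and predicted values.

```python
import numpy as np

# Hypothetical example: predict a quality trait (e.g. moisture, %) from two
# image-derived features via multiple linear regression. Data are synthetic.
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, (40, 2))                  # stand-in MRI/CVT features
y = 55.0 - 12.0 * X[:, 0] + 6.0 * X[:, 1] + rng.normal(0.0, 0.5, 40)

A = np.column_stack([np.ones(len(X)), X])           # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)        # least-squares coefficients

y_hat = A @ coef                                    # predicted trait values
r = np.corrcoef(y, y_hat)[0, 1]                     # correlation coefficient
```

The same fitted equation can then be applied to new feature measurements, which is how a regression-derived "prediction equation" extends a database non-destructively.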

  3. Time-reversal imaging techniques applied to tremor waveforms near Cholame, California to locate tectonic tremor

    Science.gov (United States)

    Horstmann, T.; Harrington, R. M.; Cochran, E. S.

    2012-12-01

    Frequently, the lack of distinctive phase arrivals makes locating tectonic tremor more challenging than locating earthquakes. Classic location algorithms based on travel times cannot be directly applied because impulsive phase arrivals are often difficult to recognize. Traditional location algorithms are often modified to use phase arrivals identified from stacks of recurring low-frequency events (LFEs) observed within tremor episodes, rather than single events. Stacking the LFE waveforms improves the signal-to-noise ratio for the otherwise non-distinct phase arrivals. In this study, we apply a different method to locate tectonic tremor: a modified time-reversal imaging approach that potentially exploits the information from the entire tremor waveform instead of phase arrivals from individual LFEs. Time reversal imaging uses the waveforms of a given seismic source recorded by multiple seismometers at discrete points on the surface and a 3D velocity model to rebroadcast the waveforms back into the medium to identify the seismic source location. In practice, the method works by reversing the seismograms recorded at each of the stations in time, and back-propagating them from the receiver location individually into the sub-surface as a new source time function. We use a staggered-grid, finite-difference code with 2.5 ms time steps and a grid node spacing of 50 m to compute the rebroadcast wavefield. We calculate the time-dependent curl field at each grid point of the model volume for each back-propagated seismogram. To locate the tremor, we assume that the source time function back-propagated from each individual station produces a similar curl field at the source position. We then cross-correlate the time dependent curl field functions and calculate a median cross-correlation coefficient at each grid point. The highest median cross-correlation coefficient in the model volume is expected to represent the source location. 
For our analysis, we use the velocity model of
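A greatly simplified stand-in for the time-reversal location idea can be sketched in a homogeneous 2-D medium: records shifted by candidate travel times stack coherently only at the source. The velocity, station geometry and waveform below are assumptions; the study itself back-propagates full waveforms through a 3-D velocity model with a finite-difference code and cross-correlates curl fields, which this grid-stacking sketch only approximates.

```python
import numpy as np

v, fs, n = 3.0, 100.0, 600                      # velocity (km/s), sampling (Hz), samples
t = np.arange(n) / fs
src = np.array([5.0, 7.0])                      # true source position (km)
stations = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 0.]])

def trace(dist):
    """Gaussian arrival at origin time 1.0 s plus travel time dist/v."""
    return np.exp(-((t - 1.0 - dist / v) * 20.0) ** 2)

records = np.array([trace(np.linalg.norm(s - src)) for s in stations])

# Scan a grid of candidate source points; the records, sampled at each
# candidate's travel times, add up coherently only at the true source.
xs = ys = np.linspace(0.0, 10.0, 51)
best, best_xy = -1.0, None
for x in xs:
    for y in ys:
        stack = 0.0
        for s, rec in zip(stations, records):
            tau = np.linalg.norm(s - np.array([x, y])) / v
            stack += rec[int(round((1.0 + tau) * fs))]
        if stack > best:
            best, best_xy = stack, (x, y)
```

The grid point with the highest stacked amplitude plays the role of the highest median cross-correlation coefficient in the full method.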

  4. Use of pesticides and experience of applying radioisotope techniques in a developing country

    International Nuclear Information System (INIS)

    An evaluation is made of the use of pesticides by Panamanian farmers in a tropical environment, also covering pesticide residues in plant and animal products, man and soil. In addition, experience with radioisotope techniques is described. Chemical control is common practice among farmers. Each year, 5000 to 6000 t of pesticides are used, especially in horticulture and banana cultivation. Herbicides and insecticides predominate in terms of quantity, and fungicides in terms of frequency of application. Use of the so-called persistent organo-chlorines over the last few decades has led to the presence of residues in plant and animal products in amounts less than 2.2 and 0.1 mg/kg for DDT and lindane, respectively. An average of 11 mg of DDT per kilogram of fat has been detected in the population; about 50% of the persons handling agrochemicals showed direct exposure. Taking into account local practices and tropical conditions, an evaluation is being made of widely used pesticides (maneb, paraquat and 2,4-D) labelled with 14C. The studies have yielded additional information on the behaviour and the residues of these important additives in the environment and in fruits. (author). 3 refs, 1 fig., 5 tabs

  5. Super-ensemble techniques applied to wave forecast: performance and limitations

    Directory of Open Access Journals (Sweden)

    F. Lenartz

    2010-06-01

Full Text Available Nowadays, several operational ocean wave forecasts are available for the same region. These predictions may differ considerably, and choosing the best one is generally a difficult task. The super-ensemble approach, which consists of merging different forecasts and past observations into a single multi-model prediction system, is evaluated in this study. During the DART06 campaigns organized by the NATO Undersea Research Centre, four wave forecasting systems were simultaneously run in the Adriatic Sea, and significant wave height was measured at six stations as well as along the tracks of two remote sensors. This effort provided the necessary data set to compare the skills of various multi-model combination techniques. Our results indicate that a super-ensemble based on the Kalman filter improves the forecast skills: the bias during both the hindcast and forecast periods is reduced, and the correlation coefficient is similar to that of the best individual model. The spatial extrapolation of local results is not straightforward and requires further investigation to be properly implemented.
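A minimal Kalman-filter super-ensemble can be sketched with the model weights as the state vector, updated each time an observation arrives. All forecast and observation data below are synthetic stand-ins (biased copies of a "true" wave height), not the DART06 data, and the noise settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 300, 4                                   # time steps, number of wave models
truth = 2.0 + np.sin(np.arange(n) / 20.0)       # synthetic significant wave height
biases = np.array([0.4, -0.3, 0.1, 0.6])        # per-model systematic errors
forecasts = truth[:, None] + biases + rng.normal(0.0, 0.1, (n, m))
obs = truth + rng.normal(0.0, 0.05, n)

w = np.full(m, 1.0 / m)                         # initial equal weights
P = np.eye(m)                                   # weight covariance
Q, R = 1e-6 * np.eye(m), 0.05 ** 2              # process / observation noise

combined = np.zeros(n)
for k in range(n):
    P = P + Q                                   # random-walk forecast of the weights
    H = forecasts[k]                            # observation operator: weighted sum
    combined[k] = H @ w                         # super-ensemble prediction
    S = H @ P @ H + R
    K = P @ H / S                               # Kalman gain
    w = w + K * (obs[k] - H @ w)                # weight update from the innovation
    P = P - np.outer(K, H) @ P
```

Because the weights adapt to the models' biases, the combined prediction outperforms a plain equal-weight average once the filter has converged.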

  6. Erasing the Milky Way: new cleaning technique applied to GBT intensity mapping data

    CERN Document Server

    Wolz, L; Abdalla, F B; Anderson, C M; Chang, T -C; Li, Y -C; Masui, K W; Switzer, E; Pen, U -L; Voytek, T C; Yadav, J

    2015-01-01

We present the first application of a new foreground removal pipeline to the current leading HI intensity mapping dataset, obtained by the Green Bank Telescope (GBT). We study the 15hr and 1hr field data of the GBT observations previously presented in Masui et al. (2013) and Switzer et al. (2013), covering about 41 square degrees at 0.6 < z < 1.0, which overlap with the WiggleZ galaxy survey employed for cross-correlation with the maps. In the presented pipeline, we subtract the Galactic foreground continuum and point-source contamination using an independent component analysis technique (fastica) and develop a description for a Fourier-based optimal weighting estimator to compute the temperature power spectrum of the intensity maps and the cross-correlation with the galaxy survey data. We show that fastica is a reliable tool to subtract diffuse and point-source emission by using the non-Gaussian nature of their probability functions. The power spectra of the intensity maps and the cross-correlation...
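A toy sketch of ICA-based component separation in the spirit of this pipeline (not the GBT data or code): a smooth "foreground" mode and a non-Gaussian "signal" are mixed across two channels and recovered blindly with scikit-learn's FastICA, exploiting exactly the non-Gaussianity the abstract mentions. The mixing matrix and source shapes are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n = 2000
fg = np.sin(np.arange(n) / 50.0)               # smooth "foreground" mode
sig = rng.laplace(0.0, 1.0, n)                 # non-Gaussian "signal"
S = np.column_stack([fg, sig])                 # independent sources
A = np.array([[1.0, 0.2], [0.6, 1.0]])         # invented mixing across channels
X = S @ A.T                                    # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                   # blindly recovered components
```

Each recovered component matches one source up to scale and sign, which is all that is needed to identify and subtract the foreground-like component.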

  7. Applying satellite remote sensing technique in disastrous rainfall systems around Taiwan

    Science.gov (United States)

    Liu, Gin-Rong; Chen, Kwan-Ru; Kuo, Tsung-Hua; Liu, Chian-Yi; Lin, Tang-Huang; Chen, Liang-De

    2016-05-01

Many people in Asian regions suffer from disastrous rainfall year after year. The rainfall from typhoons or tropical cyclones (TCs) is one of their key water supply sources, but such TCs may also bring unexpected heavy rainfall, causing flash floods, mudslides or other disasters. So far we cannot stop or change a TC's route or intensity with present techniques. However, we could significantly mitigate the heavy casualties and economic losses if we could detect a TC's formation earlier and estimate its rainfall amount and distribution more accurately before its landfall. In light of these problems, this short article presents methods to detect a TC's formation earlier and to delineate its rainfall potential pattern more accurately in advance. For the first part, satellite-retrieved air-sea parameters are obtained and used to estimate the thermal and dynamic energy fields and their variation over open oceans, to delineate the ocean areas and cloud clusters with a high probability of typhoon formation. For the second part, an improved tropical rainfall potential (TRaP) model is proposed, with better assumptions than the original TRaP for TC rainfall band rotation, rainfall amount estimation, and topographic effect correction, to obtain more accurate TC rainfall distributions, especially for hilly and mountainous areas such as Taiwan.

  8. Comparison of motion correction techniques applied to functional near-infrared spectroscopy data from children

    Science.gov (United States)

    Hu, Xiao-Su; Arredondo, Maria M.; Gomba, Megan; Confer, Nicole; DaSilva, Alexandre F.; Johnson, Timothy D.; Shalinsky, Mark; Kovelman, Ioulia

    2015-12-01

Motion artifacts are the most significant sources of noise in the context of pediatric brain imaging designs and data analyses, especially in applications of functional near-infrared spectroscopy (fNIRS), where they can severely degrade the quality of the acquired data. Different methods have been developed to correct motion artifacts in fNIRS data, but the relative effectiveness of these methods for data from child and infant subjects (which are often found to be significantly noisier than adult data) remains largely unexplored. The issue is further complicated by the heterogeneity of fNIRS data artifacts. We compared the efficacy of the six most prevalent motion artifact correction techniques with fNIRS data acquired from children participating in a language acquisition task: wavelet, spline interpolation, principal component analysis, moving average (MA), correlation-based signal improvement, and a combination of wavelet and MA. The evaluation of five predefined metrics suggests that the MA and wavelet methods yield the best outcomes. These findings elucidate the varied nature of fNIRS data artifacts and the efficacy of artifact correction methods with pediatric populations, as well as help inform both the theory and practice of optical brain imaging analysis.
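Of the compared methods, the moving average is the simplest to sketch. The trace, spike amplitude and window length below are assumptions, not the study's data: a synthetic slow "hemodynamic" signal is hit by an abrupt motion spike, and a simple MA smoothing strongly attenuates it.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
hemo = np.sin(np.arange(n) / 40.0)             # slow hemodynamic-like signal
trace = hemo + rng.normal(0.0, 0.05, n)        # measurement noise
trace[500:505] += 4.0                          # abrupt motion artifact (spike)

win = 25                                       # moving-average window (samples)
kernel = np.ones(win) / win
corrected = np.convolve(trace, kernel, mode='same')
```

The spike's energy is spread over the window, so the peak deviation from the underlying signal drops by roughly the ratio of spike width to window length; the trade-off, as with any smoothing, is some attenuation of genuine fast signal content.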

  9. Applying data fusion techniques for benthic habitat mapping and monitoring in a coral reef ecosystem

    Science.gov (United States)

    Zhang, Caiyun

    2015-06-01

    Accurate mapping and effective monitoring of benthic habitat in the Florida Keys are critical in developing management strategies for this valuable coral reef ecosystem. For this study, a framework was designed for automated benthic habitat mapping by combining multiple data sources (hyperspectral, aerial photography, and bathymetry data) and four contemporary imagery processing techniques (data fusion, Object-based Image Analysis (OBIA), machine learning, and ensemble analysis). In the framework, 1-m digital aerial photograph was first merged with 17-m hyperspectral imagery and 10-m bathymetry data using a pixel/feature-level fusion strategy. The fused dataset was then preclassified by three machine learning algorithms (Random Forest, Support Vector Machines, and k-Nearest Neighbor). Final object-based habitat maps were produced through ensemble analysis of outcomes from three classifiers. The framework was tested for classifying a group-level (3-class) and code-level (9-class) habitats in a portion of the Florida Keys. Informative and accurate habitat maps were achieved with an overall accuracy of 88.5% and 83.5% for the group-level and code-level classifications, respectively.
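The preclassify-then-ensemble stage of such a framework can be sketched with scikit-learn; the synthetic features below are stand-ins for the fused image objects, and the three classifiers mirror the ones named in the abstract while the ensemble analysis is reduced to soft voting.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic 3-class "habitat" data standing in for fused image features.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Ensemble of the three preclassifiers via soft (probability-averaged) voting.
vote = VotingClassifier([
    ('rf', RandomForestClassifier(random_state=0)),
    ('svm', SVC(probability=True, random_state=0)),
    ('knn', KNeighborsClassifier()),
], voting='soft')
acc = vote.fit(Xtr, ytr).score(Xte, yte)
```

On real object-based image features the inputs would be per-segment statistics rather than `make_classification` output, but the combination logic is the same.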

  10. Applying stereotactic injection technique to study genetic effects on animal behaviors.

    Science.gov (United States)

    McSweeney, Colleen; Mao, Yingwei

    2015-05-10

    Stereotactic injection is a useful technique to deliver high titer lentiviruses to targeted brain areas in mice. Lentiviruses can either overexpress or knockdown gene expression in a relatively focused region without significant damage to the brain tissue. After recovery, the injected mouse can be tested on various behavioral tasks such as the Open Field Test (OFT) and the Forced Swim Test (FST). The OFT is designed to assess locomotion and the anxious phenotype in mice by measuring the amount of time that a mouse spends in the center of a novel open field. A more anxious mouse will spend significantly less time in the center of the novel field compared to controls. The FST assesses the anti-depressive phenotype by quantifying the amount of time that mice spend immobile when placed into a bucket of water. A mouse with an anti-depressive phenotype will spend significantly less time immobile compared to control animals. The goal of this protocol is to use the stereotactic injection of a lentivirus in conjunction with behavioral tests to assess how genetic factors modulate animal behaviors.

  11. Dosimetric properties of bio minerals applied to high-dose dosimetry using the TSEE technique

    Energy Technology Data Exchange (ETDEWEB)

    Vila, G. B.; Caldas, L. V. E., E-mail: gbvila@ipen.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

The study of dosimetric properties such as reproducibility, residual signal, lower detection dose, dose-response curve and fading of the thermally stimulated exoelectron emission (TSEE) signal of Brazilian biominerals has shown that these materials have potential for use as radiation dosimeters. The reproducibility within ± 10% for oyster shell, mother-of-pearl and coral reef samples showed that the signal dispersion is small when compared with the mean value of the measurements. The study showed that the residual signal can be eliminated with a thermal treatment at 300 °C for 1 h. The lower detection dose of 9.8 Gy determined for the oyster shell samples when exposed to beta radiation and 1.6 Gy for oyster shell and mother-of-pearl samples when exposed to gamma radiation can be considered good, taking into account the high doses of this study. The materials presented linearity in the dose-response curves in some ranges, but the lack of linearity in other cases presents no problem since a good mathematical description is possible. The fading study showed that the loss of TSEE signal can be minimized if the samples are protected from interferences such as light, heat and humidity. Taking into account the useful linearity range as the main dosimetric characteristic, the tiger shell and oyster shell samples are the most suitable for high-dose dosimetry using the TSEE technique. (Author)

  12. COMPARATIVE PERFORMANCE MONITORING OF RAINFED WATERSHEDS APPLYING GIS AND RS TECHNIQUES

    Directory of Open Access Journals (Sweden)

    ARUN W. DHAWALE

    2012-03-01

Full Text Available Under the watershed development project of the Ministry of Rural Development, many micro watersheds have been identified for development and management. However, the Government is handicapped in obtaining data on the performance of these programmes due to the absence of watershed performance studies. Rainfed agriculture is clearly critical to agricultural performance in India. Nonetheless, it is difficult to precisely quantify the overall importance of the sector. The widely quoted statistic is that 70% of cultivated area is rainfed, implying that rainfed agriculture is more important than irrigated agriculture. In the present study two rainfed micro-watersheds, namely Kolvan valley and Darewadi, are taken as case studies for performance monitoring using GIS and RS techniques. An attempt has been made to highlight the role of GIS and RS in estimation of runoff from both watersheds by the SCS curve number method. The methodology developed for the research shows that the knowledge extracted from the proposed approach can remove the problem of performance monitoring of micro watersheds to a great extent. Comparative performance of both micro watersheds, having extreme rainfall conditions, shows that the overall success rate is higher in the Darewadi micro watershed than in the Kolvan valley.
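The SCS curve number method used here for runoff estimation reduces to a short formula. In metric units, the potential maximum retention is S = 25400/CN - 254 (mm), the initial abstraction is commonly taken as Ia = 0.2 S, and the runoff depth is Q = (P - Ia)^2 / (P - Ia + S) for rainfall P > Ia, else zero:

```python
def scs_runoff(p_mm: float, cn: float) -> float:
    """SCS Curve Number runoff depth (mm) for rainfall p_mm (mm), metric form."""
    s = 25400.0 / cn - 254.0        # potential maximum retention S (mm)
    ia = 0.2 * s                    # initial abstraction Ia = 0.2 S
    if p_mm <= ia:
        return 0.0                  # all rainfall absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

For example, a 100 mm storm on a watershed with CN = 75 yields about 41 mm of runoff; in a GIS workflow the CN is derived per land-use/soil polygon and the formula is applied cell by cell.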

  13. Hyperspectral imaging techniques applied to the monitoring of wine waste anaerobic digestion process

    Science.gov (United States)

    Serranti, Silvia; Fabbri, Andrea; Bonifazi, Giuseppe

    2012-11-01

An anaerobic digestion process, finalized to biogas production, is characterized by different steps involving the variation of some chemical and physical parameters related to the presence of specific biomasses, such as pH, chemical oxygen demand (COD), volatile solids, nitrate (NO3-) and phosphate (PO43-). A correct process characterization requires periodical sampling of the organic mixture in the reactor and further analysis of the samples by traditional chemical-physical methods. Such an approach is discontinuous, time-consuming and expensive. A new analytical approach based on hyperspectral imaging in the NIR field (1000 to 1700 nm) is investigated and critically evaluated, with reference to the monitoring of the wine waste anaerobic digestion process. The application of the proposed technique was addressed to identify and demonstrate the correlation existing, in terms of quality and reliability of the results, between "classical" chemical-physical parameters and spectral features of the digestate samples. Good results were obtained, ranging from R2=0.68 and RMSECV=12.83 mg/l for nitrate to R2=0.90 and RMSECV=5495.16 mg O2/l for COD. The proposed approach seems very useful in setting up innovative control strategies allowing for full, continuous control of the anaerobic digestion process.

  14. Experimental studies of active and passive flow control techniques applied in a twin air-intake.

    Science.gov (United States)

    Paul, Akshoy Ranjan; Joshi, Shrey; Jindal, Aman; Maurya, Shivam P; Jain, Anuj

    2013-01-01

The flow control in twin air-intakes is necessary to improve the performance characteristics, since the flow traveling through curved and diffused paths becomes complex, especially after merging. The paper presents a comparison between two well-known techniques of flow control: active and passive. It presents an effective design of a vortex generator jet (VGJ) and a vane-type passive vortex generator (VG) and uses them in a twin air-intake duct in different combinations to establish their effectiveness in improving the performance characteristics. The VGJ is designed to insert flow from the side wall at pitch angles of 90 degrees and 45 degrees. Co-rotating (parallel) and counter-rotating (V-shaped) are the configurations of the vane-type VG. It is observed that the VGJ has the potential to change the flow pattern drastically as compared to the vane-type VG. When the VGJ is directed perpendicular to the side walls of the air-intake at a pitch angle of 90 degrees, static pressure recovery is increased by 7.8% and total pressure loss is reduced by 40.7%, which is the best among all cases tested for the VGJ. For the bigger-sized VG attached to the side walls of the air-intake, static pressure recovery is increased by 5.3%, but total pressure loss is reduced by only 4.5% as compared to all other cases of VG.

  15. Correlation Techniques as Applied to Pose Estimation in Space Station Docking

    Science.gov (United States)

    Rollins, J. Michael; Juday, Richard D.; Monroe, Stanley E., Jr.

    2002-01-01

The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not provide for direct arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed on each connecting module at carefully surveyed positions. The appearance of a subset of spots essentially must form a constellation of specific relative positions in the incoming digital image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. The precision of centroid estimation is required to be as fine as 1/20th of a pixel in some cases. This paper presents an approach to spot centroid estimation using cross correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for shadow, obscuration and lighting irregularity compensation are discussed.
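Correlation-based sub-pixel centroid estimation can be sketched as follows. The spot size, offsets and window are invented for illustration, not taken from the paper: a synthetic Gaussian "spot" at a sub-pixel position is cross-correlated (via FFT) with a precisely centered model, and the integer correlation peak is refined by 1-D parabolic interpolation along each axis.

```python
import numpy as np

def gauss_spot(n, cx, cy, sigma=1.5):
    """n x n image of a Gaussian spot centered at (cx, cy) in pixel units."""
    y, x = np.mgrid[:n, :n]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

img = gauss_spot(21, 10.30, 9.75)      # "observed" spot at a sub-pixel position
model = gauss_spot(21, 10.0, 10.0)     # synthetic model of precise centration

# Circular cross-correlation via FFT; the peak tracks the spot's offset.
corr = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(model))))
corr = np.fft.fftshift(corr)
py, px = np.unravel_index(np.argmax(corr), corr.shape)

def parabolic(c, i):
    """Sub-pixel peak position from a parabola through c[i-1], c[i], c[i+1]."""
    return i + 0.5 * (c[i - 1] - c[i + 1]) / (c[i - 1] - 2 * c[i] + c[i + 1])

center = 21 // 2
dy = parabolic(corr[:, px], py) - center
dx = parabolic(corr[py, :], px) - center
est = (10.0 + dx, 10.0 + dy)           # model center plus measured offset
```

On this toy case the parabolic refinement recovers the sub-pixel offset to a few hundredths of a pixel, comfortably inside a 1/20th-pixel budget; real imagery adds noise, shadowing and lighting effects that the paper's compensation techniques address.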

  16. Experiences in applying optimization techniques to configurations for the Control of Flexible Structures (COFS) program

    Science.gov (United States)

    Walsh, Joanne L.

    1989-01-01

Optimization procedures are developed to systematically provide closely-spaced vibration frequencies. A general purpose finite-element program for eigenvalue and sensitivity analyses is combined with formal mathematical programming techniques. Results are presented for three studies. The first study uses a simple model to obtain a design with two pairs of closely-spaced frequencies. Two formulations are developed: an objective function-based formulation and a constraint-based formulation for the frequency spacing. It is found that conflicting goals are handled better by a constraint-based formulation. The second study uses a detailed model to obtain a design with one pair of closely-spaced frequencies while satisfying requirements on local member frequencies and manufacturing tolerances. Two formulations are developed. Both the constraint-based and the objective function-based formulations perform reasonably well and converge to the same results. However, no feasible design solution exists which satisfies all design requirements for the choices of design variables and the upper and lower design variable values used. More design freedom is needed to achieve a fully satisfactory design. The third study is part of a redesign activity in which a detailed model is used.

  17. Digital Image Correlation Techniques Applied to Large Scale Rocket Engine Testing

    Science.gov (United States)

    Gradl, Paul R.

    2016-01-01

    Rocket engine hot-fire ground testing is necessary to understand component performance, reliability and engine system interactions during development. The J-2X upper stage engine completed a series of developmental hot-fire tests that derived performance of the engine and components, validated analytical models and provided the necessary data to identify where design changes, process improvements and technology development were needed. The J-2X development engines were heavily instrumented to provide the data necessary to support these activities which enabled the team to investigate any anomalies experienced during the test program. This paper describes the development of an optical digital image correlation technique to augment the data provided by traditional strain gauges which are prone to debonding at elevated temperatures and limited to localized measurements. The feasibility of this optical measurement system was demonstrated during full scale hot-fire testing of J-2X, during which a digital image correlation system, incorporating a pair of high speed cameras to measure three-dimensional, real-time displacements and strains was installed and operated under the extreme environments present on the test stand. The camera and facility setup, pre-test calibrations, data collection, hot-fire test data collection and post-test analysis and results are presented in this paper.

  18. Morphological analysis of the flippers in the Franciscana dolphin, Pontoporia blainvillei, applying X-ray technique.

    Science.gov (United States)

    Del Castillo, Daniela Laura; Panebianco, María Victoria; Negri, María Fernanda; Cappozzo, Humberto Luis

    2014-07-01

Pectoral flippers of cetaceans function to provide stability and maneuverability during locomotion. Directional asymmetry (DA) is a common feature among odontocete cetaceans, as is sexual dimorphism (SD). For the first time, DA, allometry, physical maturity, and SD of the flipper skeleton of Pontoporia blainvillei were analyzed by X-ray technique. The number of carpals, metacarpals, and phalanges, and morphometric characters from the humerus, radius, ulna, and digit two were studied in franciscana dolphins from Buenos Aires, Argentina. The number of visible epiphyses and their degree of fusion at the proximal and distal ends of the humerus, radius, and ulna were also analyzed. The flipper skeleton was symmetrical, showing a negative allometric trend, with similar growth patterns in both sexes with the exception of the width of the radius (P ≤ 0.01). SD was found in the number of phalanges of digit two (P ≤ 0.01) and in ulna and digit two lengths. Females showed a higher relative ulna length and shorter relative digit two length, and the opposite occurred in males (P ≤ 0.01). The epiphyseal fusion pattern proved to be a tool to determine a dolphin's age; franciscana dolphins with a mature flipper were at least four years old. This study indicates that the flippers of franciscana dolphins are symmetrical; both sexes show a negative allometric trend; SD is observed in the radius, ulna, and digit two; and the flipper skeleton allows determination of the age class of the dolphins. PMID:24700648

  19. Applying advanced imaging techniques to a murine model of orthotopic osteosarcoma

    Directory of Open Access Journals (Sweden)

    Matthew Lawrence Broadhead

    2015-08-01

Full Text Available Introduction: Reliable animal models are required to evaluate novel treatments for osteosarcoma. In this study, the aim was to implement advanced imaging techniques in a murine model of orthotopic osteosarcoma to improve disease modeling and the assessment of primary and metastatic disease. Materials and methods: Intra-tibial injection of luciferase-tagged OPGR80 murine osteosarcoma cells was performed in Balb/c nude mice. Treatment agent (pigment epithelium-derived factor; PEDF) was delivered to the peritoneal cavity. Primary tumors and metastases were evaluated by in vivo bioluminescent assays, micro-computed tomography, [18F]-Fluoride-PET and [18F]-FDG-PET. Results: [18F]-Fluoride-PET was more sensitive than [18F]-FDG-PET for detecting early disease. Both [18F]-Fluoride-PET and [18F]-FDG-PET showed progressive disease in the model, with 4-fold and 2-fold increases in SUV (p<0.05) by the study endpoint, respectively. In vivo bioluminescent assay showed that systemically delivered PEDF inhibited growth of primary osteosarcoma. Discussion: Application of [18F]-Fluoride-PET and [18F]-FDG-PET to an established murine model of orthotopic osteosarcoma has improved the assessment of disease. The use of targeted imaging should prove beneficial for the evaluation of new approaches to osteosarcoma therapy.

  20. Laser granulometry: A comparative study of the sieving and elutriation techniques applied to pozzolanic materials

    Directory of Open Access Journals (Sweden)

    Frías, M.

    1990-03-01

    Full Text Available Laser granulometry is a rapid method for determining particle size distributions in both dry and wet phases. In the present paper, the laser-beam diffraction technique is applied to the granulometric study of pozzolanic materials in suspension. These granulometric analyses are compared with those obtained with the Alpine pneumatic siever and the Bahco elutriator-centrifuge.


  1. Blade Displacement Measurement Technique Applied to a Full-Scale Rotor Test

    Science.gov (United States)

    Abrego, Anita I.; Olson, Lawrence E.; Romander, Ethan A.; Barrows, Danny A.; Burner, Alpheus W.

    2012-01-01

    Blade displacement measurements using multi-camera photogrammetry were acquired during the full-scale wind tunnel test of the UH-60A Airloads rotor, conducted in the National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel. The objectives were to measure the blade displacement and deformation of the four rotor blades as they rotated through the entire rotor azimuth. These measurements are expected to provide a unique dataset to aid in the development and validation of rotorcraft prediction techniques. They are used to resolve the blade shape and position, including pitch, flap, lag and elastic deformation. Photogrammetric data encompass advance ratios from 0.15 to slowed rotor simulations of 1.0, thrust coefficient to rotor solidity ratios from 0.01 to 0.13, and rotor shaft angles from -10.0 to 8.0 degrees. An overview of the blade displacement measurement methodology and system development, descriptions of image processing, uncertainty considerations, preliminary results covering static and moderate advance ratio test conditions and future considerations are presented. Comparisons of experimental and computational results for a moderate advance ratio forward flight condition show good trend agreements, but also indicate significant mean discrepancies in lag and elastic twist. Blade displacement pitch measurements agree well with both the wind tunnel commanded and measured values.

  2. A Morphing Technique Applied to Lung Motions in Radiotherapy: Preliminary Results

    Directory of Open Access Journals (Sweden)

    R. Laurent

    2010-01-01

    Full Text Available Organ motion leads to dosimetric uncertainties during a patient’s treatment. Much work has been done to quantify the dosimetric effects of lung movement during radiation treatment. There is a particular need for a good description and prediction of organ motion. To describe lung motion more precisely, we have examined the possibility of using a computer technique: a morphing algorithm. Morphing is an iterative method which consists of blending one image into another image. To evaluate the use of morphing, Four-Dimensional Computed Tomography (4DCT) acquisition of a patient was performed. The lungs were automatically segmented for different phases, and morphing was performed using the end-inspiration and the end-expiration phase scans only. Intermediate morphing files were compared with 4DCT intermediate images. The results showed good agreement between morphing images and 4DCT images: fewer than 2% of the 512 by 256 voxels were wrongly classified as belonging/not belonging to a lung section. This paper presents preliminary results, and our morphing algorithm needs improvement. We can infer that morphing offers considerable advantages in terms of radiation protection of the patient during the diagnosis phase, handling of artifacts, definition of organ contours and description of organ motion.
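    The core idea of blending one segmented phase into another can be sketched with a toy stand-in: a plain cross-dissolve between two binary lung masks, thresholded to stay binary. This is not the authors' iterative morphing algorithm; the masks, the blending weight, and the voxel misclassification metric below are illustrative assumptions only.

```python
import numpy as np

def morph_masks(mask_start, mask_end, alpha):
    """Intermediate mask from a cross-dissolve of two binary masks.

    A minimal stand-in for iterative morphing: blend with weight
    `alpha` in [0, 1], then threshold back to a binary mask."""
    blend = (1.0 - alpha) * mask_start + alpha * mask_end
    return (blend >= 0.5).astype(np.uint8)

def misclassification_rate(predicted, reference):
    """Fraction of voxels wrongly classified as lung / not lung,
    mirroring the comparison against 4DCT intermediate images."""
    return float(np.mean(predicted != reference))
```

At alpha = 0 or 1 the cross-dissolve reproduces the end-inspiration or end-expiration mask exactly; intermediate alphas give the candidate "morphing files" that would be compared against the 4DCT intermediates.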

  3. Modern Chemistry Techniques Applied to Metal Behavior and Chelation in Medical and Environmental Systems - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, M; Andresen, B; Burastero, S R; Chiarappa-Zucca, M L; Chinn, S C; Coronado, P R; Gash, A E; Perkins, J; Sawvel, A M; Szechenyi, S C

    2005-02-03

    This report details the research and findings generated over the course of a 3-year research project funded by Lawrence Livermore National Laboratory (LLNL) Laboratory Directed Research and Development (LDRD). Originally tasked with studying beryllium chemistry and chelation for the treatment of Chronic Beryllium Disease and environmental remediation of beryllium-contaminated environments, this work has yielded results in beryllium and uranium solubility and speciation associated with toxicology; specific and effective chelation agents for beryllium, capable of lowering beryllium tissue burden and increasing urinary excretion in mice, and dissolution of beryllium contamination at LLNL Site 300; {sup 9}Be NMR studies previously unstudied at LLNL; secondary ion mass spectrometry (SIMS) imaging of beryllium in spleen and lung tissue; and beryllium interactions with aerogel/GAC material for environmental cleanup. The results show that chelator development using modern chemical techniques, such as chemical thermodynamic modeling, was successful in identifying and utilizing tried and tested beryllium chelators for use in medical and environmental scenarios. Additionally, a study of uranium speciation in simulated biological fluids identified uranium species present in urine, gastric juice, pancreatic fluid, airway surface fluid, simulated lung fluid, bile, saliva, plasma, interstitial fluid and intracellular fluid.

  4. Linear and non-linear control techniques applied to actively lubricated journal bearings

    Science.gov (United States)

    Nicoletti, R.; Santos, I. F.

    2003-03-01

    The main objectives of actively lubricated bearings are the simultaneous reduction of wear and vibration between rotating and stationary machinery parts. For reducing wear and dissipating vibration energy up to certain limits, one can use conventional hydrodynamic lubrication. For further reduction of shaft vibrations one can use the active lubrication action, which is based on injecting pressurized oil into the bearing gap through orifices machined in the bearing sliding surface. The design and efficiency of some linear (PD, PI and PID) and a non-linear controller, applied to a tilting-pad journal bearing, are analysed and discussed. Important conclusions about the application of integral controllers, responsible for changing the rotor-bearing equilibrium position and consequently the "passive" oil film damping coefficients, are achieved. Numerical results show an effective vibration reduction of the unbalance response of a rigid rotor, where the PD and the non-linear P controllers show better performance for the frequency range of study (0-80 Hz). The feasibility of eliminating rotor-bearing instabilities (whirl phenomena) by using active lubrication is also investigated, illustrating clearly one of its most promising applications.
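    A discrete PID law of the kind compared in the study can be sketched in a few lines. The gains, the time step, and the first-order plant used in the test are hypothetical, not the rotor-bearing model from the paper.

```python
class PID:
    """Minimal discrete PID controller (illustrative only; the gains
    are placeholders, not those tuned for the journal bearing)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error (I term)
        self.prev_error = 0.0    # previous error (D term)

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

Setting `ki = 0` recovers the PD controller; the abstract's point about integral action shifting the equilibrium position corresponds to the `integral` state settling at a nonzero value in steady state.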

  5. A non-intrusive measurement technique applying CARS for concentration measurement in a gas mixing flow

    CERN Document Server

    Yamamoto, Ken; Moriya, Madoka; Kuriyama, Reiko; Sato, Yohei

    2015-01-01

    A coherent anti-Stokes Raman scattering (CARS) microscope system was built and applied to non-intrusive gas concentration measurement of a mixing flow in a millimeter-scale channel. Carbon dioxide and nitrogen were chosen as test fluids, and CARS signals from the fluids were generated by adjusting the wavelengths of the Pump and the Stokes beams. The generated CARS signals, whose wavelengths differ from those of the Pump and the Stokes beams, were captured by an EM-CCD camera after filtering out the excitation beams. A calibration experiment was performed in order to confirm the applicability of the built-up CARS system by measuring the intensity of the CARS signal from known concentrations of the samples. After confirming that the measured CARS intensity was proportional to the second power of the concentration, as theoretically predicted, the CARS intensities in the gas mixing flow channel were measured. Ten different measurement points were set and concentrations of both carbon dioxide and nitrog...
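    The calibration step, fitting the theoretically predicted quadratic relation I = k C² to known-concentration samples and then inverting it to read off unknown concentrations, can be sketched as follows. The concentration-intensity pairs and the constant are synthetic placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical calibration pairs: known concentration (mole fraction)
# vs. measured CARS intensity (arbitrary units). Here the data are
# synthesized to obey I = k * C^2 exactly with k = 50.
conc = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
intensity = 50.0 * conc ** 2

# Least-squares estimate of k in I = k * C^2
k = np.sum(intensity * conc ** 2) / np.sum(conc ** 4)

def concentration_from_intensity(i, k=k):
    """Invert the quadratic calibration curve: C = sqrt(I / k)."""
    return float(np.sqrt(i / k))
```

With the calibration constant in hand, each measurement point in the mixing channel maps an observed intensity back to a concentration via the square root.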

  6. Application of adaptive neuro-fuzzy inference system techniques and artificial neural networks to predict solid oxide fuel cell performance in residential microgeneration installation

    Energy Technology Data Exchange (ETDEWEB)

    Entchev, Evgueniy; Yang, Libing [Integrated Energy Systems Laboratory, CANMET Energy Technology Centre, 1 Haanel Dr., Ottawa, Ontario (Canada)

    2007-06-30

    This study applies adaptive neuro-fuzzy inference system (ANFIS) techniques and an artificial neural network (ANN) to predict solid oxide fuel cell (SOFC) performance while supplying both heat and power to a residence. A microgeneration 5 kW{sub el} SOFC system was installed at the Canadian Centre for Housing Technology (CCHT), integrated with existing mechanical systems and connected in parallel to the grid. SOFC performance data were collected during the winter heating season and used for training of both ANN and ANFIS models. The ANN model was built on a back-propagation algorithm, while for the ANFIS model a combination of the least-squares method and the back-propagation gradient-descent method was developed and applied. Both models were trained with experimental data and used to predict selected SOFC performance parameters such as fuel cell stack current, stack voltage, etc. The study revealed that both ANN and ANFIS models' predictions agreed well with a variety of experimental data sets representing steady-state, start-up and shut-down operations of the SOFC system. The initial data set was subjected to detailed sensitivity analysis, and statistically insignificant parameters were excluded from the training set. As a result, a significant reduction of computational time was achieved without affecting the models' accuracy. The study showed that adaptive models can be applied with confidence during the design process and for performance optimization of existing and newly developed solid oxide fuel cell systems. It demonstrated that by using ANN and ANFIS techniques the SOFC microgeneration system's performance could be modelled with minimum time demand and with a high degree of accuracy. (author)
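    A minimal illustration of the ANN side: one hidden layer trained by plain back-propagation on synthetic stand-in data. The inputs, the target relationship, the layer size, and the learning rate are all assumptions for the sketch; the actual SOFC models were trained on experimental CCHT data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two normalized operating inputs mapped to
# a smooth "stack voltage"-like target (a made-up relationship).
X = rng.uniform(0.0, 1.0, (200, 2))
y = (0.8 - 0.3 * X[:, 0] + 0.1 * X[:, 1]).reshape(-1, 1)

# One tanh hidden layer, linear output, full-batch back-propagation.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - y                           # MSE gradient at the output
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # back-propagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2           # gradient-descent updates
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

The ANFIS variant layers a least-squares solve for the consequent parameters on top of this same gradient-descent loop, which is what the abstract's "combination" refers to.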

  7. Adaptive resource allocation technique to stochastic multimodal projects : a distributed platform implementation in JAVA

    OpenAIRE

    Tereso, Anabela Pereira; Mota, João; Lameiro, Rui

    2005-01-01

    This paper presents the implementation of the dynamic programming model (introduced in a previous paper) for the resolution of the adaptive resource allocation problem in stochastic multimodal project networks. A distributed platform using an Object Oriented language, Java, is used in order to take advantage of the available computational resources.

  8. Adapting developing country epidemiological assessment techniques to improve the quality of health needs assessments in developed countries

    Directory of Open Access Journals (Sweden)

    Handy Deirdre

    2005-04-01

    Full Text Available Abstract Background: We were commissioned to carry out three health assessments in urban areas of Dublin in Ireland. We required an epidemiologically robust method that could collect data rapidly and inexpensively. We were dealing with inadequate health information systems, weak planning data and a history of inadequate recipient involvement in health service planning. These problems had also been identified by researchers carrying out health assessments in developing countries. This paper reports our experience of adapting a cluster survey model originally developed by international organisations to assess community health needs and service coverage in developing countries and applying our adapted model to three urban areas in Dublin, Ireland. Methods: We adapted the model to control for socio-economic heterogeneity, to take account of the inadequate population list, to ensure a representative sample and to account for a higher prevalence of degenerative and chronic diseases. We employed formal as well as informal communication methods and adjusted data collection times to maximise participation. Results: The model we adapted had the capacity to ascertain both health needs and health care delivery needs. The community participated throughout the process and members were trained and employed as data collectors. The assessments have been used by local health boards and non-governmental agencies to plan and deliver better or additional services. Conclusion: We were able to carry out high quality health needs assessments in urban areas by adapting and applying a developing country health assessment method. Issues arose relating to health needs assessment as part of the planning cycle and the role of participants in the process.

  9. Techniques that Link Extreme Events to the Large Scale, Applied to California Heat Waves

    Science.gov (United States)

    Grotjahn, R.

    2015-12-01

    Understanding the mechanisms by which California Central Valley (CCV) summer extreme hot spells develop is very important, since these events have major impacts on the economy and human safety. Results from a series of CCV heat wave studies will be presented, emphasizing the techniques used. Key larger-scale elements are identified statistically that are also consistent with synoptic and dynamic understanding of what must be present during extreme heat. Beyond providing a clear synoptic explanation, these key elements have high predictability, in part because soil moisture has little annual variation in the heavily-irrigated CCV. In turn, the predictability naturally leads to an effective tool to assess climate model simulation of these heat waves in historical and future climate scenarios. (Does the model develop extreme heat for the correct reasons?) Further work identified that these large-scale elements arise in two quite different ways: one from the southwestward expansion of a pre-existing heat wave in southwest Canada, the other formed in place from parcels traversing the North Pacific. The pre-existing heat wave explains an early result showing correlation between heat waves in Sacramento, California, and other locations along the US west coast, including distant Seattle, Washington. CCV heat waves can be preceded by unusually strong tropical Indian Ocean and Indonesian convection; this partial link may occur through an Asian subtropical jet waveguide. Another link revealed by diagnostics is a middle- and higher-latitude source of wave activity in Siberia and East Asia that also leads to the development of CCV heat waves. This talk will address as many of these results, and the tools used to obtain them, as is reasonable within the available time.

  10. Applying morphologic techniques to evaluate hotdogs: what is in the hotdogs we eat?

    Science.gov (United States)

    Prayson, Brigid E; McMahon, James T; Prayson, Richard A

    2008-04-01

    Americans consume billions of hotdogs per year resulting in more than a billion dollars in retail sales. Package labels typically list some type of meat as the primary ingredient. The purpose of this study is to assess the meat and water content of several hotdog brands to determine if the package labels are accurate. Eight brands of hotdogs were evaluated for water content by weight. A variety of routine techniques in surgical pathology including routine light microscopy with hematoxylin-eosin-stained sections, special staining, immunohistochemistry, and electron microscopy were used to assess for meat content and for other recognizable components. Package labels indicated that the top-listed ingredient in all 8 brands was meat; the second listed ingredient was water (n = 6) and another type of meat (n = 2). Water comprised 44% to 69% (median, 57%) of the total weight. Meat content determined by microscopic cross-section analysis ranged from 2.9% to 21.2% (median, 5.7%). The cost per hotdog ($0.12-$0.42) roughly correlated with meat content. A variety of tissues were observed besides skeletal muscle including bone (n = 8), collagen (n = 8), blood vessels (n = 8), plant material (n = 8), peripheral nerve (n = 7), adipose (n = 5), cartilage (n = 4), and skin (n = 1). Glial fibrillary acidic protein immunostaining was not observed in any of the hotdogs. Lipid content on oil red O staining was graded as moderate in 3 hotdogs and marked in 5 hotdogs. Electron microscopy showed recognizable skeletal muscle with evidence of degenerative changes. In conclusion, hotdog ingredient labels are misleading; most brands are more than 50% water by weight. The amount of meat (skeletal muscle) in most brands comprised less than 10% of the cross-sectional surface area. More expensive brands generally had more meat. All hotdogs contained other tissue types (bone and cartilage) not related to skeletal muscle; brain tissue was not present. PMID:18325469

  11. BiasMDP: Carrier lifetime characterization technique with applied bias voltage

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Paul M., E-mail: paul.jordan@namlab.com; Simon, Daniel K.; Dirnstorfer, Ingo [Nanoelectronic Materials Laboratory gGmbH (NaMLab), Nöthnitzer Straße 64, 01187 Dresden (Germany); Mikolajick, Thomas [Nanoelectronic Materials Laboratory gGmbH (NaMLab), Nöthnitzer Straße 64, 01187 Dresden (Germany); Technische Universität Dresden, Institut für Halbleiter- und Mikrosystemtechnik, 01062 Dresden (Germany)

    2015-02-09

    A characterization method is presented, which determines fixed charge and interface defect densities in passivation layers. This method is based on a bias voltage applied to an electrode on top of the passivation layer. During a voltage sweep, the effective carrier lifetime is measured by means of microwave detected photoconductivity. When the external voltage compensates the electric field of the fixed charges, the lifetime drops to a minimum value. This minimum value correlates with the flat band voltage determined in reference impedance measurements. This correlation is measured on p-type silicon passivated by Al{sub 2}O{sub 3} and Al{sub 2}O{sub 3}/HfO{sub 2} stacks with different fixed charge densities and layer thicknesses. Negative fixed charges with densities of 3.8 × 10{sup 12 }cm{sup −2} and 0.7 × 10{sup 12 }cm{sup −2} are determined for Al{sub 2}O{sub 3} layers without and with an ultra-thin HfO{sub 2} interface, respectively. The voltage and illumination dependencies of the effective carrier lifetime are simulated with Shockley-Read-Hall surface recombination at continuous defects with parabolic capture cross section distributions for electrons and holes. The best match with the measured data is achieved with a very low interface defect density of 1 × 10{sup 10 }eV{sup −1} cm{sup −2} for the Al{sub 2}O{sub 3} sample with HfO{sub 2} interface.

  12. APPLIED PHYTO-REMEDIATION TECHNIQUES USING HALOPHYTES FOR OIL AND BRINE SPILL SCARS

    Energy Technology Data Exchange (ETDEWEB)

    M.L. Korphage; Bruce G. Langhus; Scott Campbell

    2003-03-01

    Produced salt water from historical oil and gas production was often managed with inadequate care and unfortunate consequences. In Kansas, the production practices in the 1930's and 1940's--before statewide anti-pollution laws--were such that fluids were often produced to surface impoundments where the oil would segregate from the salt water. The oil was pumped off the pits and the salt water was able to infiltrate into the subsurface soil zones and underlying bedrock. Over the years, oil producing practices were changed so that segregation of fluids was accomplished in steel tanks and salt water was isolated from the natural environment. But before that could happen, significant areas of the state were scarred by salt water. These areas are now in need of economical remediation. Remediation of salt scarred land can be facilitated with soil amendments, land management, and selection of appropriate salt tolerant plants. Current research on the salt scars around the old Leon Waterflood, in Butler County, Kansas, shows the relative efficiency of remediation options. Based upon these research findings, it is possible to recommend cost efficient remediation techniques for slight, medium, and heavy salt water damaged soil. Slight salt damage includes soils with Electrical Conductivity (EC) values of 4.0 mS/cm or less. Operators can treat these soils with sufficient amounts of gypsum, install irrigation systems, and till the soil. Appropriate plants can be introduced via transplants or seeded. Medium salt damage includes soils with EC values between 4.0 and 16 mS/cm. Operators will add amendments of gypsum, till the soil, and arrange for irrigation. Some particularly salt tolerant plants can be added but most planting ought to be reserved until the second season of remediation. Severe salt damage includes soil with EC values in excess of 16 mS/cm. Operators will add at least part of the gypsum required, till the soil, and arrange for irrigation. The following
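    The EC thresholds quoted above map directly onto a triage rule. A minimal sketch (units mS/cm; the remediation notes in the comments are abbreviated from the abstract):

```python
def salt_damage_class(ec_ms_per_cm):
    """Classify salt-scar severity from soil electrical conductivity (EC),
    using the thresholds given in the abstract."""
    if ec_ms_per_cm <= 4.0:
        # slight: gypsum, irrigation, tillage; plant or seed immediately
        return "slight"
    elif ec_ms_per_cm <= 16.0:
        # medium: amendments; defer most planting to the second season
        return "medium"
    else:
        # severe: staged gypsum additions, tillage, irrigation
        return "severe"
```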

  13. Lipase immobilized by different techniques on various support materials applied in oil hydrolysis

    Directory of Open Access Journals (Sweden)

    VILMA MINOVSKA

    2005-04-01

    Full Text Available Batch hydrolysis of olive oil was performed by Candida rugosa lipase immobilized on Amberlite IRC-50 and Al2O3. These two supports were selected out of 16 carriers: inorganic materials (sand, silica gel, infusorial earth, Al2O3), inorganic salts (CaCO3, CaSO4), ion-exchange resins (Amberlite IRC-50 and IR-4B, Dowex 2X8), a natural resin (colophony), a natural biopolymer (sodium alginate), synthetic polymers (polypropylene, polyethylene) and zeolites. Lipase immobilization was carried out by simple adsorption, adsorption followed by cross-linking, adsorption on ion-exchange resins, combined adsorption and precipitation, pure precipitation and gel entrapment. The suitability of the supports and techniques for the immobilization of lipase was evaluated by estimating the enzyme activity, protein loading, immobilization efficiency and reusability of the immobilizates. Most of the immobilizates exhibited either a low enzyme activity or difficulties during the hydrolytic reaction. Only those prepared by ionic adsorption on Amberlite IRC-50 and by combined adsorption and precipitation on Al2O3 showed better activity, 2000 and 430 U/g support, respectively, and demonstrated satisfactory behavior when used repeatedly. The hydrolysis was studied as a function of several parameters: surfactant concentration, enzyme concentration, pH and temperature. The immobilized preparation with Amberlite IRC-50 was stable and active in the whole range of pH (4 to 9) and temperature (20 to 50 °C), demonstrating a 99% degree of hydrolysis. In repeated usage, it was stable and active having a half-life of 16 batches, which corresponds to an operation time of 384 h. Its storage stability was remarkable too, since after 9 months it had lost only 25% of the initial activity. The immobilizate with Al2O3 was less stable and less active. At optimal environmental conditions, the degree of hydrolysis did not exceed 79%. In repeated usage, after the fourth batch, the degree of

  14. Statistical Mechanics Ideas and Techniques Applied to Selected Problems in Ecology

    Directory of Open Access Journals (Sweden)

    Hugo Fort

    2013-11-01

    Full Text Available Ecosystem dynamics provides an interesting arena for the application of a plethora of concepts and techniques from statistical mechanics. Here I review three examples, each corresponding to an important problem in ecology. First, I start with an analytical derivation of clumpy patterns for species relative abundances (SRA) empirically observed in several ecological communities involving a high number n of species, a phenomenon which has puzzled ecologists for decades. An interesting point is that this derivation uses results obtained from a statistical mechanics model for ferromagnets. Second, going beyond the mean field approximation, I study the spatial version of a popular ecological model involving just one species representing vegetation. The goal is to address the phenomenon of catastrophic shifts--gradual cumulative variations in some control parameter that suddenly lead to an abrupt change in the system--illustrating it by means of the process of desertification of arid lands. The focus is on the aggregation processes and the effects of diffusion that combined lead to the formation of non-trivial spatial vegetation patterns. It is shown that different quantities--like the variance, the two-point correlation function and the patchiness--may serve as early warnings for the desertification of arid lands. Remarkably, at the onset of a desertification transition the distribution of vegetation patches exhibits scale invariance typical of many physical systems in the vicinity of a phase transition. I comment on similarities of and differences between these catastrophic shifts and paradigmatic thermodynamic phase transitions like the liquid-vapor change of state for a fluid. Third, I analyze the case of many species interacting in space. I choose tropical forests, which are mega-diverse ecosystems that exhibit remarkable dynamics. Therefore these ecosystems represent a research paradigm both for studies of complex systems dynamics as well as to
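    One of the early-warning indicators mentioned, rising variance as a system approaches a catastrophic shift, can be computed with a simple sliding window. The synthetic time series in the test, whose fluctuations grow over time, is an assumption for illustration, not data from the reviewed models.

```python
import numpy as np

def rolling_variance(x, window):
    """Variance in a sliding window over a time series.

    A rising trend in this quantity is one of the classic early-warning
    signals for an approaching catastrophic shift."""
    return np.array([np.var(x[i:i + window])
                     for i in range(len(x) - window + 1)])
```

The two-point correlation function and patch-size statistics mentioned in the abstract play the analogous role for spatial data.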

  15. Using an electrohydraulic ankle foot orthosis to study modifications in feedforward control during locomotor adaptation to force fields applied in stance

    Directory of Open Access Journals (Sweden)

    Bouyer Laurent J

    2009-06-01

    Full Text Available Abstract Background: Adapting to external forces during walking has been proposed as a tool to improve locomotion after central nervous system injury. However, sensorimotor integration during walking varies according to the timing in the gait cycle, suggesting that adaptation may also depend on gait phases. In this study, an ElectroHydraulic AFO (EHO) was used to apply forces specifically during mid-stance and push-off to evaluate if feedforward movement control can be adapted in these 2 gait phases. Methods: Eleven healthy subjects walked on a treadmill before (3 min), during (5 min) and after (5 min) exposure to 2 force fields applied by the EHO (mid-stance/push-off; ~10 Nm, towards dorsiflexion). To evaluate modifications in feedforward control, strides with no force field ('catch strides') were unexpectedly inserted during the force field walking period. Results: When initially exposed to a mid-stance force field (FF20%), subjects showed a significant increase in ankle dorsiflexion velocity. Catches applied early into the FF20% were similar to baseline (P > 0.99). Subjects gradually adapted by returning ankle velocity to baseline over ~50 strides. Catches applied thereafter showed decreased ankle velocity where the force field was normally applied, indicating the presence of feedforward adaptation. When initially exposed to a push-off force field (FF50%), plantarflexion velocity was reduced in the zone of force field application. No adaptation occurred over the 5 min exposure. Catch stride kinematics remained similar to control at all times, suggesting no feedforward adaptation. As a control, force fields assisting plantarflexion (-3.5 to -9.5 Nm) were applied and increased ankle plantarflexion during push-off, confirming that the lack of kinematic changes during FF50% catch strides was not simply due to a large ankle impedance.
Conclusion: Together, these results show that ankle exoskeletons such as the EHO can be used to study phase-specific adaptive

  16. An Example of a Hakomi Technique Adapted for Functional Analytic Psychotherapy

    Science.gov (United States)

    Collis, Peter

    2012-01-01

    Functional Analytic Psychotherapy (FAP) is a model of therapy that lends itself to integration with other therapy models. This paper aims to provide an example to assist others in assimilating techniques from other forms of therapy into FAP. A technique from the Hakomi Method is outlined and modified for FAP. As, on the whole, psychotherapy…

  17. An Optimized Technique of Increasing the Performance of Network Adapter on EML Layer

    Directory of Open Access Journals (Sweden)

    Prashanth L

    2012-08-01

    Full Text Available The Simple Network Adapter, which acts as an interface between the transaction server and the network elements, initially communicates over the channel through TCP PDUs. The present disadvantages of the TCP PDU approach are managing channel contention and reserving channel bandwidth. Moreover, certain features and versions of the network elements communicate by receiving XML over a socket. Since it is not possible to change the entire framework, the framework is updated so that an XML Over Socket (XOS) formation is supported. The XOS implementation is performed in the Java language, running mainly on the JVM, so that deployment of machines becomes easier and a good communication bridge is formed between them. The simple network adapter being developed should support the operations of the north-bound server and provide an established, authorized, secured and reliable portal. The interface being developed should provide good performance in meeting network demands and in the conversions of the respective objects.

  18. ADAPTIVE RECONSTRUCTION TECHNIQUE FOR THE LOST INFORMATION OF THE RECTANGULAR IMAGE AREA

    Institute of Scientific and Technical Information of China (English)

    Shi Rong; Li Xiaofeng; Li Zaiming

    2004-01-01

    Adaptive reconstruction of the lost information of a rectangular image area is very important for the robust transmission and restoration of images. In this paper, a new reconstruction method based on the Discrete Cosine Transform (DCT) domain is put forward. According to the low-pass character of the human visual system and the energy distribution of the DCT coefficients on the rectangular boundary, the DCT coefficients of the rectangular image area are adaptively selected and recovered. After the Inverse Discrete Cosine Transform (IDCT), the lost information of the rectangular image area can be reconstructed. The experiments have demonstrated that the subjective and objective qualities of the reconstructed images are greatly enhanced.
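    The low-pass DCT idea the abstract relies on can be sketched: keep only the low-frequency block of an orthonormal 2-D DCT and invert. The paper's adaptive coefficient selection and boundary-energy analysis are not reproduced here; a fixed square of retained coefficients stands in for them.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix (rows indexed by frequency)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)  # DC row has its own normalization
    return m

def lowpass_reconstruct(block, keep):
    """Reconstruct a square block from only its `keep` x `keep`
    low-frequency DCT coefficients (the low-pass assumption)."""
    n = block.shape[0]
    d = dct_matrix(n)
    coeffs = d @ block @ d.T            # forward 2-D DCT
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0            # retain low frequencies only
    return d.T @ (coeffs * mask) @ d    # inverse 2-D DCT
```

For smooth image content the retained low-frequency coefficients carry almost all of the energy, which is why a lost rectangular area can be approximated from a few recovered coefficients.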

  19. Fielding the magnetically applied pressure-shear technique on the Z accelerator (completion report for MRT 4519).

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, C. Scott; Haill, Thomas A.; Dalton, Devon Gardner; Rovang, Dean Curtis; Lamppa, Derek C.

    2013-09-01

    The recently developed Magnetically Applied Pressure-Shear (MAPS) experimental technique to measure material shear strength at high pressures on magneto-hydrodynamic (MHD) drive pulsed power platforms was fielded on August 16, 2013 on shot Z2544 utilizing hardware set A0283A. Several technical and engineering challenges were overcome in the process leading to the attempt to measure the dynamic strength of NNSA Ta at 50 GPa. The MAPS technique relies on the ability to apply an external magnetic field properly aligned and time correlated with the MHD pulse. The load design had to be modified to accommodate the external field coils and additional support was required to manage stresses from the pulsed magnets. Further, this represents the first time transverse velocity interferometry has been applied to diagnose a shot at Z. All subsystems performed well with only minor issues related to the new feed design which can be easily addressed by modifying the current pulse shape. Despite the success of each new component, the experiment failed to measure strength in the samples due to spallation failure, most likely in the diamond anvils. To address this issue, hydrocode simulations are being used to evaluate a modified design using LiF windows to minimize tension in the diamond and prevent spall. Another option to eliminate the diamond material from the experiment is also being investigated.

  20. Feature-Based Adaptive Tolerance Tree (FATT): An Efficient Indexing Technique for Content-Based Image Retrieval Using Wavelet Transform

    CERN Document Server

    AnandhaKumar, Dr P

    2010-01-01

    This paper introduces a novel indexing and access method, called the Feature-Based Adaptive Tolerance Tree (FATT), which uses the wavelet transform to organize large image data sets efficiently and to support popular image access mechanisms such as Content-Based Image Retrieval (CBIR). Conventional database systems are designed for managing textual and numerical data, and retrieval of such data is often based on simple comparisons of text or numerical values. This approach is no longer adequate for images, since the digital representation of an image does not convey its content. Retrieval of images becomes difficult when the database is very large. This paper addresses these problems and presents a novel indexing technique, the Feature-Based Adaptive Tolerance Tree (FATT), designed to provide an effective solution especially for indexing large databases. The proposed indexing scheme is then used along with query by image content, in order to achieve the ultimate goal from the user's point of view that ...
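
    The abstract does not specify how FATT derives its wavelet features, so the following is only a hypothetical illustration of the usual starting point in wavelet-based CBIR: a one-level 2-D Haar decomposition whose subband energies form a short signature that a tolerance-based tree index could compare.

```python
import numpy as np

def haar2d(img):
    # One level of the 2-D Haar wavelet transform: returns the
    # approximation (LL) and detail (LH, HL, HH) subbands.
    a = (img[::2] + img[1::2]) / 2.0      # low-pass along rows
    d = (img[::2] - img[1::2]) / 2.0      # high-pass along rows
    ll = (a[:, ::2] + a[:, 1::2]) / 2.0
    lh = (a[:, ::2] - a[:, 1::2]) / 2.0
    hl = (d[:, ::2] + d[:, 1::2]) / 2.0
    hh = (d[:, ::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def feature_vector(img):
    # Hypothetical index key: mean absolute subband values give a
    # 4-element signature insensitive to small image perturbations.
    return np.array([np.mean(np.abs(s)) for s in haar2d(img)])

feats = feature_vector(np.ones((8, 8)))  # flat image: detail subbands vanish
```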

  1. A New Optimized Data Clustering Technique using Cellular Automata and Adaptive Central Force Optimization (ACFO)

    Directory of Open Access Journals (Sweden)

    G. Srinivasa Rao

    2015-06-01

    Full Text Available As clustering techniques are gaining importance today, we propose a new clustering technique based on ACFO and cellular automata. Cellular automata uniquely characterize the state of a cell at a given moment using data such as the states of a reference cell and its adjoining cells, the total number of cells, the constraints, the transition function, and the neighbourhood calculation. To determine the state of each cell, morphological functions are executed on the image. In accordance with the four stages of the morphological process, the rural and urban areas are grouped separately. To avoid stochastic disturbances, the threshold is optimized by means of ACFO. The test outcomes demonstrate the excellent performance of the proposed technique. Its performance is further assessed on an additional set of images and compared with traditional methods such as CFO (Central Force Optimization) and PSO (Particle Swarm Optimization).

  2. Scrap Cans Assayed in 55-Gallon Drums by Adapted Q2 Technique

    Energy Technology Data Exchange (ETDEWEB)

    Salaymeh, S.R.

    2001-07-24

    This report describes an alternate assay technique developed to perform batch nondestructive assay (NDA) of ten scrap cans at a time using the Q2. It also discusses and compares the results for one batch of ten scrap cans assayed with this batch technique against the results of assaying the same cans individually at the 324-M assay station.

  3. Pattern-recognition techniques applied to performance monitoring of the DSS 13 34-meter antenna control assembly

    Science.gov (United States)

    Mellstrom, J. A.; Smyth, P.

    1991-01-01

    The results of applying pattern recognition techniques to diagnose fault conditions in the pointing system of one of the Deep Space Network's large antennas, the DSS 13 34-meter structure, are discussed. A previous article described an experiment in which a neural network technique was used to identify fault classes using data obtained from a simulation model of the Deep Space Network (DSN) 70-meter antenna system. Described here is the extension of these classification techniques to the analysis of real data from the field. The general architecture and philosophy of an autonomous monitoring paradigm are described, and classification results are discussed and analyzed in this context. Key features of this approach include a probabilistic time-varying context model, the effective integration of signal processing and system identification techniques with pattern recognition algorithms, and the ability to calibrate the system given limited amounts of training data. Reported here are recognition accuracies in the 97 to 98 percent range for the particular fault classes included in the experiments.

  4. A novel technique of in situ phase-shift interferometry applied for faint dissolution of bulky montmorillonite in alkaline solution

    International Nuclear Information System (INIS)

    The effect of alkaline pH on the dissolution rate of bulky aggregated montmorillonite samples at 23°C was investigated for the first time by using an enhanced phase-shift interferometry technique combined with an internal refraction interferometry method developed for this study. This technique was applied to provide a molecular resolution during the optical observation of the dissolution phenomena in real time and in situ while remaining noninvasive. The theoretical normal resolution limit of this technique was 0.78 nm in water for opaque material, but was limited to 6.6 nm for montmorillonite due to the transparency of the montmorillonite crystal. Normal dissolution velocities as low as 1 × 10⁻⁴ to 1 × 10⁻³ nm/s were obtained directly by using the measured temporal change in height of montmorillonite samples set in a reaction cell. The molar dissolution fluxes of montmorillonite obtained in this study gave considerably faster dissolution rates in comparison to those obtained in previous investigations by solution analysis methods. The pH dependence of the montmorillonite dissolution rate determined in this study was qualitatively in good agreement with those reported in previous investigations. The dissolution rates to be used in safety assessments of geological repositories for radioactive wastes should be obtained for bulky samples. This goal has been difficult to achieve using conventional powder experiment techniques and solution analysis methods, but has been shown to be feasible using the enhanced phase-shift interferometry. (author)

  5. Periodontal surgical techniques applied to implantology (Técnicas quirúrgicas periodontales aplicadas a la implantología)

    Directory of Open Access Journals (Sweden)

    L Mateos

    2003-08-01

    Full Text Available The morphological and functional similarity between peri-implant and periodontal tissues has made it possible to adapt techniques in routine periodontal use to the field of implantology. Correct management of the peri-implant tissues, aimed at improving the peri-implant environment both for esthetic purposes and to facilitate proper maintenance, is nowadays common practice in implant therapy. The aim of this article is to review the literature on these concepts and on the various surgical techniques employed in periodontal therapy that have been applied to implantology.

  6. Low Bit-Rate Image Compression using Adaptive Down-Sampling technique

    OpenAIRE

    V.Swathi; Prof. K ASHOK BABU

    2011-01-01

    In this paper, we take a practical approach of uniform down-sampling in image space while making the sampling adaptive through spatially varying, directional low-pass pre-filtering. The resulting down-sampled, pre-filtered image remains a conventional square sample grid and can thus be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then up-converts it to the original resolut...

  7. Adaptive Analog-to-Digital Conversion and pre-correlation Interference Mitigation Techniques in a GNSS receiver

    OpenAIRE

    Lotz, Thorsten

    2008-01-01

    The objective of this diploma thesis was the development of pre-correlation interference mitigation techniques for a GNSS receiver. Since the developed algorithms shall be implemented in the real-world DLR Galileo receiver, some pre-defined parameters were given that respect the specifics of the hardware on which these algorithms will run. An ideal 20 dB AGC and an ideal 14-bit ADC were available for the adaptive A/D conversion, and gain steering should be performed on the ADC o...

  8. Low Bit-Rate Image Compression using Adaptive Down-Sampling technique

    Directory of Open Access Journals (Sweden)

    V.Swathi

    2011-09-01

    Full Text Available In this paper, we take a practical approach of uniform down-sampling in image space while making the sampling adaptive through spatially varying, directional low-pass pre-filtering. The resulting down-sampled, pre-filtered image remains a conventional square sample grid and can thus be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then up-converts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of the directional low-pass pre-filtering. The proposed compression approach of collaborative adaptive down-sampling and up-conversion (CADU) outperforms JPEG 2000 in PSNR measure at low to medium bit rates and achieves superior visual quality as well. The superior low bit-rate performance of the CADU approach suggests that over-sampling not only wastes hardware resources and energy but can also be counterproductive to image quality under a tight bit budget.
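
    The encoder half of the idea can be sketched with a deliberately simplified stand-in for the paper's pre-filter: at each pixel, smooth with a 3-tap kernel along whichever axis has the weaker gradient (so edges are blurred as little as possible), then down-sample uniformly by 2. The actual CADU pre-filter and the constrained least squares up-conversion are more elaborate than this.

```python
import numpy as np

def directional_prefilter(img):
    # Spatially varying, directional low-pass pre-filtering (simplified):
    # smooth along the direction of the weaker gradient at each pixel.
    p = np.pad(img, 1, mode="edge")
    gx = p[1:-1, 2:] - p[1:-1, :-2]                       # horizontal gradient
    gy = p[2:, 1:-1] - p[:-2, 1:-1]                       # vertical gradient
    horiz = (p[1:-1, :-2] + 2 * img + p[1:-1, 2:]) / 4.0  # smooth along x
    vert = (p[:-2, 1:-1] + 2 * img + p[2:, 1:-1]) / 4.0   # smooth along y
    return np.where(np.abs(gx) < np.abs(gy), horiz, vert)

def downsample(img):
    # Uniform 2x down-sampling: the result is still a square sample grid,
    # so any standard codec can compress it unchanged.
    return directional_prefilter(img)[::2, ::2]

small = downsample(np.ones((8, 8)))  # flat input stays flat at half size
```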

  9. Adaptive critic learning techniques for engine torque and air-fuel ratio control.

    Science.gov (United States)

    Liu, Derong; Javaherian, Hossein; Kovalenko, Olesia; Huang, Ting

    2008-08-01

    A new approach for engine calibration and control is proposed. In this paper, we present our research results on the implementation of adaptive critic designs for self-learning control of automotive engines. A class of adaptive critic designs that can be classified as (model-free) action-dependent heuristic dynamic programming is used in this research project. The goals of the present learning control design for automotive engines include improved performance, reduced emissions, and maintained optimum performance under various operating conditions. Using the data from a test vehicle with a V8 engine, we developed a neural network model of the engine and neural network controllers based on the idea of approximate dynamic programming to achieve optimal control. We have developed and simulated self-learning neural network controllers for both engine torque (TRQ) and exhaust air-fuel ratio (AFR) control. The goal of TRQ control and AFR control is to track the commanded values. For both control problems, excellent neural network controller transient performance has been achieved.

  10. An adaptive threshold based image processing technique for improved glaucoma detection and classification.

    Science.gov (United States)

    Issac, Ashish; Partha Sarathi, M; Dutta, Malay Kishore

    2015-11-01

    Glaucoma is an optic neuropathy and one of the main causes of permanent blindness worldwide. This paper presents an automatic image-processing-based method for the detection of glaucoma from digital fundus images. In the proposed work, the discriminatory parameters of glaucoma infection, such as the cup-to-disc ratio (CDR), the neuro-retinal rim (NRR) area, and the blood vessels in different regions of the optic disc, are used as features and fed as inputs to learning algorithms for glaucoma diagnosis. These features, which show discriminatory changes with the occurrence of glaucoma, are strategically used to train the classifiers and improve the accuracy of identification. The segmentation of the optic disc and cup is based on an adaptive threshold of the pixel intensities lying in the optic nerve head region. Unlike existing methods, the proposed algorithm uses an adaptive threshold that draws on local features from the fundus image for the segmentation of the optic cup and optic disc, making it invariant to image quality and noise content, which may lead to wider acceptability. The experimental results indicate that such features are more significant than the statistical or textural features considered in existing works. The proposed work achieves an accuracy of 94.11% with a sensitivity of 100%. A comparison of the proposed work with existing methods indicates that the proposed approach has improved accuracy in classifying glaucoma from a digital fundus image, which may be considered clinically significant. PMID:26321351
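
    The abstract does not give the paper's exact threshold rule, so here is a generic sketch of the adaptive-threshold idea it builds on: each pixel is compared against the mean of its local neighbourhood (window size and offset are assumed parameters), so the cut-off follows local intensity instead of a single global value.

```python
import numpy as np

def adaptive_threshold(img, win=15, offset=0.0):
    # Local adaptive threshold: compare each pixel with the mean of its
    # win x win neighbourhood, computed quickly via an integral image.
    h, w = img.shape
    ii = np.pad(np.cumsum(np.cumsum(img, 0), 1), ((1, 0), (1, 0)))
    r = win // 2
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        for x in range(w):
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            total = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            mean = total / ((y1 - y0) * (x1 - x0))
            out[y, x] = img[y, x] > mean + offset
    return out

img = np.zeros((10, 10))
img[:, 5:] = 1.0                 # bright region on a dark background
mask = adaptive_threshold(img)   # segments the bright region
```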

  11. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    Science.gov (United States)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results than those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which is similar in form to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  12. Base isolation technique for tokamak type fusion reactor using adaptive control

    International Nuclear Information System (INIS)

    In this paper, concerning the isolation device of heavy structures such as nuclear fusion reactors, a control rule for simultaneously reducing the response acceleration and the relative displacement was formulated, and the aseismic performance was improved by employing an adaptive control method that changes the damping factors of the system adaptively at every moment. The control rule was studied by computer simulation, and the aseismic effect was evaluated in an experiment employing a scale model. As a result, the following conclusions were obtained. (1) By employing the control rule presented in this paper, both the absolute acceleration and the relative displacement can be reduced simultaneously without making the system unstable. (2) By introducing this control rule in a scale model assuming the Tokamak type fusion reactor, the response acceleration can be suppressed to 78% and the relative displacement to 79% of the values obtained with the conventional aseismic method. (3) The sensitivities of the absolute acceleration and the relative displacement with respect to the control gain are not equal. However, by employing a relative weighting factor between the absolute acceleration and the relative displacement, it is possible to increase the control capability for any kind of objective structures and appliances. (author)

  13. Applying a nonlinear, pitch-catch, ultrasonic technique for the detection of kissing bonds in friction stir welds.

    Science.gov (United States)

    Delrue, Steven; Tabatabaeipour, Morteza; Hettler, Jan; Van Den Abeele, Koen

    2016-05-01

    Friction stir welding (FSW) is a promising technology for the joining of aluminum alloys and other metallic admixtures that are hard to weld by conventional fusion welding. Although FSW generally provides better fatigue properties than traditional fusion welding methods, fatigue properties are still significantly lower than for the base material. Apart from voids, kissing bonds, for instance in the form of closed cracks propagating along the interface of the stirred and heat-affected zones, are inherent features of the weld and can be considered one of the main causes of the reduced fatigue life of FSW joints in comparison to the base material. The main problem with kissing bond defects in FSW is that they are currently very difficult to detect using existing NDT methods. Besides, in most cases, the defects are not directly accessible from the exposed surface. Therefore, new techniques capable of detecting small kissing bond flaws need to be introduced. In the present paper, a novel and practical approach is introduced based on a nonlinear, single-sided, ultrasonic technique. The proposed inspection technique uses two single-element transducers, with the first transducer transmitting an ultrasonic signal that focuses the ultrasonic waves at the bottom side of the sample, where cracks are most likely to occur. The large amount of energy at the focus activates the kissing bond, resulting in the generation of nonlinear features in the wave propagation. These nonlinear features are then captured by the second transducer operating in pitch-catch mode, and are analyzed, using pulse inversion, to reveal the presence of a defect. The performance of the proposed nonlinear, pitch-catch technique is first illustrated using a numerical study of an aluminum sample containing simple, vertically oriented, incipient cracks. Later, the proposed technique is also applied experimentally on a real-life friction stir welded butt joint containing a kissing bond flaw. PMID:26921559
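
    The pulse-inversion analysis step mentioned above can be illustrated with a toy 1-D model: summing the responses to a pulse and its inverted copy cancels the linear (odd) part and leaves only even-order nonlinear components, such as those generated by a contact-type defect. The gain and nonlinearity coefficients below are illustrative assumptions, not values from the paper.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
pulse = np.sin(2 * np.pi * 5 * t)     # transmitted tone burst

def pulse_inversion(response, excitation):
    # Sum the received signals for a pulse and its inverted copy:
    # linear components cancel, even-order nonlinearities add up.
    return response(excitation) + response(-excitation)

linear = lambda x: 0.8 * x                    # intact, defect-free path
nonlinear = lambda x: 0.8 * x + 0.05 * x**2   # kissing-bond contact nonlinearity

residual = pulse_inversion(nonlinear, pulse)  # only the quadratic term survives
```

    A defect-free (purely linear) path therefore sums to zero, while a kissing bond leaves a second-harmonic residual that flags its presence.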

  15. Occlusion Culling Algorithm Using Prefetching and Adaptive Level of Detail Technique

    Institute of Scientific and Technical Information of China (English)

    ZHENG Fu-ren; ZHAN Shou-yi; Yang Bing

    2006-01-01

    A novel approach that integrates occlusion culling within the view-dependent rendering framework is proposed. The algorithm uses the prioritized-layered projection (PLP) algorithm to cull occluded objects, and uses an approximate visibility technique to accurately and efficiently determine which objects will become visible in the near future and prefetch those objects from disk before they are rendered. The view-dependent rendering technique provides the ability to change the level of detail over the surface seamlessly and smoothly in real time, according to cell solidity values.

  16. Multivariate class modeling techniques applied to multielement analysis for the verification of the geographical origin of chili pepper.

    Science.gov (United States)

    Naccarato, Attilio; Furia, Emilia; Sindona, Giovanni; Tagarelli, Antonio

    2016-09-01

    Four class-modeling techniques (soft independent modeling of class analogy (SIMCA), unequal dispersed classes (UNEQ), potential functions (PF), and multivariate range modeling (MRM)) were applied to multielement distributions to build chemometric models able to authenticate chili pepper samples grown in Calabria with respect to those grown outside Calabria. The multivariate techniques were applied considering both all the variables (32 elements: Al, As, Ba, Ca, Cd, Ce, Co, Cr, Cs, Cu, Dy, Fe, Ga, La, Li, Mg, Mn, Na, Nd, Ni, Pb, Pr, Rb, Sc, Se, Sr, Tl, Tm, V, Y, Yb, Zn) and the variables selected by means of stepwise linear discriminant analysis (S-LDA). In the first case, satisfactory and comparable results in terms of CV efficiency were obtained with SIMCA and MRM (82.3% and 83.2%, respectively), whereas MRM performed better than SIMCA in terms of forced model efficiency (96.5%). The selection of variables by S-LDA made it possible to build models characterized, in general, by a higher efficiency. MRM again provided the best results for CV efficiency (87.7%, with an effective balance of sensitivity and specificity) as well as forced model efficiency (96.5%). PMID:27041319
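
    The core idea behind SIMCA-style class modeling can be sketched as follows: fit a PCA model to the target class only, then accept or reject new samples by their residual distance to that class subspace. This is a minimal sketch with toy data and an assumed component count; UNEQ, PF, and MRM, as well as the critical-limit statistics of full SIMCA, are not shown.

```python
import numpy as np

def fit_simca(X, n_comp=2):
    # SIMCA-style class model (simplified): centre the training data of
    # the target class and keep the top principal components.
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_comp]

def residual_distance(model, x):
    # Distance from a sample to the class PCA subspace; small residuals
    # mean the sample is consistent with the modeled class.
    mean, comps = model
    xc = x - mean
    return np.linalg.norm(xc - comps.T @ (comps @ xc))

# Toy "authentic" class: 5 measured variables, variation confined to a plane.
rng = np.random.default_rng(0)
X = np.zeros((30, 5))
X[:, :2] = rng.normal(size=(30, 2))
X += 5.0

model = fit_simca(X)
in_dist = residual_distance(model, np.array([5.5, 4.5, 5.0, 5.0, 5.0]))
out_dist = residual_distance(model, np.array([5.0, 5.0, 8.0, 5.0, 5.0]))
```

    In practice the acceptance threshold on the residual is set from the training residual distribution, which is what yields the sensitivity/specificity balance discussed in the abstract.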

  17. 2D and 3D optical diagnostic techniques applied to Madonna dei Fusi by Leonardo da Vinci

    Science.gov (United States)

    Fontana, R.; Gambino, M. C.; Greco, M.; Marras, L.; Materazzi, M.; Pampaloni, E.; Pelagotti, A.; Pezzati, L.; Poggi, P.; Sanapo, C.

    2005-06-01

    3D measurement and modelling have traditionally been applied to statues, buildings, archeological sites, and similar large structures, but rarely to paintings. Recently, however, 3D measurements have also been performed successfully on easel paintings, making it possible to detect and document the painting's surface. We used 3D models to integrate the results of various 2D imaging techniques in a common reference frame. These applications show how 3D shape information, complemented with 2D colour maps as well as other types of sensory data, provides the most interesting information. The 3D data acquisition was carried out by means of two devices: a high-resolution laser micro-profilometer, composed of a commercial distance meter mounted on a scanning device, and a laser-line scanner. The 2D data acquisitions were carried out using a scanning device for simultaneous RGB colour imaging and IR reflectography, and a UV fluorescence multispectral image acquisition system. We present here the results of the techniques described, applied to the analysis of an important painting of the Italian Renaissance: `Madonna dei Fusi', attributed to Leonardo da Vinci.

  19. Path Integral Molecular Dynamics within the Grand Canonical-like Adaptive Resolution Technique: Quantum-Classical Simulation of Liquid Water

    CERN Document Server

    Agarwal, Animesh

    2015-01-01

    Quantum effects due to the spatial delocalization of light atoms are treated in molecular simulation via the path integral technique. Among several methods, Path Integral (PI) Molecular Dynamics (MD) is nowadays a powerful tool to investigate properties induced by the spatial delocalization of atoms; however, this technique is computationally very demanding. This limitation restricts PIMD applications to relatively small systems and short time scales. One possible solution to overcome the size and time limitations is to introduce PIMD algorithms into the Adaptive Resolution Simulation Scheme (AdResS). AdResS requires a relatively small region treated at the path integral level and embeds it into a large molecular reservoir consisting of generic spherical coarse-grained molecules. It was previously shown that the realization of the idea above, at a simple level, produced reasonable results for toy systems or simple/test systems like liquid parahydrogen. Encouraged by previous results, in this ...

  20. CLUSTERING BASED ADAPTIVE IMAGE COMPRESSION SCHEME USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    M.Mohamed Ismail,

    2010-10-01

    Full Text Available This paper presents an image compression scheme that uses the particle swarm optimization (PSO) technique for clustering. PSO is a powerful general-purpose optimization technique that uses the concept of fitness. It provides a mechanism by which individuals in the swarm communicate and exchange information, similar to the social behaviour of insects and human beings. Because it mimics this social sharing of information, PSO directs particles to search the solution space more efficiently. PSO resembles a genetic algorithm (GA) in that the population is initialized with random potential solutions, and the adjustment towards the best individual experience (PBEST) and the best social experience (GBEST) is conceptually similar to the crossover operation of a GA. Unlike a GA, however, each potential solution, called a particle, flies through the solution space with a velocity. Moreover, the particles and the swarm have memory, which does not exist in the population of a GA. This optimization technique is used in image compression, and better results have been obtained in terms of PSNR, compression ratio (CR), and the visual quality of the image when compared to other existing methods.
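
    The PBEST/GBEST mechanics described above can be sketched with canonical PSO. This minimizes a simple sphere function rather than the paper's compression objective; the inertia and acceleration coefficients are common textbook choices, not values from the paper.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Canonical PSO: each particle keeps its personal best (PBEST) and the
    # swarm shares a global best (GBEST); velocities blend inertia with
    # attraction toward both, mimicking social information sharing.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

best, val = pso(lambda p: np.sum(p ** 2), dim=3)  # converges near the origin
```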

  1. Problems on holographic imaging technique and adapt lasers for bubble chambers

    International Nuclear Information System (INIS)

    Different types of holographic recording technique for bubble chambers are presented and compared. The influence of turbulence on resolution is discussed as well as the demand on laser equipment. Experiments on a test model of HOLEBC using a pulsed ruby laser are also presented. (orig.)

  2. Problems on holographic imaging technique and adapt lasers for bubble chambers

    CERN Document Server

    Bjelkhagen, H I

    1982-01-01

    Different types of holographic recording technique for bubble chambers are presented and compared. The influence of turbulence on resolution is discussed as well as the demand on laser equipment. Experiments on a test model of HOLEBC using a pulsed ruby laser are also presented.

  3. Design of a Stability Augmentation System for an Unmanned Helicopter Based on Adaptive Control Techniques

    Directory of Open Access Journals (Sweden)

    Shouzhao Sheng

    2015-09-01

    Full Text Available The task of control of unmanned helicopters is rather complicated in the presence of parametric uncertainties and measurement noises. This paper presents an adaptive model feedback control algorithm for an unmanned helicopter stability augmentation system. The proposed algorithm can achieve a guaranteed model reference tracking performance and speed up the convergence rates of adjustable parameters, even when the plant parameters vary rapidly. Moreover, the model feedback strategy in the algorithm further contributes to the improvement in the control quality of the stability augmentation system in the case of low signal to noise ratios, mainly because the model feedback path is noise free. The effectiveness and superiority of the proposed algorithm are demonstrated through a series of tests.
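
    The model-reference tracking idea behind such a stability augmentation system can be sketched on a scalar first-order plant. This is a generic model-reference adaptive scheme, not the paper's model feedback algorithm; the plant, reference model, and adaptation gain below are illustrative assumptions.

```python
import numpy as np

dt, gamma = 0.001, 2.0
a, b = 1.0, 2.0      # "unknown" plant:   dx/dt  = -a*x   + b*u
am, bm = 4.0, 4.0    # reference model:   dxm/dt = -am*xm + bm*r
x = xm = th1 = th2 = 0.0
errors = []
for k in range(100000):                          # 100 s of simulated time
    r = 1.0 if (k * dt) % 4.0 < 2.0 else -1.0    # square-wave command
    u = th1 * r - th2 * x                        # adjustable control law
    e = x - xm                                   # model-tracking error
    th1 += dt * (-gamma * e * r)                 # gradient-style updates
    th2 += dt * (gamma * e * x)
    x += dt * (-a * x + b * u)                   # forward-Euler integration
    xm += dt * (-am * xm + bm * r)
    errors.append(abs(e))
```

    As the adjustable gains converge, the closed-loop plant imitates the reference model and the tracking error shrinks, which is the guaranteed model-reference tracking behaviour the abstract refers to.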

  4. Adaptive Double Threshold with Multiple Energy Detection Technique in Cognitive Radio

    Directory of Open Access Journals (Sweden)

    J. Avila

    2015-08-01

    Full Text Available A Cognitive Radio (CR) network is a system that helps alleviate spectrum scarcity. One of the processes by which a CR senses the spectrum is the energy detection method with a fixed single threshold. When the energy level falls below the threshold, a secondary user is permitted to use the spectrum of the primary user. Simulation results show that using two threshold levels, i.e., a double threshold, improves performance with respect to one of the major goals of CR: reducing conflict between the primary and secondary users. For enhanced performance under noisy conditions, dynamic allocation, i.e., an adaptive threshold, is employed together with the two threshold levels. The system is further improved by the use of multiple energy detectors at the receiving end.
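
    The double-threshold decision rule can be sketched as follows. The threshold placement and `margin` parameter are illustrative assumptions: the point is only that the two levels track the estimated noise floor instead of staying fixed, and that the region between them is treated as ambiguous.

```python
import numpy as np

def adaptive_thresholds(noise_power, margin=1.5):
    # Adaptive double threshold: both levels scale with the estimated
    # noise floor, which helps when the noise level varies.
    return noise_power * 2.0 / margin, noise_power * 2.0 * margin

def sense(samples, lo, hi):
    # Double-threshold energy detection: below `lo` declare the band
    # free, above `hi` declare it occupied; the ambiguous region in
    # between is deferred to further sensing or cooperative fusion.
    energy = np.mean(np.abs(samples) ** 2)
    if energy < lo:
        return "free"
    if energy > hi:
        return "occupied"
    return "uncertain"

lo, hi = adaptive_thresholds(noise_power=1.0)
```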

  5. Applied research on air pollution using nuclear-related analytical techniques. Report on the second research co-ordination meeting

    International Nuclear Information System (INIS)

    A co-ordinated research programme (CRP) on applied research on air pollution using nuclear-related techniques is a global CRP which started in 1992, and is scheduled to run until early 1997. The purpose of this CRP is to promote the use of nuclear analytical techniques in air pollution studies, e.g. NAA, XRF, and PIXE for the analysis of toxic and other trace elements in air particulate matter. The main purposes of the core programme are i) to support the use of nuclear and nuclear-related analytical techniques for research and monitoring studies on air pollution, ii) to identify major sources of air pollution affecting each of the participating countries with particular reference to toxic heavy metals, and iii) to obtain comparative data on pollution levels in areas of high pollution (e.g. a city centre or a populated area downwind of a large pollution source) and low pollution (e.g. rural area). This document reports the discussions held during the second Research Co-ordination Meeting (RCM) for the CRP which took place at ANSTO in Menai, Australia. (author)

  6. Adaptive, multi-domain techniques for two-phase flow computations

    Science.gov (United States)

    Uzgoren, Eray

    Computations of immiscible two-phase flows deal with interfaces that may move and/or deform in response to the dynamics within the flow field. As interfaces move, one needs to compute the new shapes and the associated geometric information (such as curvatures, normals, and projected areas/volumes) as part of the solution. The present study employs the immersed boundary method (IBM), which uses marker points to track the interface location, and continuous interface methods to model interfacial conditions. The large jumps in transport properties across the interface, together with the mechanisms of convection, diffusion, pressure, body force, and surface tension, create multiple time/length scales. The resulting computational stiffness and moving boundaries make numerical simulations computationally expensive in three dimensions, even when the computations are performed on adaptively refined 3D Cartesian grids that efficiently resolve the length scales. A domain decomposition method and a partitioning strategy for adaptively refined grids are developed to enable parallel computing capabilities. Specifically, the approach consists of a multilevel additive Schwarz method for domain decomposition and Hilbert space-filling curve ordering for partitioning. Issues related to load balancing, communication and computation, and the convergence rate of the iterative solver with respect to grid size, the number of sub-domains, and interface shape deformation are studied. Moreover, the interfacial representation using marker points is extended to model complex solid geometries for single and two-phase flows. The developed model is validated using a benchmark test case, flow over a cylinder. Furthermore, the overall algorithm is employed to further investigate the steady and unsteady behavior of the liquid plug problem. Finally, the capability of handling two-phase flow simulations in complex solid geometries is demonstrated by studying the effect of the bifurcation point on the liquid plug, which
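A minimal sketch of the Hilbert space-filling curve ordering used for partitioning, assuming a power-of-two 2D grid (the classic bit-manipulation formulation; the study's actual grids are adaptively refined and three-dimensional):

```python
def xy2d(n, x, y):
    """Map cell (x, y) of an n x n grid (n a power of two) to its index
    along the Hilbert curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                        # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def partition(n, num_parts):
    """Order all cells by Hilbert index and cut the sequence into
    `num_parts` contiguous chunks: balanced and spatially compact,
    which is exactly why the curve is attractive for load balancing."""
    cells = sorted(((x, y) for x in range(n) for y in range(n)),
                   key=lambda c: xy2d(n, *c))
    size = len(cells) // num_parts
    return [cells[i * size:(i + 1) * size] for i in range(num_parts)]

parts = partition(8, 4)
```

Because consecutive cells along the curve are spatial neighbors, each contiguous chunk stays geometrically clustered, keeping inter-subdomain communication low.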

  7. Adaptive proactive reconfiguration: a technique for process variability and aging aware SRAM cache design

    OpenAIRE

    Pouyan, Peyman; Amat Bertran, Esteve; Rubio Sola, Jose Antonio

    2014-01-01

    Nanoscale circuits are subject to a wide range of new limiting phenomena making essential to investigate new design strategies at the circuit and architecture level to improve its performance and reliability. Proactive reconfiguration is an emerging technique oriented to extend the system lifetime of memories affected by aging. In this brief, we present a new approach for static random access memory (SRAM) design that extends the cache lifetime when considering process variation and aging in ...

  8. Adaptive Techniques for Minimizing Middleware Memory Footprint for Distributed, Real-Time, Embedded Systems

    OpenAIRE

    Panahi, Mark; Harmon, Trevor; Klefstad, Raymond

    2003-01-01

    In order for middleware to be widely useful for distributed, real-time, and embedded systems, it should provide a full set of services and be easily customizable to meet the memory footprint limitations of embedded systems. In this paper, we examine a variety of techniques used to reduce memory footprint in middleware. We found that combining aspect-oriented programming with code shrinkers and obfuscators reduces the memory footprint of CORBA middleware to

  9. Different perceptions of adaptation to climate change: a mental model approach applied to the evidence from expert interviews

    NARCIS (Netherlands)

    Otto-Banaszak, I.; Matczak, P.; Wesseler, J.H.H.; Wechsung, F.

    2011-01-01

    We argue that differences in the perception and governance of adaptation to climate change and extreme weather events are related to sets of beliefs and concepts through which people understand the environment and which are used to solve the problems they face (mental models). Using data gathered in

  10. A framework for automated contour quality assurance in radiation therapy including adaptive techniques

    International Nuclear Information System (INIS)

    Contouring of targets and normal tissues is one of the largest sources of variability in radiation therapy treatment plans. Contours thus require a time-intensive and error-prone quality assurance (QA) evaluation, limitations which also impair the facilitation of adaptive radiotherapy (ART). Here, an automated system for contour QA is developed using historical data (the ‘knowledge base’). A pilot study was performed with a knowledge base derived from 9 contours each from 29 head-and-neck treatment plans. Size, shape, relative position, and other clinically relevant metrics and heuristically derived rules are determined. Metrics are extracted from input patient data and compared against rules determined from the knowledge base; a computer-learning component allows the metrics to evolve with more input data, including patient-specific data for ART. Nine additional plans containing 42 unique contouring errors were analyzed; 40/42 errors were detected, along with 9 false positives. The results of this study imply that knowledge-based contour QA could potentially enhance the safety and effectiveness of RT treatment plans as well as increase the efficiency of the treatment planning process, reducing labor and the cost of therapy for patients. (paper)
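The compare-metrics-against-knowledge-base idea can be illustrated with a toy z-score rule; the structure name, metric, volumes, and the 3-sigma cutoff below are invented for illustration, not taken from the paper:

```python
import statistics

class ContourQA:
    """Toy knowledge-based contour check.

    Each structure keeps a history of metric values; a new contour is
    flagged when any metric deviates by more than `z_max` standard
    deviations from the historical mean.  Accepted cases are added back
    to the history, so the rules evolve with the knowledge base, echoing
    the paper's computer-learning component.
    """
    def __init__(self, z_max=3.0):
        self.history = {}   # structure -> metric name -> list of values
        self.z_max = z_max

    def learn(self, structure, metrics):
        for name, value in metrics.items():
            self.history.setdefault(structure, {}).setdefault(name, []).append(value)

    def check(self, structure, metrics):
        flags = []
        for name, value in metrics.items():
            past = self.history.get(structure, {}).get(name, [])
            if len(past) < 2:
                continue                      # too little data to form a rule
            mu = statistics.mean(past)
            sd = statistics.stdev(past)
            if sd > 0 and abs(value - mu) / sd > self.z_max:
                flags.append(name)
        return flags

qa = ContourQA()
for v in [48.0, 50.0, 52.0, 49.0, 51.0]:      # hypothetical organ volumes (cm^3)
    qa.learn("parotid_L", {"volume": v})
ok = qa.check("parotid_L", {"volume": 50.5})   # consistent with history
bad = qa.check("parotid_L", {"volume": 95.0})  # far outside history -> flagged
```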

  11. A Fast Block-Matching Algorithm Using Smooth Motion Vector Field Adaptive Search Technique

    Institute of Scientific and Technical Information of China (English)

    LI Bo(李波); LI Wei(李炜); TU YaMing(涂亚明)

    2003-01-01

    In many video standards based on inter-frame compression, such as H.26x and MPEG, the block-matching algorithm has been widely adopted as the method for motion estimation because of its simplicity and effectiveness. Nevertheless, since motion estimation is computationally complex, fast algorithms for motion estimation have always been an important and attractive topic in video compression. From the viewpoint of making the motion vector field smoother, this paper proposes a new algorithm, SMVFAST. On the basis of motion correlation, it predicts the starting point from neighboring motion vectors according to their SADs. Adaptive search modes are used in its search process by simply classifying motion activity. After discovering the ubiquitous ratio between the SADs of collocated blocks in consecutive frames, the paper proposes an effective half-stop criterion that can quickly stop the search process with good enough results. Experiments show that SMVFAST obtains almost the same results as the full search at very low computational cost, and outperforms MVFAST and PMVFAST, which are adopted by MPEG-4, in both speed and quality.
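The basic ingredients, SAD matching, a predicted starting point, and an early-stop test, can be sketched as below; this is a generic illustration, not the actual SMVFAST mode logic or its SAD-ratio half-stop threshold:

```python
import random

def sad(cur, ref, bx, by, dx, dy, B=4):
    """Sum of absolute differences between the BxB block of `cur` at
    (bx, by) and the reference block displaced by (dx, dy)."""
    total = 0
    for y in range(B):
        for x in range(B):
            total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
    return total

def predicted_search(cur, ref, bx, by, pred, radius=2, stop_sad=0, B=4):
    """Search a small window around a predicted motion vector and stop
    early once the SAD drops to `stop_sad` (a crude stand-in for the
    paper's half-stop criterion)."""
    best_mv, best = None, float("inf")
    px, py = pred
    for dy in range(py - radius, py + radius + 1):
        for dx in range(px - radius, px + radius + 1):
            s = sad(cur, ref, bx, by, dx, dy, B)
            if s < best:
                best_mv, best = (dx, dy), s
                if best <= stop_sad:
                    return best_mv, best   # good enough: stop searching
    return best_mv, best

random.seed(3)
ref = [[random.randrange(256) for _ in range(20)] for _ in range(20)]
# Current frame: content of `ref` shifted so the true motion vector is (2, 1).
cur = [[ref[y + 1][x + 2] for x in range(16)] for y in range(16)]
mv, cost = predicted_search(cur, ref, 4, 4, pred=(0, 0), radius=3)
```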

  12. Application of subband adaptive filtering techniques to ultrasonic detection in multilayers

    Institute of Scientific and Technical Information of China (English)

    MAO Jie; LI Mingxuan

    2003-01-01

    Ultrasonic testing for defects of complete disbond in a multi-layered structure with low acoustic impedance beneath a high-acoustic-impedance overburden is one of the difficult problems in the ultrasonic nondestructive testing field. A model of a multi-layered steel-rubber composite plate is depicted. Because the acoustic impedance of the steel differs greatly from that of the couplant water and the rubber, the energy of the signal reflected from the debonded rubber layers is very weak. Moreover, the flaw echoes are masked by the strong echoes reverberating in the steel plate, so it is nearly impossible to identify the debonding echoes directly. The subband adaptive filtering method is discussed in the paper, where the subband decomposition is performed by mutual wavelet packet decomposition on the criterion of maximizing the cross-correlation between the signals. Simulations on both synthetic and real signals are presented. The echoes from the delaminated flaw at a depth of 5 mm in the rubber were extracted successfully from the calculated signal, as were the echoes from the flaw at a depth of 3 mm from the real signal.
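The adaptive-filtering step at the heart of such methods can be illustrated with a plain LMS filter identifying an unknown echo path; this fullband sketch omits the wavelet-packet subband decomposition that the paper performs first, and the echo-path coefficients are invented:

```python
import random

def lms(x, d, taps=4, mu=0.05):
    """Least-mean-squares adaptive FIR filter: the weights `w` adapt so
    that the filtered input tracks the desired signal `d`; returns the
    final weights and the error sequence (the residual that would expose
    echoes not explained by the adapted model)."""
    w = [0.0] * taps
    errors = []
    for n in range(taps, len(x)):
        window = x[n - taps + 1:n + 1][::-1]          # x[n], x[n-1], ...
        y = sum(wi * xi for wi, xi in zip(w, window)) # filter output
        e = d[n] - y                                  # estimation error
        errors.append(e)
        for i in range(taps):                         # LMS weight update
            w[i] += 2 * mu * e * window[i]
    return w, errors

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(3000)]
h = [0.6, -0.3, 0.1]   # hypothetical unknown echo path to identify
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w, errors = lms(x, d)
```

After convergence the adapted weights approach the true path and the residual error collapses, which is the mechanism used (per subband) to separate weak flaw echoes from the dominant reverberation.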

  13. A Novel Grid Impedance Estimation Technique based on Adaptive Virtual Resistance Control Loop Applied to Distributed Generation Inverters

    DEFF Research Database (Denmark)

    Ghzaiel, Walid; Jebali-Ben Ghorbal, Manel; Slama-Belkhodja, Ilhem;

    2013-01-01

    The penetration of distributed power generation systems (DPGSs) based on renewable sources (PV, WT) is strongly dependent on the quality of the power injected into the utility grid. However, grid impedance variation, mainly caused by grid faults somewhere in the electric network, can degrade...... the power quality and even damage some sensitive loads connected at the point of common coupling (PCC). This paper presents a detection-estimation method for grid impedance variation. This estimation technique aims to improve the dynamics of the distributed generation (DG) interfacing inverter control...... and to take the decision to either keep the DG connected or disconnect it from the utility grid. The proposed method is based on a fast and easy grid fault detection method. A virtual damping resistance is used to drive the system to resonance in order to extract the grid impedance parameters, both...

  14. Adapting content-based image retrieval techniques for the semantic annotation of medical images.

    Science.gov (United States)

    Kumar, Ashnil; Dyer, Shane; Kim, Jinman; Li, Changyang; Leong, Philip H W; Fulham, Michael; Feng, Dagan

    2016-04-01

    The automatic annotation of medical images is a prerequisite for building comprehensive semantic archives that can be used to enhance evidence-based diagnosis, physician education, and biomedical research. Annotation also has important applications in the automatic generation of structured radiology reports. Much of the prior research work has focused on annotating images with properties such as the modality of the image, or the biological system or body region being imaged. However, many challenges remain for the annotation of high-level semantic content in medical images (e.g., presence of calcification, vessel obstruction, etc.) due to the difficulty in discovering relationships and associations between low-level image features and high-level semantic concepts. This difficulty is further compounded by the lack of labelled training data. In this paper, we present a method for the automatic semantic annotation of medical images that leverages techniques from content-based image retrieval (CBIR). CBIR is a well-established image search technology that uses quantifiable low-level image features to represent the high-level semantic content depicted in those images. Our method extends CBIR techniques to identify or retrieve a collection of labelled images that have similar low-level features and then uses this collection to determine the best high-level semantic annotations. We demonstrate our annotation method using retrieval via weighted nearest-neighbour retrieval and multi-class classification to show that our approach is viable regardless of the underlying retrieval strategy. We experimentally compared our method with several well-established baseline techniques (classification and regression) and showed that our method achieved the highest accuracy in the annotation of liver computed tomography (CT) images.
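The weighted nearest-neighbour annotation strategy can be sketched as follows; the feature vectors, label names, and the inverse-distance weighting are illustrative assumptions, not the paper's actual features or weights:

```python
import math
from collections import defaultdict

def annotate(query_vec, labelled, k=3):
    """Weighted nearest-neighbour annotation (toy sketch of the CBIR idea):
    retrieve the k labelled images whose low-level feature vectors are
    closest to the query, then let each vote for its high-level annotation
    with weight 1/distance."""
    nearest = sorted(
        (math.dist(query_vec, vec), label) for vec, label in labelled
    )[:k]
    votes = defaultdict(float)
    for dist, label in nearest:
        votes[label] += 1.0 / (dist + 1e-9)   # closer neighbours vote harder
    return max(votes, key=votes.get)

# Toy "archive": invented low-level feature vectors with semantic labels.
archive = [
    ((0.10, 0.20), "calcification"),
    ((0.15, 0.25), "calcification"),
    ((0.90, 0.80), "vessel-obstruction"),
    ((0.85, 0.90), "vessel-obstruction"),
]
label = annotate((0.12, 0.22), archive)
```

Swapping the retrieval step (e.g. for a multi-class classifier) while keeping the voting intact mirrors the paper's point that the approach is viable regardless of the underlying retrieval strategy.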

  16. Analysis of Adaptive Fuzzy Technique for Multiple Crack Diagnosis of Faulty Beam Using Vibration Signatures

    Directory of Open Access Journals (Sweden)

    Amiya Kumar Dash

    2013-01-01

    Full Text Available This paper discusses multi-crack detection of structures using a fuzzy Gaussian technique. The vibration parameters derived from numerical methods for the cracked cantilever beam are used to set several fuzzy rules for designing the fuzzy controller used to predict the crack locations and depths. Relative crack locations and relative crack depths are the output parameters from the fuzzy inference system. The method proposed in the current analysis is used to evaluate the dynamic response of the cracked cantilever beam. The results of the proposed method are in good agreement with the results obtained from the developed experimental setup.
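A zero-order Sugeno-style inference with Gaussian memberships conveys the flavour of such a fuzzy controller; the rule pairs below (relative frequency drop mapped to relative crack depth) are invented calibration points, not the paper's rules:

```python
import math

def gauss(x, mean, sigma):
    """Gaussian membership function."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def fuzzy_crack_depth(freq_drop, rules, sigma=0.05):
    """Zero-order Sugeno inference (toy sketch): each rule maps a typical
    relative frequency drop to a relative crack depth, and the output is
    the membership-weighted average of the rule consequents."""
    num = den = 0.0
    for typical_drop, depth in rules:
        w = gauss(freq_drop, typical_drop, sigma)  # rule firing strength
        num += w * depth
        den += w
    return num / den

# Hypothetical calibration pairs (relative frequency drop, relative depth).
rules = [(0.02, 0.1), (0.08, 0.3), (0.18, 0.5)]
depth_mid = fuzzy_crack_depth(0.08, rules)
```

A real controller would use several inputs (e.g. the first three natural frequencies) and produce both crack location and depth, but the interpolation mechanism is the same.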

  17. Comparative Analysis of Linear and Nonlinear Pattern Synthesis of Hemispherical Antenna Array Using Adaptive Evolutionary Techniques

    Directory of Open Access Journals (Sweden)

    K. R. Subhashini

    2014-01-01

    Linear synthesis is the variation of the element excitation amplitude, and nonlinear synthesis is the variation of the element angular position. Both ADE and AFA are high-performance stochastic evolutionary algorithms used to solve N-dimensional problems. These methods are used to determine a set of parameters of the antenna elements that provide the desired radiation pattern. The effectiveness of the algorithms for the design of a conformal antenna array is shown by means of numerical results. Comparison with other methods is made whenever possible. The results reveal that nonlinear synthesis, aided by the discussed techniques, provides considerable enhancements compared to linear synthesis.

  18. Conception et adaptation de services techniques pour l'informatique ubiquitaire et nomade

    OpenAIRE

    Lecomte, Sylvain

    2005-01-01

    Since the end of the 1990s, the development of mobile terminals and wireless networks has accelerated considerably. This has led to the appearance of new, widely distributed applications offering new services both to users (electronic commerce applications, interactive television, proximity applications) and to companies (development of B2B commerce). With the appearance of these new applications, the technical services, which take...

  19. Nuclear power plant status diagnostics using simulated condensation: An auto-adaptive computer learning technique

    International Nuclear Information System (INIS)

    The application of artificial neural network concepts to engineering analysis involves training networks, and therefore computers, to perform pattern classification or function-mapping tasks. This training process requires the near-optimization of network inter-neural connections. A new method for the stochastic optimization of these interconnections is presented in this dissertation. The new approach, called simulated condensation, is applied to networks of generalized, fully interconnected, continuous perceptrons. Simulated condensation optimizes the nodal bias, gain, and output activation constants as well as the usual interconnection weights. In this work, the simulated condensation network paradigm is applied to nuclear power plant operating status recognition. A set of standard problems, such as the exclusive-or problem, is also analyzed as benchmarks for the new methodology. The objective of the nuclear power plant accident condition diagnosis effort is to train a network to identify both safe and potentially unsafe power plant conditions based on real-time plant data. The data are obtained from computer-generated accident scenarios. A simulated condensation network is trained to recognize seven nuclear power plant accident conditions as well as the normal full-power operating condition. These accidents include hot- and cold-leg loss of coolant, control rod ejection, and steam generator tube leak accidents. Twenty-seven plant process variables are used as input to the neural network. Results show the feasibility of using simulated condensation as a method for diagnosing nuclear power plant conditions. The method is general and can easily be applied to other types of plants and plant processes
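The flavour of stochastic weight optimization can be conveyed by an accept-only-improvements random search on the exclusive-or benchmark mentioned above; this is a generic hill-climbing stand-in with a shrinking perturbation scale, not the actual simulated condensation algorithm, and the network size and schedule are invented:

```python
import math
import random

def forward(w, x):
    """A 2-2-1 sigmoid network; w packs 9 parameters (weights and biases)."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, z))))
    h1 = sig(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sig(w[3] * x[0] + w[4] * x[1] + w[5])
    return sig(w[6] * h1 + w[7] * h2 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def loss(w):
    """Sum-squared error over the exclusive-or training set."""
    return sum((forward(w, x) - t) ** 2 for x, t in XOR)

random.seed(7)
w = [random.uniform(-1, 1) for _ in range(9)]
best = initial = loss(w)
step = 1.0
for _ in range(20000):
    cand = [wi + random.gauss(0, step) for wi in w]
    c = loss(cand)
    if c < best:                 # keep only improving moves
        w, best = cand, c
    step *= 0.9998               # gradually "cool" the perturbation size
```

Because only improving moves are accepted, the loss is monotonically non-increasing; the dissertation's method additionally tunes gains and activation constants and uses its own cooling rule.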

  20. Changes in elongation of falx cerebri during craniosacral therapy techniques applied on the skull of an embalmed cadaver.

    Science.gov (United States)

    Kostopoulos, D C; Keramidas, G

    1992-01-01

    Craniosacral therapy holds that light forces applied to the skull may be transmitted to the dural membrane, having a therapeutic effect on the cranial system. This study examines the changes in elongation of the falx cerebri during the application of some of the craniosacral therapy techniques to the skull of an embalmed cadaver. The study demonstrates that the relative elongation of the falx cerebri changes as follows: for the frontal lift, 1.44 mm; for the parietal lift, 1.08 mm; for the sphenobasilar compression, -0.33 mm; for the sphenobasilar decompression, 0.28 mm; and for the ear pull, inconclusive results. The present study offers validation for the scientific basis of craniosacral therapy and the contention for cranial suture mobility. PMID:1302656

  1. Analysis of Arbitrary Reflector Antennas Applying the Geometrical Theory of Diffraction Together with the Master Points Technique

    Directory of Open Access Journals (Sweden)

    María Jesús Algar

    2013-01-01

    Full Text Available An efficient approach for the analysis of arbitrarily fed, surface-conformed reflector antennas is presented. The near field at a large number of sampling points in the aperture of the reflector is obtained by applying the Geometrical Theory of Diffraction (GTD). A new technique, named Master Points, has been developed to reduce the complexity of the ray-tracing computations. The combination of GTD and Master Points reduces the time requirements of this kind of analysis. To validate the new approach, several reflectors and the effects on the radiation pattern caused by shifting the feed and introducing different obstacles have been considered, concerning both simple and complex geometries. The results of these analyses have been compared with Method of Moments (MoM) results.

  2. New analytical expressions of the Rossiter-McLaughlin effect adapted to different observation techniques

    CERN Document Server

    Boué, Gwenaël; Boisse, Isabelle; Oshagh, Mahmoudreza; Santos, Nuno C

    2012-01-01

    The Rossiter-McLaughlin (hereafter RM) effect is a key tool for measuring the projected spin-orbit angle between stellar spin axes and orbits of transiting planets. However, the measured radial velocity (RV) anomalies produced by this effect are not intrinsic and depend on both instrumental resolution and data reduction routines. Using inappropriate formulas to model the RM effect introduces biases, at least in the projected velocity Vsin(i) compared to the spectroscopic value. Currently, only the iodine cell technique has been modeled, which corresponds to observations done by, e.g., the HIRES spectrograph of the Keck telescope. In this paper, we provide a simple expression of the RM effect specially designed to model observations done by the Gaussian fit of a cross-correlation function (CCF), as in the routines performed by the HARPS team. We also derived a new analytical formulation of the RV anomaly associated with the iodine cell technique. For both formulas, we modeled the subplanet mean velocity v_p and d...

  3. Vapor pressure data for fatty acids obtained using an adaptation of the DSC technique

    Energy Technology Data Exchange (ETDEWEB)

    Matricarde Falleiro, Rafael M. [LPT, Departamento de Processos Quimicos (DPQ), Faculdade de Engenharia Quimica, Universidade de Campinas (UNICAMP), 13083-852 Campinas - SP (Brazil); Akisawa Silva, Luciana Y. [Departamento de Ciencias Exatas e da Terra, Universidade Federal de Sao Paulo (UNIFESP), 09972-270 Diadema - SP (Brazil); Meirelles, Antonio J.A. [EXTRAE, Departamento de Engenharia de Alimentos (DEA), Faculdade de Engenharia de Alimentos, Universidade de Campinas (UNICAMP), 13083-862 Campinas - SP (Brazil); Kraehenbuehl, Maria A., E-mail: mak@feq.unicamp.br [LPT, Departamento de Processos Quimicos (DPQ), Faculdade de Engenharia Quimica, Universidade de Campinas (UNICAMP), 13083-852 Campinas - SP (Brazil)

    2012-11-10

    Highlights: Vapor pressure data of fatty acids were measured by Differential Scanning Calorimetry. The DSC technique is especially advantageous for expensive chemicals. A high heating rate was used for measuring the vapor pressure data. Antoine constants were obtained for the selected fatty acids. - Abstract: The vapor pressure data for lauric (C12:0), myristic (C14:0), palmitic (C16:0), stearic (C18:0) and oleic (C18:1) acids were obtained using Differential Scanning Calorimetry (DSC). The adjustments made in the experimental procedure included the use of a small sphere (tungsten carbide) placed over the pinhole of the crucible (diameter of 0.8 mm), making it possible to use a faster heating rate than that of the standard method and reducing the experimental time. The measurements were made in the pressure range from 1333 to 9333 Pa, using small sample quantities of fatty acids (3-5 mg) at a heating rate of 25 K min^-1. The results showed the effectiveness of the technique under study, as evidenced by the low temperature deviations in relation to the data reported in the literature. The Antoine constants were fitted to the experimental data, whose values are shown in Table 5.
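Fitting Antoine-type constants to vapor pressure data reduces to a least-squares problem; the sketch below fits the two-constant form log10(P) = A - B/T (i.e., Antoine with C = 0) to synthetic data, since the paper's measured values are not reproduced here and its reported constants are assumed to use the full three-constant form:

```python
import math

def fit_log_vapor_pressure(T, P):
    """Least-squares fit of log10(P) = A - B/T, a common first pass for
    sparse (T, P) vapor-pressure data such as DSC measurements.
    Returns the constants (A, B)."""
    x = [1.0 / t for t in T]                 # regress log10(P) on 1/T
    y = [math.log10(p) for p in P]
    n = len(T)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept, -slope                 # A = intercept, B = -slope

# Synthetic check: data generated from known constants (invented values,
# not the paper's), so the fit should recover them exactly.
A_true, B_true = 9.3, 3700.0
T = [430.0, 450.0, 470.0, 490.0]             # temperatures, K
P = [10 ** (A_true - B_true / t) for t in T]  # pressures, Pa
A, B = fit_log_vapor_pressure(T, P)
```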

  4. Methods and procedures to apply probabilistic safety assessment (PSA) techniques to the cobalt-therapy process. Cuban experience

    International Nuclear Information System (INIS)

    This paper presents the results of the Probabilistic Safety Analysis (PSA) of the cobalt-therapy process, which was performed as part of the International Atomic Energy Agency's Coordinated Research Project (CRP) to Investigate Appropriate Methods and Procedures to Apply Probabilistic Safety Assessment (PSA) Techniques to Large Radiation Sources. The primary methodological tools used in the analysis were Failure Modes and Effects Analysis (FMEA), Event Trees and Fault Trees. These tools were used to evaluate occupational, public and medical exposures during cobalt-therapy treatment. The emphasis of the study was on the radiological protection of patients. During the course of the PSA, several findings were analysed concerning the cobalt treatment process. Regarding the probabilities of undesired events, the lowest exposure probabilities correspond to public exposures during the treatment process (Z21), around 10^-10 per year, with worker exposures (Z11) around 10^-4 per year. Regarding the patient, the Z33 probabilities (undesired dose to normal tissue) and Z34 (unirradiated portion of the target volume) prevail. Patient accidental exposures are also classified in terms of the extent to which the error is likely to affect individual treatments, individual patients, or all the patients treated on a specific unit. Sensitivity analyses were performed to determine the influence of certain tasks or critical stages on the results. As a conclusion, the study establishes that PSA techniques may effectively and reasonably determine the risk associated with the cobalt-therapy treatment process, though there are some weaknesses in their methodological application for this kind of study, requiring further research. These weaknesses are due to the fact that traditional PSA has mainly been applied to complex hardware systems designed to operate with a high automation level, whilst cobalt-therapy treatment is a relatively simple hardware system with a

  5. Adaptive Scheduling Applied to Non-Deterministic Networks of Heterogeneous Tasks for Peak Throughput in Concurrent Gaudi

    CERN Document Server

    AUTHOR|(CDS)2070032; Clemencic, Marco

    As much as e-Science revolutionizes the scientific method in empirical research and scientific theory, it also poses the ever-growing challenge of an accelerating data deluge. High energy physics (HEP) is a prominent representative of data-intensive science and requires scalable high-throughput software to cope with the associated computational endeavors. One striking example is Gaudi -- an experiment-independent software framework used in several frontier HEP experiments. Among them stand ATLAS and LHCb -- two of the four mainstream experiments at the Large Hadron Collider (LHC) at CERN, the European Laboratory for Particle Physics. The framework is currently undergoing an architectural revolution aiming at massively concurrent and adaptive data processing. In this work I explore new dimensions of performance improvement for the next-generation Gaudi. I then propose a complex of generic task scheduling solutions for adaptive and non-intrusive throu...

  6. A Method to Select an Instrument for Measurement of HR-QOL for Cross-Cultural Adaptation Applied to Dermatology

    OpenAIRE

    Adolfo Ga de Tiedra; Joan Mercadal; Xavier Badia; Jose Ma Mascaro; Rafael Lozano

    1998-01-01

    Objective: The objective of this study was to develop a process to obtain an instrument to measure dermatology specific health-related quality of life (HR-QOL), and to adapt it into another culture, namely the Spanish-speaking community. Design and Setting: By consensus, a multi-disciplinary team determined the qualities of an `ideal' questionnaire as follows: need (absence of any such instrument), utility, multi-dimensionality, psychometric development, simplicity, high degree of standardisa...

  7. Imaging techniques applied to quality control of civil manufactured goods obtained starting from ready-to-use mixtures

    Science.gov (United States)

    Bonifazi, Giuseppe; Castaldi, Federica

    2003-05-01

    Concrete materials obtained from the utilization of pre-mixed, ready-to-use products (central-mix concrete) are more and more widely used, and they represent a large portion of the civil construction market. Such products are used at different scales, ranging from small-scale works, such as those commonly realized inside a house or an apartment, to large civil or industrial works. In both cases, control of the mixtures and of the final work is usually realized through the analysis of properly collected samples. Through appropriate sampling, objective parameters can be derived, such as the size class distribution and composition of the constituent particulate matter, or the mechanical characteristics of the sample itself. An important parameter not considered by the previously mentioned approach is "segregation", that is, the possibility that some particulate materials migrate preferentially into some zones of the mixture and/or of the final product. Such behavior dramatically influences the quality of the product and of the final manufactured good. At present this behavior is studied only with a human-based visual approach; no repeatable analytical procedures or quantitative data processing exist. In this paper a procedure fully based on image processing techniques is described and applied. Results are presented and analyzed with reference to industrial products. A comparison is also made between the newly proposed digital-imaging-based techniques and the analyses usually carried out at the industrial laboratory scale for standard quality control.

  8. Conceptual design study and evaluation of an advanced treatment process applying a submerged combustion technique for spent solvents

    International Nuclear Information System (INIS)

    An advanced treatment process based on a submerged combustion technique was proposed for spent solvents and the distillation residues containing transuranium (TRU) nuclides. A conceptual design study and a preliminary cost estimation of a treatment facility applying the process were conducted. Based on the results of the study, the process was evaluated for its technical features, such as safety, volume reduction of TRU waste, and economics. The key requirements for practical use were also summarized. It was shown that the process has the following features: the simplified treatment and solidification steps will not generate secondary aqueous wastes; the volume of TRU solid waste will be reduced to less than one tenth of that of a reference technique (pyrolysis process); and the facility construction cost is less than 1% of the total construction cost of a future large-scale reprocessing plant. As for the low-level wastes of calcium phosphate, it was shown that further removal of β·γ nuclides together with TRU nuclides from the wastes would be required for safety in interim storage and transportation and for the shielding load. (author)

  9. Applied mathematics

    International Nuclear Information System (INIS)

    The 1988 progress report of the Applied Mathematics center (Polytechnic School, France), is presented. The research fields of the Center are the scientific calculus, the probabilities and statistics and the video image synthesis. The research topics developed are: the analysis of numerical methods, the mathematical analysis of the physics and mechanics fundamental models, the numerical solution of complex models related to the industrial problems, the stochastic calculus and the brownian movement, the stochastic partial differential equations, the identification of the adaptive filtering parameters, the discrete element systems, statistics, the stochastic control and the development, the image synthesis techniques for education and research programs. The published papers, the congress communications and the thesis are listed

  10. Applying value stream mapping techniques to eliminate non-value-added waste for the procurement of endovascular stents

    International Nuclear Information System (INIS)

    Objectives: To eliminate non-value-adding (NVA) waste for the procurement of endovascular stents in interventional radiology services by applying value stream mapping (VSM). Materials and methods: The Lean manufacturing technique was used to analyze the process of material and information flow currently required to direct endovascular stents from external suppliers to patients. Based on a decision point analysis for the procurement of stents in the hospital, a present state VSM was drawn. After assessment of the current status VSM and progressive elimination of unnecessary NVA waste, a future state VSM was drawn. Results: The current state VSM demonstrated that out of 13 processes for the procurement of stents only 2 processes were value-adding. Out of the NVA processes 5 processes were unnecessary NVA activities, which could be eliminated. The decision point analysis demonstrated that the procurement of stents was mainly a forecast driven push system. The future state VSM applies a pull inventory control system to trigger the movement of a unit after withdrawal by using a consignment stock. Conclusion: VSM is a visualization tool for the supply chain and value stream, based on the Toyota Production System and greatly assists in successfully implementing a Lean system.
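The lead-time and value-added bookkeeping behind a VSM can be sketched as below; the step names, cycle times, and value-adding flags are invented for illustration, not the study's actual stent-procurement data:

```python
# Each step of the current-state map: (name, cycle time in minutes, value-adding?)
current_state = [
    ("request stent",       5,    False),
    ("approval loop",       30,   False),
    ("order entry",         10,   False),
    ("supplier lead time",  2880, False),
    ("goods receipt",       15,   False),
    ("stock to cath lab",   20,   False),
    ("implant stent",       45,   True),
]

def vsm_summary(steps):
    """Total lead time vs. value-added time: the basic numbers read off a
    value stream map, whose ratio motivates eliminating NVA steps."""
    lead = sum(t for _, t, _ in steps)
    value_added = sum(t for _, t, is_va in steps if is_va)
    return lead, value_added, value_added / lead

lead, value_added, ratio = vsm_summary(current_state)
```

Redrawing the future-state map amounts to deleting or shortening the non-value-adding entries (e.g. replacing the forecast-driven supplier lead time with a consignment-stock pull) and recomputing the same summary.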

  11. Applying clinically proven human techniques for contraception and fertility to endangered species and zoo animals: a review.

    Science.gov (United States)

    Silber, Sherman J; Barbey, Natalie; Lenahan, Kathy; Silber, David Z

    2013-12-01

    Reversible contraception that does not alter natural behavior is a critical need for managing zoo populations. In addition to reversible contraception, other fertility techniques perfected in humans may be useful, such as in vitro fertilization (IVF) or oocyte and embryo banking for endangered species like amphibians and Mexican wolves (Canis lupus baileyi). Furthermore, the genetics of human fertility can give a better understanding of fertility in more exotic species. Collaborations were established to apply human fertility techniques to the captive population. Reversible vasectomy might be one solution for reversible contraception that does not alter behavior. Reversible approaches to vasectomy, avoiding secondary epididymal disruption, were attempted in South American bush dogs (Speothos venaticus), chimpanzees (Pan troglodytes), gorillas (Gorilla gorilla), Przewalski's horse (Equus przewalski poliakov), and Sika deer (Cervus nippon) in a variety of zoos around the world. These techniques were first perfected in > 4,000 humans before attempting them in zoo animals. In vitro fertilization with gestational surrogacy was used to attempt to break the vicious cycle of hand rearing of purebred orangutans, and egg and ovary vitrification in humans have led to successful gamete banking for Mexican wolves and disappearing amphibians. The study of the human Y chromosome has even explained a mechanism of extinction related to global climate change. The best results with vasectomy reversal (normal sperm counts, pregnancy, and live offspring) were obtained when the original vasectomy was performed "open-ended," so as to avoid pressure-induced epididymal disruption. The attempt at gestational surrogacy for orangutans failed because of severe male infertility and the lack of success with human ovarian hyperstimulation protocols. Vitrification of oocytes is already being employed for the Amphibian Ark Project and for Mexican wolves. Vasectomy can be a reversible contraception

  12. Performance Evaluation of Wimax Physical Layer under Adaptive Modulation Techniques and Communication Channels

    CERN Document Server

    Islam, Md Ashraful; Hasan, Md Zahid

    2009-01-01

    WiMAX (Worldwide Interoperability for Microwave Access) is a promising technology that can deliver high-speed voice, video and data services to the customer end. The aim of this paper is the performance evaluation of a WiMAX system under different combinations of digital modulation (BPSK, QPSK, 4-QAM and 16-QAM) and different communication channels: AWGN and fading channels (Rayleigh and Rician). The WiMAX system incorporates a Reed-Solomon (RS) encoder concatenated with a convolutional encoder with half- and two-third-rate codes for FEC channel coding. The simulated Bit Error Rate (BER) results show that implementing the interleaved RS code (255, 239, 8) with the two-third-rate convolutional code under BPSK modulation is highly effective at combating noise in the WiMAX communication system. To complete this performance analysis, a segment of an audio signal is used; the transmitted audio message is retrieved effectively under noisy conditions.
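
    The BER comparison described above can be sketched for the simplest of these configurations, uncoded BPSK over AWGN. The following is a minimal Monte Carlo simulation (not the authors' RS/convolutional chain) that checks the simulated error rate against the closed-form theoretical value:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

def bpsk_ber(ebn0_db, n_bits=200_000):
    """Monte Carlo BER for uncoded BPSK over an AWGN channel."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                         # map {0,1} -> {-1,+1}
    ebn0 = 10 ** (ebn0_db / 10)
    noise_std = sqrt(1 / (2 * ebn0))               # unit symbol energy
    received = symbols + noise_std * rng.standard_normal(n_bits)
    return float(np.mean((received > 0).astype(int) != bits))

def bpsk_ber_theory(ebn0_db):
    """Closed form: BER = Q(sqrt(2*Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    return 0.5 * erfc(sqrt(10 ** (ebn0_db / 10)))

for db in (0, 4, 8):
    print(db, bpsk_ber(db), bpsk_ber_theory(db))
```

    Higher-order constellations (QPSK, 16-QAM) and the fading channels would follow the same pattern, with different symbol mappings and a multiplicative channel gain in front of the noise term.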

  13. Path integral molecular dynamics within the grand canonical-like adaptive resolution technique: Simulation of liquid water

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Animesh, E-mail: animesh@zedat.fu-berlin.de; Delle Site, Luigi, E-mail: dellesite@fu-berlin.de [Institute for Mathematics, Freie Universität Berlin, Berlin (Germany)

    2015-09-07

    Quantum effects due to the spatial delocalization of light atoms are treated in molecular simulation via the path integral technique. Among several methods, Path Integral (PI) Molecular Dynamics (MD) is nowadays a powerful tool to investigate properties induced by spatial delocalization of atoms; however, computationally this technique is very demanding. The above mentioned limitation implies the restriction of PIMD applications to relatively small systems and short time scales. One of the possible solutions to overcome size and time limitation is to introduce PIMD algorithms into the Adaptive Resolution Simulation Scheme (AdResS). AdResS requires a relatively small region treated at path integral level and embeds it into a large molecular reservoir consisting of generic spherical coarse grained molecules. It was previously shown that the realization of the idea above, at a simple level, produced reasonable results for toy systems or simple/test systems like liquid parahydrogen. Encouraged by previous results, in this paper, we show the simulation of liquid water at room conditions where AdResS, in its latest and more accurate Grand-Canonical-like version (GC-AdResS), is merged with two of the most relevant PIMD techniques available in the literature. The comparison of our results with those reported in the literature and/or with those obtained from full PIMD simulations shows a highly satisfactory agreement.

  14. Path integral molecular dynamics within the grand canonical-like adaptive resolution technique: Simulation of liquid water

    International Nuclear Information System (INIS)

    Quantum effects due to the spatial delocalization of light atoms are treated in molecular simulation via the path integral technique. Among several methods, Path Integral (PI) Molecular Dynamics (MD) is nowadays a powerful tool to investigate properties induced by spatial delocalization of atoms; however, computationally this technique is very demanding. The above mentioned limitation implies the restriction of PIMD applications to relatively small systems and short time scales. One of the possible solutions to overcome size and time limitation is to introduce PIMD algorithms into the Adaptive Resolution Simulation Scheme (AdResS). AdResS requires a relatively small region treated at path integral level and embeds it into a large molecular reservoir consisting of generic spherical coarse grained molecules. It was previously shown that the realization of the idea above, at a simple level, produced reasonable results for toy systems or simple/test systems like liquid parahydrogen. Encouraged by previous results, in this paper, we show the simulation of liquid water at room conditions where AdResS, in its latest and more accurate Grand-Canonical-like version (GC-AdResS), is merged with two of the most relevant PIMD techniques available in the literature. The comparison of our results with those reported in the literature and/or with those obtained from full PIMD simulations shows a highly satisfactory agreement

  15. Functional reasoning, explanation and analysis: Part 1: a survey on theories, techniques and applied systems. Part 2: qualitative function formation technique

    International Nuclear Information System (INIS)

    Functional Reasoning (FR) enables people to derive the purpose of objects and explain their functions. JAERI's Human Acts Simulation Program (HASP), started in 1987, has the goal of developing the underlying technologies for intelligent robots by imitating intelligent human behavior. FR is considered a useful reasoning method in HASP and is applied to understanding the function of tools and objects in the Toolbox Project. In this report, the results of diverse FR research across a variety of disciplines are first reviewed, and the common core and basic problems are identified. Then the qualitative function formation (QFF) technique is introduced. Its novel points are: extending common qualitative models to include interactions and timing of events by defining temporal and dependency constraints, and binding these with conventional qualitative simulation. Function concepts are defined as interpretations of either a persistence or an order in the sequence of states, using the trace of the qualitative state vector derived by qualitative simulation on the extended qualitative model. This offers a solution to some of the FR problems and leads to a method for generalizing and comparing the functions of different objects. (author) 85 refs

  16. An Adaptive Watermarking Technique for the copyright of digital images and Digital Image Protection

    Directory of Open Access Journals (Sweden)

    Yusuf Perwej

    2012-05-01

    Full Text Available The Internet as a whole does not use secure links, so information in transit may be vulnerable to interception. Reducing the chance of information being detected during transmission is a pressing real-world issue. Digital watermarking allows quick and inexpensive distribution of digital information over the Internet and provides new ways of ensuring adequate protection for copyright holders in the intellectual property dispersion process. Watermarking of digital images allows the insertion of additional data into an image without altering its value; the message is hidden in unused visual space in the image and stays below the human visible threshold. Both watermarking and steganography seek to embed information inside a cover message with little or no degradation of the cover object. This paper reviews the relevant concepts and terminology, the history of watermarks, the properties of a watermarking system, and the types and applications of watermarking. We propose edge detection using Gabor filters, and a least significant bit (LSB) substitution method to embed the message in the watermark image file. The benefit of LSB substitution is its simplicity: the bits of the message are embedded directly into the LSB plane of the cover image, and many techniques build on this method. LSB substitution does not produce a humanly perceptible difference, because the amplitude of the change is small; to the human eye the resulting stego image looks identical to the cover image, which gives LSB high perceptual transparency. The spatial-domain LSB substitution technique can use a pseudo-random number generator to determine the pixels to be used for embedding based on a given key. We also use DCT-transform watermarking algorithms, chosen for robustness. The watermarking robustness has been calculated by the Peak Signal to
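
    As a concrete illustration of the LSB substitution the abstract describes, here is a minimal sketch. The helper names are hypothetical, and it uses plain sequential embedding rather than the paper's key-driven pseudo-random pixel selection:

```python
import numpy as np

def embed_lsb(cover, bits):
    """Write message bits into the least significant bit plane of a uint8 image."""
    flat = cover.copy().ravel()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read the first n_bits back out of the LSB plane."""
    return stego.ravel()[:n_bits] & 1

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
message = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(cover, message)

assert np.array_equal(extract_lsb(stego, len(message)), message)
# each pixel changes by at most 1, hence the high perceptual transparency
assert np.max(np.abs(stego.astype(int) - cover.astype(int))) <= 1
```

    Because only the lowest bit plane is touched, the PSNR between cover and stego image stays very high, which is exactly the transparency property the abstract appeals to.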

  17. An Adaptive Watermarking Technique for the copyright of digital images and Digital Image Protection

    Directory of Open Access Journals (Sweden)

    Yusuf Perwej

    2012-04-01

    Full Text Available The Internet as a whole does not use secure links, so information in transit may be vulnerable to interception. Reducing the chance of information being detected during transmission is a pressing real-world issue. Digital watermarking allows quick and inexpensive distribution of digital information over the Internet and provides new ways of ensuring adequate protection for copyright holders in the intellectual property dispersion process. Watermarking of digital images allows the insertion of additional data into an image without altering its value; the message is hidden in unused visual space in the image and stays below the human visible threshold. Both watermarking and steganography seek to embed information inside a cover message with little or no degradation of the cover object. This paper reviews the relevant concepts and terminology, the history of watermarks, the properties of a watermarking system, and the types and applications of watermarking. We propose edge detection using Gabor filters, and a least significant bit (LSB) substitution method to embed the message in the watermark image file. The benefit of LSB substitution is its simplicity: the bits of the message are embedded directly into the LSB plane of the cover image, and many techniques build on this method. LSB substitution does not produce a humanly perceptible difference, because the amplitude of the change is small; to the human eye the resulting stego image looks identical to the cover image, which gives LSB high perceptual transparency. The spatial-domain LSB substitution technique can use a pseudo-random number generator to determine the pixels to be used for embedding based on a given key. We also use DCT-transform watermarking algorithms, chosen for robustness. The watermarking robustness has been calculated by the Peak Signal

  18. Study on adaptable cardiopulmonary resuscitation technique on the train

    Institute of Scientific and Technical Information of China (English)

    周娟; 王仙园; 李雪薇; 程琳

    2011-01-01

    Objective: To explore adaptable cardiopulmonary resuscitation (CPR) techniques on the train. Methods: Based on the 2005 international cardiopulmonary resuscitation guidelines, operators were trained in the single-rescuer method on land. After mastering the technique accurately, they performed CPR on a moving train, and the results of the resuscitation were evaluated to find the causes of failed resuscitation on the train. Improved techniques and methods were then developed and applied to carry out CPR on a moving train again, and the correct rate was evaluated. Results: Using the technique as trained on land, the correct rate of artificial respiration and chest compressions performed on the train was lower than that achieved on land, with a statistically significant difference (P<0.01). The correct rate of artificial respiration and chest compressions using the improved CPR technique on the train was significantly higher than before the improvement (P<0.01). Conclusion: A specific environment requires correspondingly adapted nursing techniques. The improved CPR technique meets the rescue needs on a train well, and a preliminary protocol for CPR on trains has been formed.

  19. Adaptive digital calibration techniques for narrow band low-IF receivers with on-chip PLL

    Institute of Scientific and Technical Information of China (English)

    Li Juan; Zhang Huajiang; Zhao Feng; Hong Zhiliang

    2009-01-01

    Digital calibration and control techniques for narrow band integrated low-IF receivers with an on-chip frequency synthesizer are presented. The calibration and control system, adopted to ensure an achievable signal-to-noise ratio and bit error rate, consists of a digitally controlled, high-resolution dB-linear automatic gain control (AGC), an in-phase (I) and quadrature (Q) gain and phase mismatch calibration, and an automatic frequency calibration (AFC) of a wideband voltage-controlled oscillator in a PLL-based frequency synthesizer. The calibration system has low design complexity, low power consumption and a small die area. Simulation results show that the calibration system can enlarge the dynamic range to 72 dB and minimize the phase and amplitude imbalance between I and Q to 0.08° and 0.024 dB, respectively, corresponding to an image rejection ratio better than 60 dB. In addition, the calibration time of the AFC is only 1.12 μs with a 100 MHz reference clock.
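
    A dB-linear AGC of the kind described means each control-code step changes the gain by a fixed number of decibels, i.e. the linear gain is exponential in the code. A small illustrative sketch; the step size and code range below are assumptions, chosen only to reproduce the 72 dB dynamic range mentioned:

```python
from math import log10

def agc_gain(code, step_db=0.5, min_db=-36.0):
    """Map a digital control code to a dB-linear voltage gain (assumed 0.5 dB/step)."""
    return 10 ** ((min_db + code * step_db) / 20)

# 145 codes at 0.5 dB per step span 144 * 0.5 = 72 dB of dynamic range
span_db = 20 * log10(agc_gain(144) / agc_gain(0))
print(round(span_db, 6))
```

    The dB-linear characteristic keeps the loop settling behavior independent of the current gain setting, which is why this shape is preferred for digitally controlled AGCs.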

  20. An Improved Character Segmentation Algorithm Based on Local Adaptive Thresholding Technique for Chinese NvShu Documents

    Directory of Open Access Journals (Sweden)

    Yangguang Sun

    2014-06-01

    Full Text Available For the structural characteristics of Chinese NvShu characters, an improved character segmentation algorithm based on a local adaptive thresholding technique for Chinese NvShu documents is presented, combining the basic idea of the LLT local threshold algorithm with the maximal between-class variance (Otsu) algorithm applied within local windows. By designing corresponding correction parameters for the threshold and using a secondary search mechanism, the proposed method can not only obtain the local threshold automatically, but also avoid losing character image information, improving the accuracy of character image segmentation. Experimental results demonstrate its capability to reduce the effect of background noise, especially for Chinese NvShu character images with uneven illumination and low contrast.
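
    The core of this approach, the maximal between-class variance criterion evaluated inside local windows, can be sketched as below. This is a bare-bones version without the paper's correction parameters or secondary search mechanism:

```python
import numpy as np

def otsu_threshold(block):
    """Threshold maximizing between-class variance within one block."""
    hist = np.bincount(block.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 255, -1.0   # uniform blocks fall back to "all background"
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                          # class-0 mean
        m1 = (sum_all - sum0) / (total - w0)    # class-1 mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def local_otsu_binarize(img, win=32):
    """Binarize with a separate Otsu threshold per non-overlapping window."""
    out = np.zeros_like(img)
    for r in range(0, img.shape[0], win):
        for c in range(0, img.shape[1], win):
            blk = img[r:r + win, c:c + win]
            out[r:r + win, c:c + win] = np.where(blk > otsu_threshold(blk), 255, 0)
    return out
```

    Because each window gets its own threshold, strokes remain separable even when the background brightness drifts across the page, which is the uneven-illumination case the abstract highlights.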

  1. Sorting of pistachio nuts using image processing techniques and an adaptive neural-fuzzy inference system

    Directory of Open Access Journals (Sweden)

    A. R Abdollahnejad Barough

    2016-04-01

    . Finally, the total amount of the second moment (m2) and the image matrix vectors were selected as features. The features and the rules produced from a decision tree were fed into an Adaptive Neuro-Fuzzy Inference System (ANFIS), a neural network based on a Fuzzy Inference System (FIS) that can produce appropriate outputs for corresponding input patterns. Results and Discussion: The proposed model was trained and tested in the ANFIS Editor of MATLAB. 300 images, including closed-shell, pithy and empty pistachios, were selected for training and testing. The network used 200 data points related to these two features and was trained over 200 epochs, reaching an accuracy of 95.8%. 100 images were used to test the network over 40 epochs, with 97% accuracy. The training and testing steps took 0.73 and 0.31 seconds, respectively, and selecting the features and rules took 2.1 seconds. Conclusions: In this study, a model was introduced to sort non-split, blank and filled pistachio nuts. Evaluation of training and testing shows that the model can classify different types of nuts with high precision. Previously proposed methods merely sorted non-split and split pistachio nuts and could not recognize whether a nut was filled or blank; the accuracy of that method was 95.56 percent. Another method sorted non-split and split pistachio nuts with accuracies of 98% and 85% for the training and testing steps, respectively. The model proposed in this study outperforms these methods and is encouraging for further improvement and development.

  2. One-step synthesis of hybrid inorganic-organic nanocomposite coatings by novel laser adaptive ablation deposition technique

    Science.gov (United States)

    Serbezov, Valery; Sotirov, Sotir

    2013-03-01

    A novel approach for one-step synthesis of hybrid inorganic-organic nanocomposite coatings by a new modification of Pulsed Laser Deposition technology, called Laser Adaptive Ablation Deposition (LAAD), is presented. Hybrid nanocomposite coatings including Mg-Rapamycin and Mg-Desoximetasone were produced by a UV TEA N2 laser under low vacuum (0.1 Pa) at room temperature onto substrates of SS 316L, KCl and NaCl. The laser fluence was 1.8 J/cm2 for the Mg alloy, 0.176 J/cm2 for Desoximetasone and 0.118 J/cm2 for Rapamycin. A three-dimensional, two-segmented single target was used to adapt the interaction of the focused laser beam with the inorganic and organic materials. Magnesium alloy nanoparticles with sizes from 50 nm to 250 nm were obtained in the organic matrices. The morphology of the nanocomposite films was studied by bright field/fluorescence optical microscopy and Scanning Electron Microscopy (SEM). Fourier Transform Infrared (FTIR) spectroscopy measurements were applied to study the functional properties of the organic component before and after the LAAD process. Energy Dispersive X-ray Spectroscopy (EDX) was used to confirm the presence of the Mg alloy in the hybrid nanocomposite coatings. The precise control of process parameters, and particularly the adjustment of the laser fluence, enables the transfer of materials with different physical and chemical properties and the one-step synthesis of complex inorganic-organic nanocomposite coatings.

  3. Adaptation des techniques de forage à la recherche et à l'équipement des stockages souterrains de gaz naturel / Adapting Drilling Techniques to the Search for and Equipment of Underground Natural-Gas Storage Facilities

    Directory of Open Access Journals (Sweden)

    Grandin J.

    2006-11-01

    Full Text Available To match the annual modulation of gas consumption, Gaz de France decided some thirty years ago to store gas either in aquifers or in salt cavities. The search for geological structures suitable for receiving this gas, the drilling and equipping of exploration and production wells, the maintenance of these wells and the monitoring of the storage facilities were entrusted to the Underground Reservoirs Department of the Direction des études et techniques nouvelles. The exploration and development phases of an underground storage facility require the drilling of oil-industry-type wells. These wells make it possible, on the one hand, to recover as much information as possible about the different geological layers crossed and to evaluate their suitability for storing gas and, on the other hand, to ensure optimal and reliable operation of the storage under the best safety conditions. The experience Gaz de France has acquired in drilling these wells has enabled it to adapt many petroleum drilling techniques to underground storage. In parallel, original processes have been developed to meet certain specific requirements of drilling underground-reservoir wells, notably those concerning the proper calibration of the drilled hole and the quality of casing cementing. This article presents all of these adaptations and original practices; although it claims to be nothing more than a contribution to the chapter of petroleum technology dealing with the specific problems of drilling for underground storage, some of these practices should, in return, find interesting applications in medium-depth petroleum drilling.

  4. Applying the framework for culturally responsive teaching to explore the adaptations that teach first beginning teachers use to meet the needs of their pupils in school

    Directory of Open Access Journals (Sweden)

    Alison Hramiak

    2015-12-01

    Full Text Available Previous research has shown that beginning teachers are capable of adapting their practice to the needs of ethnically diverse pupils. This paper investigates the possibility that such teachers were developing their practice into what I have termed culturally adaptive teaching. A variety of methods were used to collect qualitative data that focused on the perspectives of teachers in schools across Yorkshire and Humberside (UK) over the course of an academic year. The framework for culturally responsive teaching (CRT) was used as a lens through which to analyse the data collected. It enabled findings to emerge that took the framework beyond CRT, to one of culturally adaptive teaching. Teachers continually adapted their practice, in terms of cultural sensitivity, to better meet the needs of their pupils. If we can apply this framework and support beginning teachers to help them understand issues of cultural diversity in the classroom, we might be able to engender real systematic change in teaching for the benefit of pupils.

  5. Estimation of water quality parameters applying satellite data fusion and mining techniques in the lake Albufera de Valencia (Spain)

    Science.gov (United States)

    Doña, Carolina; Chang, Ni-Bin; Vannah, Benjamin W.; Sánchez, Juan Manuel; Delegido, Jesús; Camacho, Antonio; Caselles, Vicente

    2014-05-01

    Linked to the enforcement of the European Water Framework Directive (2000) (WFD), which establishes that all countries of the European Union have to avoid deterioration of their water bodies, improve and restore their status, and maintain their good ecological status, several remote sensing studies have been carried out to monitor and understand trends in water quality variables. Lake Albufera de Valencia (Spain) is a hypereutrophic system that can present chlorophyll a concentrations over 200 mg·m-3 and transparency (Secchi disk) values below 20 cm, so its water quality needs to be restored and improved. The principal aim of our work was to develop algorithms to estimate water quality parameters such as chlorophyll a concentration and water transparency, which are informative of the eutrophication and ecological status, using remote sensing data. Remote sensing data from Terra/MODIS, Landsat 5-TM and Landsat 7-ETM+ images were used to carry out this study. Landsat images are useful for analyzing the spatial variability of water quality variables and for monitoring small to medium size water bodies thanks to their 30-m spatial resolution, but the poor temporal resolution of Landsat, with a 16-day revisit time, is an issue. In this work we address this data gap by applying fusion techniques between Landsat and MODIS images: although the spatial resolution of MODIS is coarser (250/500 m), one image per day is available. Thus, synthetic Landsat images were created using data fusion for dates with no acquisitions. Good correlation values were obtained when comparing original and synthetic Landsat images. Genetic programming was used to develop models for predicting water quality. Using the reflectance bands of the synthetic Landsat images as inputs to the model, values of R2 = 0.94 and RMSE = 8 mg·m-3 were obtained when comparing modeled and observed values of chlorophyll a, and values of R2 = 0.91 and RMSE = 4 cm for transparency (Secchi disk). Finally, concentration
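
    The agreement metrics reported (R² and RMSE between modeled and observed chlorophyll a or Secchi depth) are straightforward to compute. A minimal sketch, with made-up numbers rather than the paper's data:

```python
import numpy as np

def r2_rmse(observed, modeled):
    """Coefficient of determination and root-mean-square error."""
    obs = np.asarray(observed, dtype=float)
    mod = np.asarray(modeled, dtype=float)
    resid = obs - mod
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    ss_res = float(np.sum(resid ** 2))
    ss_tot = float(np.sum((obs - obs.mean()) ** 2))
    return 1.0 - ss_res / ss_tot, rmse

# hypothetical chlorophyll a values (mg per cubic metre), for illustration only
obs = [120.0, 150.0, 90.0, 200.0]
mod = [110.0, 160.0, 95.0, 190.0]
r2, rmse = r2_rmse(obs, mod)
```

    R² measures how much of the observed variance the model captures, while RMSE reports the typical error in the variable's own units, which is why the paper quotes both.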

  6. Feasibility to apply the steam assisted gravity drainage (SAGD) technique in the country's heavy crude-oil fields

    International Nuclear Information System (INIS)

    The steam assisted gravity drainage (SAGD) processes are among the most efficient and profitable technologies for the production of heavy crude oils and oil sands. These processes involve the drilling of a pair of parallel horizontal wells, separated by a vertical distance and located near the base of the oil field. The upper well is used to continuously inject steam into the zone of interest, while the lower well collects all resulting fluids (oil, condensate and formation water) and takes them to the surface (Butler, 1994). This technology has been successfully implemented in countries such as Canada, Venezuela and the United States, reaching recovery factors in excess of 50%. This article provides an overview of the technique's operating mechanism and the most relevant characteristics of the process, as well as the various categories this technology is divided into, including their advantages and limitations. Furthermore, the article sets out the minimal oil-field conditions under which the SAGD process is efficient; integrated into a series of mathematical models, these conditions allow forecasts of production, thermal efficiency (ODR) and recoverable oil, provided it is technically feasible to apply this technique to a given oil field. The information and concepts compiled during this research prompted the development of software that may be used as an information, analysis and interpretation tool to predict and quantify this technology's performance. Based on the article, preliminary studies were started for the country's heavy crude-oil fields, identifying which ones meet the minimum conditions for the successful development of a pilot project

  7. Fate of nitrogen in soil-crop system by nuclear techniques. Effects of applied rate of ammonium bicarbonate

    International Nuclear Information System (INIS)

    The experiment was conducted with 15N tracing techniques in Shijiazhuang from 1994 to 1995. Three nitrogen rates, namely the optimum rate (150 kg/hm2) based on the recommendation of local farmers, 50% above the optimum rate (225 kg/hm2) and 50% below the optimum rate (75 kg/hm2), were selected to study the effect of the application rate of ammonium bicarbonate on the yield of winter wheat and the fate of applied nitrogen under local management and irrigated conditions. The results showed that nitrogen uptake and grain yield of wheat under the fertilized treatments were higher than in the unfertilized treatment (except the 225 kg/hm2 treatment). The highest yield and above-ground dry matter weight (grain 6.80 t/hm2, above-ground biomass 14.70 t/hm2) were obtained in the optimum N treatment (150 kg/hm2), while the highest nitrogen recovery efficiency of ammonium bicarbonate by winter wheat (38.5%) was found in the 75 kg/hm2 treatment, due to the relatively high basic fertility of the field. Nitrogen recovery efficiency of ammonium bicarbonate decreased with increasing N application rate. The highest residue of fertilizer N was found in the 225 kg/hm2 treatment, with 46% of the residue in the top layer of the soil (0∼50 cm). According to the 15N balance calculation in the soil-plant system, the unaccounted fertilizer N was 30.20%, 36.56% and 31.25% in the 75 kg/hm2, 150 kg/hm2 and 225 kg/hm2 treatments, respectively. The effect of residual N in the soil on the next crop, maize, was best in the 225 kg/hm2 treatment among the three fertilized treatments, suggesting the possibility of nitrate leaching in that treatment. (15 tabs.)
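
    The 15N balance bookkeeping behind figures like these reduces to simple fractions of the applied N. A sketch with illustrative numbers, not the experiment's raw data:

```python
def n15_balance(applied, crop_uptake_from_fert, soil_residue_from_fert):
    """Partition applied fertilizer N (all in kg/hm2) into recovery,
    soil-residue and unaccounted (loss) fractions, as in a 15N balance."""
    recovery = crop_uptake_from_fert / applied
    residue = soil_residue_from_fert / applied
    unaccounted = 1.0 - recovery - residue
    return recovery, residue, unaccounted

# hypothetical values: 150 kg/hm2 applied, 52.5 kg/hm2 in the crop,
# 45.0 kg/hm2 left in the soil profile
rec, res, lost = n15_balance(150.0, 52.5, 45.0)
```

    The "unaccounted" fraction lumps together gaseous losses and leaching below the sampled depth, which is why a large residue plus a large unaccounted share is read as a leaching risk.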

  8. Test-retest reliability of a new technique with pressure algometry applied to teeth in healthy Chinese individuals.

    Science.gov (United States)

    Liu, Ran; Gu, Xinyu; Zhang, Jinglu; Yu, Linfeng; Chen, Wenjing; Wang, Kelun; Svensson, Peter

    2016-06-01

    Pressure pain thresholds (PPTs) have been shown to be useful measures of mechanical pain sensitivity in deep tissues. However, clinical methods for measuring mechanical allodynia or hyperalgesia in teeth have not been reported. The aim of this study was to assess the reliability of PPTs in the periodontal ligament of healthy Chinese participants. Twenty healthy young adults participated. Pressure pain thresholds were measured at six teeth and in two directions. The tests included three consecutive trials, in two separate sessions, which were performed on the first day by one examiner. After 1-3 wk, an identical protocol was carried out by two examiners, also in two separate sessions. There were no significant differences between repeated measures for all teeth. The PPTs had excellent reliability with high intraclass correlation coefficients (ICCs) across different sessions (ICC: 0.871-0.956), days (ICC: 0.879-0.951), and examiners (ICC: 0.845-0.950). Pressure pain thresholds applied to the teeth have excellent intra- and inter-examiner agreement in healthy participants. This method may be proposed as an easy and reliable technique to assess mechanical pain sensitivity (e.g. mechanical allodynia and hyperalgesia) in the periodontal ligament, which is associated with endodontic or periodontal conditions. PMID:27017942
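
    For readers unfamiliar with ICCs: assuming the one-way random-effects, single-measure form ICC(1,1) (the paper may well use a two-way model instead), the coefficient is computed from the between- and within-subject mean squares:

```python
import numpy as np

def icc_1_1(data):
    """One-way random-effects, single-measure ICC(1,1).

    Rows are subjects, columns are repeated measurements."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    ms_between = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((data - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

    Perfectly repeatable measurements give ICC = 1; values above roughly 0.9, as reported here, are conventionally read as excellent reliability.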

  9. UQ and V&V techniques applied to experiments and simulations of heated pipes pressurized to failure.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente Jose; Dempsey, J. Franklin; Antoun, Bonnie R.

    2014-05-01

    This report demonstrates versatile and practical model validation and uncertainty quantification techniques applied to the accuracy assessment of a computational model of heated steel pipes pressurized to failure. The Real Space validation methodology segregates aleatory and epistemic uncertainties to form straightforward model validation metrics especially suited for assessing models to be used in the analysis of performance and safety margins. The methodology handles difficulties associated with representing and propagating interval and/or probabilistic uncertainties from multiple correlated and uncorrelated sources in the experiments and simulations including: material variability characterized by non-parametric random functions (discrete temperature dependent stress-strain curves); very limited (sparse) experimental data at the coupon testing level for material characterization and at the pipe-test validation level; boundary condition reconstruction uncertainties from spatially sparse sensor data; normalization of pipe experimental responses for measured input-condition differences among tests and for random and systematic uncertainties in measurement/processing/inference of experimental inputs and outputs; numerical solution uncertainty from model discretization and solver effects.
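
    One way to picture the aleatory/epistemic segregation the report describes: sample the aleatory material variability inside a Monte Carlo loop while sweeping the epistemic quantity over its interval endpoints, so each endpoint yields its own response distribution. The sketch below is a toy illustration using Barlow's thin-wall burst-pressure formula with invented numbers, not the report's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def burst_pressure(strength, thickness, diameter):
    """Barlow's formula for a thin-walled pipe: P = 2*S*t/D."""
    return 2.0 * strength * thickness / diameter

diameter = 150.0                  # mm, assumed known exactly
thickness_interval = (5.8, 6.2)   # mm, epistemic interval (sparse data)
results = {}
for t in thickness_interval:      # outer loop: epistemic endpoints
    # inner loop: aleatory pipe-to-pipe strength variability (MPa)
    strengths = rng.normal(450.0, 25.0, 10_000)
    p = burst_pressure(strengths, t, diameter)
    results[t] = (float(p.mean()), float(np.percentile(p, 5)))
# results now bounds a family of burst-pressure distributions
```

    Keeping the two uncertainty types in separate loops is what lets margins be stated as a probability band (aleatory) bracketed by an interval (epistemic), rather than one blended distribution.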

  10. Information Extraction Techniques Applied to B2C E-Commerce

    Institute of Scientific and Technical Information of China (English)

    于琨; 蔡智; 糜仲春; 蔡庆生

    2002-01-01

    After emerging from the trough of the Internet economy, B2C e-commerce is about to enter a new climax of development, and information extraction techniques are set to become one of the most important factors promoting it. In this paper, we review recent progress in information extraction techniques applied to B2C e-commerce and appraise the characteristics of each technique.

  11. An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.

    Science.gov (United States)

    Liu, Zhiyuan; Wang, Changhui

    2015-01-01

    In this paper, a new method for mass air flow (MAF) sensor error compensation and an online updating error map (or lookup table) due to installation and aging in a diesel engine is developed. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and parameter vector using a membership function. With the combination of the 2D map regression model and the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under the conditions of persistent excitation and given inequalities. The observer is validated against the simulation data from engine software enDYNA provided by Tesis. The results demonstrate that the operating point-dependent error of the MAF sensor can be approximated acceptably by the 2D map from the proposed method. PMID:26512675
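The abstract's key representation step, writing a bilinearly interpolated 2D map as a dot product between a membership-function regression vector and a parameter vector, can be sketched as follows (the grid and node values are illustrative, not the paper's engine map):

```python
import numpy as np

def regressor(x, y, xg, yg):
    """Membership-function regression vector phi for a bilinear 2D map,
    so that map(x, y) = phi @ theta, where theta stacks the node values
    row-major over the (xg, yg) grid."""
    i = int(np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2))
    j = int(np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2))
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])
    n = len(yg)
    phi = np.zeros(len(xg) * n)
    phi[i * n + j] = (1 - tx) * (1 - ty)
    phi[i * n + j + 1] = (1 - tx) * ty
    phi[(i + 1) * n + j] = tx * (1 - ty)
    phi[(i + 1) * n + j + 1] = tx * ty
    return phi

# toy check: theta holds f(x, y) = x + y at the nodes; bilinear
# interpolation reproduces a linear function exactly
xg, yg = np.array([0.0, 1.0, 2.0]), np.array([0.0, 1.0])
theta = np.array([x + y for x in xg for y in yg])
print(regressor(0.5, 0.5, xg, yg) @ theta)  # 1.0
```

Because the map evaluation is linear in theta, the node values can be estimated jointly with the states by an adaptive observer, which is the point of the paper.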

  12. Applying a New Adaptive Genetic Algorithm to Study the Layout of Drilling Equipment in Semisubmersible Drilling Platforms

    Directory of Open Access Journals (Sweden)

    Wensheng Xiao

    2015-01-01

    Full Text Available This study proposes a new selection method, called trisection population, for genetic algorithm selection operations. In this new algorithm, the fittest 2N/3 of the parent individuals are genetically manipulated to reproduce offspring. This selection method ensures a high rate of effective population evolution and overcomes the tendency of the population to fall into local optima. Rastrigin’s test function was selected to verify the superiority of the method. Based on the characteristics of the arctangent function, adaptive methods for the genetic algorithm’s crossover and mutation probabilities were proposed. These allow individuals close to the average fitness to undergo crossover and mutation with greater probability, while individuals close to the maximum fitness are not easily destroyed. This study also analyzed the equipment layout constraints and objective functions of deep-water semisubmersible drilling platforms, and the improved genetic algorithm was used to solve the layout plan. Optimization results demonstrate the effectiveness of the improved algorithm and the suitability of the resulting layout plans.
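The abstract gives only the qualitative shape of the arctangent-based adaptation, so the following is a hypothetical schedule with assumed constants (p_hi, p_lo, k) illustrating the stated behaviour: near-average individuals get a high crossover/mutation probability, while near-best individuals are protected:

```python
import math

def adaptive_prob(f, f_avg, f_max, p_hi=0.9, p_lo=0.1, k=5.0):
    """Hypothetical arctan-shaped schedule: individuals at or below the
    average fitness keep the high probability p_hi; above it, the
    probability decays smoothly toward p_lo so elite genes survive."""
    if f <= f_avg:
        return p_hi
    d = (f - f_avg) / max(f_max - f_avg, 1e-12)  # normalised in [0, 1]
    return p_hi - (p_hi - p_lo) * (2.0 / math.pi) * math.atan(k * d)

print(adaptive_prob(0.5, 0.5, 1.0))        # 0.9: average individual
print(adaptive_prob(1.0, 0.5, 1.0) < 0.3)  # True: best individual protected
```

The arctangent's saturating shape is what makes the decay steep near the average and flat near the maximum, matching the behaviour the abstract describes.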

  13. An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.

    Science.gov (United States)

    Liu, Zhiyuan; Wang, Changhui

    2015-10-23

    In this paper, a new method for mass air flow (MAF) sensor error compensation and an online updating error map (or lookup table) due to installation and aging in a diesel engine is developed. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and parameter vector using a membership function. With the combination of the 2D map regression model and the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under the conditions of persistent excitation and given inequalities. The observer is validated against the simulation data from engine software enDYNA provided by Tesis. The results demonstrate that the operating point-dependent error of the MAF sensor can be approximated acceptably by the 2D map from the proposed method.

  14. Adaptive filtering techniques for gravitational wave interferometric data Removing long-term sinusoidal disturbances and oscillatory transients

    CERN Document Server

    Chassande-Mottin, E

    2001-01-01

    Experience gained from gravitational wave detector prototypes shows that the interferometric output signal will be corrupted by a significant amount of non-Gaussian noise, a large part of it being essentially composed of long-term sinusoids with slowly varying envelope (such as violin resonances in the suspensions, or main power harmonics) and short-term ringdown noise (which may emanate from servo control systems, electronics in a non-linear state, etc.). Since non-Gaussian noise components make the detection and estimation of the gravitational wave signature more difficult, a denoising algorithm based on adaptive filtering techniques (LMS methods) is proposed to separate and extract them from the stationary and Gaussian background noise. The strength of the method is that it does not require any precise model of the observed data: the signals are distinguished on the basis of their autocorrelation time. We believe that the robustness and simplicity of this method make it useful for data prep...
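A minimal sketch of the LMS idea described above, here as an adaptive line enhancer that separates a long-term sinusoid from broadband noise using only a delayed copy of the signal as its reference (all parameters are illustrative, not the detector pipeline's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
t = np.arange(n)
sine = np.sin(2 * np.pi * 0.05 * t)      # long-term sinusoidal disturbance
x = sine + 0.3 * rng.standard_normal(n)  # plus broadband background noise

# Adaptive line enhancer: an LMS filter predicts the current sample from a
# delayed copy of the signal; only the narrowband sinusoid (long
# autocorrelation time) is predictable, so it is cancelled and the
# residual approximates the background noise.
delay, taps, mu = 10, 16, 0.002
w = np.zeros(taps)
residual = np.zeros(n)
for k in range(delay + taps, n):
    ref = x[k - delay - taps:k - delay][::-1]  # delayed regressor
    y = w @ ref                                # sinusoid estimate
    e = x[k] - y                               # residual after cancellation
    w += 2 * mu * e * ref                      # LMS weight update
    residual[k] = e

# after convergence the residual variance approaches the noise power
print(np.var(residual[2000:]) < np.var(x[2000:]))  # True
```

This is exactly the autocorrelation-time separation the abstract mentions: the white background is unpredictable at lag `delay`, so it passes through to the residual untouched.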

  15. Marginal adaptation of class II resin composite restorations using incremental and bulk placement techniques: an ESEM study.

    Science.gov (United States)

    Idriss, S; Habib, C; Abduljabbar, T; Omar, R

    2003-10-01

    This in vitro study compared marginal gap formation in class II resin composite restorations. Forty caries-free extracted molars were prepared in a standardized manner for class II restoration by one of four methods: bulk- or incrementally-placed light-activated resin composite (Amelogen), and bulk- or incrementally-placed chemically activated composite (Rapidfill). The restored teeth, after finishing, polishing and thermocycling, were examined using environmental scanning electron microscopy. Marginal gap measurements at predetermined facial and lingual margin sites showed no significant differences between the two sites within any of the groups. Both the light- and the chemically-activated restorations showed no significant differences in mean marginal gap sizes whether they were placed by incremental or bulk techniques. Amelogen restorations placed by both methods had significantly larger margin gaps than those of each of the Rapidfill groups. Regardless of the placement technique's effect on the quality of marginal adaptation, both of the chemically activated resin composite restorations produced significantly smaller marginal gaps than both the bulk- and incrementally-placed light-activated composites. PMID:12974860

  16. Adaptive Data Processing Technique for Lidar-Assisted Control to Bridge the Gap between Lidar Systems and Wind Turbines: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Schlipf, David; Raach, Steffen; Haizmann, Florian; Cheng, Po Wen; Fleming, Paul; Scholbrock, Andrew; Krishnamurthy, Raghu; Boquet, Mathieu

    2015-12-14

    This paper presents first steps toward an adaptive lidar data processing technique crucial for lidar-assisted control in wind turbines. The prediction time and the quality of the wind preview from lidar measurements depend on several factors and are not constant. If the data processing is not continually adjusted, the benefit of lidar-assisted control cannot be fully exploited, or can even result in harmful control action. An online analysis of the lidar and turbine data is necessary to continually reassess the prediction time and lidar data quality. In this work, a structured process to develop an analysis tool for the prediction time and a new hardware setup for lidar-assisted control are presented. The tool consists of an online estimation of the rotor effective wind speed from lidar and turbine data and the implementation of an online cross-correlation to determine the time shift between both signals. Further, initial results from an ongoing campaign in which this system was employed for providing lidar preview for feed-forward pitch control are presented.

  17. A comparison between single exponential smoothing (SES), double exponential smoothing (DES), Holt’s (Brown) and adaptive response rate exponential smoothing (ARRES) techniques in forecasting Malaysia population

    Directory of Open Access Journals (Sweden)

    Ahmad Nazim Aimran

    2014-09-01

    Full Text Available This research considers techniques that are helpful in forecasting univariate time series data: Single Exponential Smoothing (SES), Double Exponential Smoothing (DES), Holt’s (Brown) and Adaptive Response Rate Exponential Smoothing (ARRES). For the purpose of this study, secondary data on the Malaysia population covering the period 1957 up to 2013 was obtained from the Department of Statistics Malaysia. From the results obtained, Holt’s method was found to be the best method to forecast the Malaysia population, since it produces the lowest Mean Square Error (MSE) value, 38,273.3, compared to 210,480.29 for SES, 38,827.7 for DES and 209,835.8 for ARRES. Keywords: Univariate, Forecasting, Single Exponential Smoothing, Double Exponential Smoothing, Adaptive Response Rate Exponential Smoothing, Holt’s (Brown).
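For reference, one-step-ahead SES and Holt's (double) smoothing forecasts can be sketched as below; on a steadily trending series such as a population, Holt's trend term is what gives it the lower MSE reported above (the smoothing constants here are illustrative, not the study's fitted values):

```python
def mse(actual, forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(forecast)

def ses_forecasts(y, alpha=0.5):
    """One-step-ahead single exponential smoothing forecasts for y[1:]."""
    s, out = y[0], []
    for obs in y[:-1]:
        s = alpha * obs + (1 - alpha) * s
        out.append(s)
    return out

def holt_forecasts(y, alpha=0.5, beta=0.3):
    """One-step-ahead Holt (level + trend, i.e. double) forecasts for y[1:]."""
    level, trend, out = y[0], y[1] - y[0], []
    for obs in y[1:]:
        out.append(level + trend)
        new_level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return out

# a steadily growing series, like a population: Holt tracks the trend,
# while SES systematically lags behind it
pop = [100.0 + 3.0 * t for t in range(20)]
print(mse(pop[1:], ses_forecasts(pop)) > mse(pop[1:], holt_forecasts(pop)))  # True
```

On exactly linear data Holt's forecasts are error-free, while SES settles into a constant lag proportional to the trend, which is the qualitative gap behind the MSE ranking above.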

  18. Fast and Accurate Practical Positioning Method using Enhanced-Lateration Technique and Adaptive Propagation Model in GSM Mode

    Directory of Open Access Journals (Sweden)

    Mohamed H. Abdel Meniem

    2012-03-01

    Full Text Available In this paper, we consider the problem of positioning mobile phones. Different approaches have been produced for this purpose using GPS, WiFi, GSM, UMTS and the other sensors that exist in today's smart phones. Location awareness in general is attracting tremendous interest in different fields and scopes, and position is the key element of context awareness. Although GPS produces an accurate position, it requires an open sky and does not work indoors. We produce an innovative, robust technique for positioning that can be applied in a terminal-based or network-based architecture. It depends only on Received Signal Strength (RSS) and the location of the Base Transceiver Station (BTS). This work has been completely tested and analyzed on Egyptian roads using realistic data and a commercial Android smart phone. In general, all performance evaluation results were good: the mean positioning error was about 120 m in urban areas and 394 m in rural areas.
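The RSS-plus-BTS-location idea can be sketched with a log-distance propagation model and least-squares lateration; the path-loss parameters and BTS coordinates below are assumptions for illustration, not the paper's calibrated adaptive model:

```python
import math
import numpy as np

def rss_to_distance(rss_dbm, rss_1m=-40.0, n=3.0):
    """Log-distance path-loss model (assumed reference power and exponent):
    distance in metres estimated from received signal strength in dBm."""
    return 10 ** ((rss_1m - rss_dbm) / (10 * n))

def laterate(bts, dists):
    """Linear least-squares lateration: subtract the first circle equation
    from the others to get a linear system in the unknown position."""
    (x0, y0), d0 = bts[0], dists[0]
    A, b = [], []
    for (x, y), d in zip(bts[1:], dists[1:]):
        A.append([2 * (x - x0), 2 * (y - y0)])
        b.append(d0**2 - d**2 + x**2 - x0**2 + y**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

bts = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]  # assumed BTS positions (m)
true = (400.0, 300.0)
dists = [math.dist(true, p) for p in bts]          # ideal RSS-derived ranges
print(laterate(bts, dists))  # ~[400. 300.]
```

In practice the ranges come from `rss_to_distance` with an adaptively tuned exponent, and the residual of the least-squares fit indicates how trustworthy the fix is.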

  19. A review of the Match technique as applied to AASE-2/EASOE and SOLVE/THESEO 2000

    Directory of Open Access Journals (Sweden)

    G. A. Morris

    2005-01-01

    Full Text Available We apply the NASA Goddard Trajectory Model to data from a series of ozonesondes to derive ozone loss rates in the lower stratosphere for the AASE-2/EASOE mission (January-March 1992) and for the SOLVE/THESEO 2000 mission (January-March 2000) in an approach similar to Match. Ozone loss rates are computed by comparing the ozone concentrations provided by ozonesondes launched at the beginning and end of the trajectories connecting the launches. We investigate the sensitivity of the Match results to the various parameters used to reject potential matches in the original Match technique. While these filters effectively eliminate from consideration 80% of the matched sonde pairs and >99% of matched observations in our study, we conclude that only a filter based on potential vorticity changes along the calculated back trajectories seems warranted. Our study also demonstrates that the ozone loss rates estimated in Match can vary by up to a factor of two depending upon the precise trajectory paths calculated for each trajectory. As a result, the statistical uncertainties published with previous Match results might need to be augmented by an additional systematic error. The sensitivity to the trajectory path is particularly pronounced in the month of January, for which the largest ozone loss rate discrepancies between photochemical models and Match are found. For most of the two study periods, our ozone loss rates agree with those previously published. Notable exceptions are found for January 1992 at 475K and late February/early March 2000 at 450K, both periods during which we generally find smaller loss rates than the previous Match studies. Integrated ozone loss rates estimated by Match in both of those years compare well with those found in numerous other studies and in a potential vorticity/potential temperature approach shown previously and in this paper. Finally, we suggest an alternate approach to Match using trajectory mapping. This approach uses

  20. Development of a superconductor magnetic suspension and balance prototype facility for studying the feasibility of applying this technique to large scale aerodynamic testing

    Science.gov (United States)

    Zapata, R. N.; Humphris, R. R.; Henderson, K. C.

    1975-01-01

    The basic research and development work towards proving the feasibility of operating an all-superconductor magnetic suspension and balance device for aerodynamic testing is presented. The feasibility of applying a quasi-six-degree-of-freedom free support technique to dynamic stability research was studied, along with the design concepts and parameters for applying magnetic suspension techniques to large-scale aerodynamic facilities. A prototype aerodynamic test facility was implemented. Relevant aspects of the development of the prototype facility are described in three sections: (1) design characteristics; (2) operational characteristics; and (3) scaling to larger facilities.

  1. [Oxygen-dependent energy deficit as related to the problems of ontogenetic development disorders and human sociobiological adaptation (theoretical and applied aspects)].

    Science.gov (United States)

    Ilyukhina, V A; Kataeva, G V; Korotkov, A D; Chernysheva, E M

    2015-01-01

    The review states and argues theoretical propositions on the pathogenetic role of pre- and perinatal hypoxic-ischemic brain damage in the formation of a sustained oxygen-dependent energy deficit underlying, in further ontogenesis, the following neurobiological abnormalities: a) a decline in the level of health and the compensatory-adaptive capacities of the organism; b) disorders of psycho-speech development and adaptive behavior in children; c) early development of neuropsychic diseases; d) the addition of other types of brain energy metabolism disorders (including glucose metabolism) in chronic polyetiologic diseases of young and middle-aged individuals. We highlight and theoretically substantiate the integrated physiological parameters of the oxygen-dependent energy deficit types. We address the features of abnormalities in the neuroreflectory and neurohumoral regulatory mechanisms of the wakefulness level and its vegetative and hemodynamic provision in different types of energy deficit in children with DSMD, ADHD and school maladjustment. The use of state-of-the-art neuroimaging techniques significantly increased the possibility of detecting the disintegration of regulatory processes and cognitive functions in children with psycho-speech delays and in a wide range of chronic polyetiologic diseases.

  2. Evaluation of image quality and radiation dose by adaptive statistical iterative reconstruction technique level for chest CT examination

    International Nuclear Information System (INIS)

    The purpose of this research is to determine the adaptive statistical iterative reconstruction (ASIR) level that enables optimal image quality and dose reduction in the chest computed tomography (CT) protocol with ASIR. A chest phantom was scanned at ASIR levels of 0-50 %, and the noise power spectrum (NPS), signal and noise, and the degree of distortion (peak signal-to-noise ratio (PSNR) and root-mean-square error (RMSE)) were measured. In addition, the objectivity of the experiment was checked using the American College of Radiology (ACR) phantom. Moreover, on a qualitative basis, the resolution, latitude and degree of distortion of five lesions in the chest phantom were evaluated and their statistics compiled. The NPS value decreased as the frequency increased. The lowest noise and deviation were at the 20 % ASIR level (mean 126.15±22.21). In the distortion assessment, the signal-to-noise ratio and PSNR were highest at the 20 % ASIR level (31.0 and 41.52, respectively), whereas the maximum absolute error and RMSE showed the lowest values (11.2 and 16, respectively). In the ACR phantom study, all ASIR levels were within the acceptable allowance of the guidelines. The 20 % ASIR level also performed best in the qualitative evaluation of the five chest phantom lesions, with a resolution score of 4.3, latitude of 3.47 and degree of distortion of 4.25. The 20 % ASIR level thus proved best in all experiments: noise, distortion evaluation using ImageJ and qualitative evaluation of five lesions of a chest phantom. Therefore, optimal images as well as a reduced radiation dose would be acquired when a 20 % ASIR level is applied in thoracic CT. (authors)
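The two distortion metrics used above, RMSE and PSNR, are standard and can be computed as follows (the images here are synthetic, not the phantom scans):

```python
import numpy as np

def rmse(ref, img):
    """Root-mean-square error between a reference and a test image."""
    diff = np.asarray(ref, float) - np.asarray(img, float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB against a reference image."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64)).astype(float)
noisy = ref + rng.normal(0.0, 5.0, (64, 64))  # add sigma = 5 noise
print(4.0 < rmse(ref, noisy) < 6.0)  # True: RMSE tracks the injected sigma
print(psnr(ref, noisy) > 30.0)       # True: 20*log10(255 / ~5) is ~34 dB
```

A higher PSNR (and lower RMSE) against the low-noise reference is exactly the criterion by which the 20 % ASIR level wins in the study above.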

  3. Uranium exploration data and techniques applied to the preparation of radioelement maps. Proceedings of a technical committee meeting

    International Nuclear Information System (INIS)

    The report reviews the advantages and pitfalls of using uranium exploration data and techniques, as well as other methods, for the preparation of radioelement and radon maps that provide baseline information in environmental studies and monitoring.

  4. Prediction of Quality Features in Iberian Ham by Applying Data Mining on Data From MRI and Computer Vision Techniques

    OpenAIRE

    Daniel Caballero; Andrés Caro; Trinidad Perez-Palacios; Pablo G. Rodriguez; Ramón Palacios

    2014-01-01

    This paper aims to predict quality features of Iberian hams by using non-destructive methods of analysis and data mining. Iberian hams were analyzed by Magnetic Resonance Imaging (MRI) and Computer Vision Techniques (CVT) throughout their ripening process, and physico-chemical parameters were also measured. The obtained data were used to create an initial database. Deductive techniques of data mining (multiple linear reg...

  5. Applying the digital-image-correlation technique to measure the deformation of an old building’s column retrofitted with steel plate in an in situ pushover test

    Indian Academy of Sciences (India)

    Shih-Heng Tung; Ming-Hsiang Shih; Wen-Pei Sung

    2014-06-01

    An in situ pushover test is carried out on an old building of Guan-Miao elementary school in south Taiwan. Columns of this building are seismically retrofitted with steel plate. The DIC (digital-image-correlation) technique is used to measure the deformation of the retrofitted column. The result shows that the DIC technique can be successfully applied to measure the relative displacement of the column. Additionally, this method leads to the measurement of relative displacements for many points on the column simultaneously. Hence, the column deformation curve, rotation and curvature can be determined using an interpolation method. The resulting curvature diagram reveals that the phenomenon of a plastic hinge occurs at about 2% storey drift ratio, and that the DIC technique can be applied to measure column deformation in a full-scale in situ test.

  6. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    OpenAIRE

    Wilmar Hernandez

    2007-01-01

    In this paper a survey on recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. Here, a comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today’s cars need is presented through several experimental results that show t...

  7. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    Directory of Open Access Journals (Sweden)

    Wilmar Hernandez

    2007-01-01

    Full Text Available In this paper a survey on recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. Here, a comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today’s cars need is presented through several experimental results that show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight because there are some open research issues that have to be solved. This paper draws attention to one of the open research issues and tries to arouse researchers’ interest in the fusion of intelligent sensors and optimal signal processing techniques.

  8. Are U. S. Colleges and Universities Applying Marketing Techniques Properly and within the Context of an Overall Marketing Plan?

    Science.gov (United States)

    Goldgehn, Leslie A.

    1990-01-01

    A survey of 791 college admissions officers investigated the use and perceived effectiveness of 15 marketing techniques: publicity; target marketing; market segmentation; advertising; program development; market positioning; market research; access; marketing plan; pricing; marketing committee; advertising research; consultants; marketing audit;…

  9. X-diffraction technique applied for nano system metrology; Tecnica de difracao de raios X aplicada na metrologia de nanossistemas

    Energy Technology Data Exchange (ETDEWEB)

    Kuznetsov, Alexei Yu.; Machado, Rogerio; Robertis, Eveline de; Campos, Andrea P.C.; Archanjo, Braulio S.; Gomes, Lincoln S.; Achete, Carlos A., E-mail: okuznetsov@inmetro.gov.b [Instituto Nacional de Metrologia, Normalizacao e Qualidade Industrial (DIMAT/INMETRO), Duque de Caxias, RJ (Brazil). Div. de Metrologia de Materiais

    2009-07-01

    Applications of nanomaterials are growing fast in all industrial sectors, creating a strong need for nanometrology and standardization in the nanomaterial area. The great potential of the X-ray diffraction technique in this field is illustrated with the examples of metals, metal oxides and pharmaceuticals.

  10. Coupling a local adaptive grid refinement technique with an interface sharpening scheme for the simulation of two-phase flow and free-surface flows using VOF methodology

    Science.gov (United States)

    Malgarinos, Ilias; Nikolopoulos, Nikolaos; Gavaises, Manolis

    2015-11-01

    This study presents the implementation of an interface sharpening scheme on the basis of the Volume of Fluid (VOF) method, as well as its application in a number of theoretical and real cases usually modelled in the literature. More specifically, the solution of an additional sharpening equation along with the standard VOF model equations is proposed, offering the advantage of "restraining" interface numerical diffusion, while also keeping a quite smooth induced velocity field around the interface. This sharpening equation is solved right after volume fraction advection; however, a novel method for its coupling with the momentum equation has been applied in order to save computational time. The advantages of the proposed sharpening scheme lie in the facts that a) it is mass conservative, and thus its application does not have a negative impact on one of the most important benefits of the VOF method, and b) it can be used in coarser grids, as the suppression of the numerical diffusion is now grid independent. The coupling of the solved equation with an adaptive local grid refinement technique is used to further decrease computational time, while keeping high levels of accuracy at the area of maximum interest (the interface). The numerical algorithm is initially tested against two theoretical benchmark cases for interface tracking methodologies, followed by its validation for the case of a free-falling water droplet accelerated by gravity, as well as the normal liquid droplet impingement onto a flat substrate. Results indicate that the coupling of the interface sharpening equation with the HRIC discretization scheme used for the volume fraction flux term not only decreases the interface numerical diffusion, but also allows the induced velocity field to be less perturbed owing to spurious velocities across the liquid-gas interface. With the use of the proposed algorithmic flow path, coarser grids can replace finer ones at the slight expense of accuracy.

  11. Web-Based Adaptive Testing System

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    With the maturing of Internet technology, adaptive testing can be utilized in a web-based environment, and the examinee can take the test anywhere and at any time. The purpose of this research is to apply item response theory (IRT), adaptive testing theory and web-service techniques to construct an XML-format item bank and a web-based adaptive testing (WAT) system within the framework of three-tiered client-server distance testing.
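In an IRT-driven adaptive test, the next item is typically the one with maximum Fisher information at the current ability estimate. A minimal sketch under the common 2PL model (the item bank values below are hypothetical, not from the system described):

```python
import math

def p_2pl(theta, a, b):
    """2PL item response probability: discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# toy item bank: (discrimination a, difficulty b)
bank = [(1.0, -2.0), (1.5, 0.0), (1.2, 2.0)]
theta = 0.1  # current ability estimate
best = max(range(len(bank)), key=lambda i: item_information(theta, *bank[i]))
print(best)  # 1: the item whose difficulty matches the ability estimate
```

After each response, theta is re-estimated and the selection repeats, which is what lets an adaptive test converge with fewer items than a fixed form.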

  12. Breathing adapted radiotherapy for breast cancer: Comparison of free breathing gating with the breath-hold technique

    International Nuclear Information System (INIS)

    Background and purpose: Adjuvant radiotherapy after breast-conserving surgery for breast cancer implies a risk of late cardiac and pulmonary toxicity. This is the first study to evaluate cardiopulmonary dose sparing of breathing adapted radiotherapy (BART) using free breathing gating, and to compare this respiratory technique with voluntary breath-hold. Patients and methods: 17 patients were CT-scanned during non-coached breathing manoeuvre including free breathing (FB), end-inspiration gating (IG), end-expiration gating (EG), deep inspiration breath-hold (DIBH) and end-expiration breath-hold (EBH). The Varian Real-time Position Management system (RPM™) was used to monitor respiratory movement and to gate the scanner. For each breathing phase, a population based internal margin (IM) was estimated based on average chest wall excursion, and incorporated into an individually optimised three-field mono-isocentric wide tangential photon field treatment plan for each scan. The target included the remaining breast, internal mammary nodes and periclavicular nodes. Results: The mean anteroposterior chest wall excursion during FB was 2.5 mm. For IG and EG, the mean excursions within gating windows were 1.1 and 0.7 mm, respectively, whereas for DIBH and EBH the excursions were 4.1 and 2.6 mm, respectively. For patients with left-sided cancer, the median heart volume receiving more than 50% of the prescription dose was reduced from 19.2% for FB to 2.8% for IG and 1.9% for DIBH, and the median left anterior descending (LAD) coronary artery volume was reduced from 88.9% to 22.4% for IG and 3.6% for DIBH. Simultaneously, the median ipsilateral relative lung volume irradiated to >50% of the prescribed target dose for both right- and left-sided cancers was reduced from 45.6% for FB to 29.5% for IG and 27.7% for DIBH. For EBH and EG, both the irradiated heart, LAD and lung volumes increased compared to FB.
Conclusions: This is the first study to demonstrate the dosimetric benefits

  13. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  14. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the...
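The effect described above can be illustrated with Nadaraya-Watson regression under a per-dimension metric: down-weighting an irrelevant input dimension improves prediction. The scales below are hand-picked for illustration rather than tuned by cross-validation as in the papers:

```python
import numpy as np

def nw_predict(Xtr, ytr, Xte, scales):
    """Nadaraya-Watson regression with a diagonal input metric: `scales`
    stretches each dimension before the Gaussian kernel is applied, so a
    large scale sharpens a relevant input and a tiny one suppresses an
    irrelevant input."""
    preds = []
    for x in Xte:
        d2 = (((Xtr - x) * scales) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2)
        preds.append((w @ ytr) / w.sum())
    return np.array(preds)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (200, 2))
y = np.sin(2 * X[:, 0]) + 0.3 * rng.standard_normal(200)  # dim 1 irrelevant
Xte = rng.uniform(-1.5, 1.5, (50, 2))
yte = np.sin(2 * Xte[:, 0])  # noise-free truth

iso = nw_predict(X, y, Xte, np.array([1.0, 1.0]))       # one shared bandwidth
adapted = nw_predict(X, y, Xte, np.array([3.0, 0.05]))  # metric adapted per dim
print(np.mean((iso - yte) ** 2) > np.mean((adapted - yte) ** 2))  # True
```

The adaptive-metric algorithm of the papers automates exactly this choice of `scales` by minimising a cross-validation estimate of the generalisation error.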

  15. Digital filtering techniques applied to electric power systems protection; Tecnicas de filtragem digital aplicadas a protecao de sistemas eletricos de potencia

    Energy Technology Data Exchange (ETDEWEB)

    Brito, Helio Glauco Ferreira

    1996-12-31

    This work introduces an analysis and a comparative study of some of the techniques for digital filtering of the voltage and current waveforms from faulted transmission lines. This study is of fundamental importance for the development of algorithms applied to digital protection of electric power systems. The techniques studied are based on the Discrete Fourier Transform theory, the Walsh functions and the Kalman filter theory. Two aspects were emphasized in this study: Firstly, the non-recursive techniques were analysed with the implementation of filters based on Fourier theory and the Walsh functions. Secondly, recursive techniques were analyzed, with the implementation of the filters based on the Kalman theory and once more on the Fourier theory. (author) 56 refs., 25 figs., 16 tabs.
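A classic non-recursive filter of the Fourier family discussed above is the full-cycle DFT filter, which extracts the fundamental-frequency phasor from one cycle of samples (the sampling rate and waveform are illustrative):

```python
import cmath
import math

def fundamental_phasor(window, n):
    """Full-cycle DFT filter: fundamental-frequency phasor (peak value)
    from the latest n samples spanning exactly one nominal cycle.
    Integer harmonics and any DC offset are rejected exactly."""
    acc = sum(window[k] * cmath.exp(-2j * math.pi * k / n) for k in range(n))
    return 2.0 * acc / n

n = 16  # 16 samples per 50/60 Hz cycle
amp, phase = 100.0, 0.5
samples = [amp * math.cos(2 * math.pi * k / n + phase) for k in range(n)]
ph = fundamental_phasor(samples, n)
print(round(abs(ph), 6), round(cmath.phase(ph), 6))  # 100.0 0.5
```

Protection relays slide this window sample by sample, so the estimated voltage and current phasors (and hence impedance to the fault) update once per sample.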

  16. A spatial model with pulsed releases to compare strategies for the sterile insect technique applied to the mosquito Aedes aegypti

    OpenAIRE

    Evans, T. P. O.; Bishop, S.R.

    2014-01-01

    We present a simple mathematical model to replicate the key features of the sterile insect technique (SIT) for controlling pest species, with particular reference to the mosquito Aedes aegypti, the main vector of dengue fever. The model differs from the majority of those studied previously in that it is simultaneously spatially explicit and involves pulsed, rather than continuous, sterile insect releases. The spatially uniform equilibria of the model are identified and analysed. Simulations a...
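The abstract does not give the model equations, so the following is a hypothetical non-spatial, discrete-time sketch of pulsed sterile releases: wild growth is scaled by the fraction of fertile matings, and sterile males decay between pulses (all parameters are invented for illustration):

```python
def simulate(steps, release=6000.0, period=10, r=0.3, K=1000.0,
             death=0.05, mu=0.2):
    """Wild population W with logistic growth scaled by the fraction of
    matings involving fertile males; sterile males S arrive as pulses
    every `period` steps and decay at rate mu in between."""
    W, S, hist = 500.0, 0.0, []
    for t in range(steps):
        if t % period == 0:
            S += release                      # pulsed sterile release
        fertile = W / (W + S) if W + S > 0 else 0.0
        W = max(W + r * W * (1 - W / K) * fertile - death * W, 0.0)
        S *= 1 - mu                           # sterile males die off
        hist.append(W)
    return hist

hist = simulate(300)
print(hist[-1] < hist[0])  # True: sustained pulses suppress the population
```

The spatially explicit model of the paper couples many such patches through dispersal, which is what makes the release spacing and pulse timing interact.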

  17. Applying Model-Based Techniques for Aerospace Projects in Accordance with DO-178C, DO-331, and DO-333

    OpenAIRE

    Eisemann, Ulrich

    2016-01-01

    The new standard for software development in civil aviation, DO-178C, differs from its predecessor, DO-178B, mainly in that it has standard supplements to provide greater scope for using new software development methods. The most important standard supplements are DO-331, on the methods of model-based development and model-based verification, and DO-333, on the use of formal methods such as model checking and abstract interpretation. These key software design techniques offer enormous potential f...

  18. Wavelet de-noising technique applied to the PLL of a GPS receiver embedded in an observation satellite

    Directory of Open Access Journals (Sweden)

    Dib Djamel Eddine

    2012-02-01

Full Text Available In this paper, we study the Doppler effect on a GPS (Global Positioning System) receiver on board an observation satellite that receives information on the L1 carrier frequency of 1575.42 MHz. We simulated GPS signal acquisition, which allowed us to observe the behavior of this type of receiver in an additive white Gaussian noise (AWGN) channel, and we define a wavelet de-noising technique to reduce the Doppler effect in the tracking loop.
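The core of wavelet de-noising (transform, threshold the detail coefficients, invert) can be sketched with a one-level Haar transform and soft thresholding. A real GPS tracking loop would use a deeper decomposition and a data-driven threshold, so this is only a toy version with made-up signal values.

```python
def haar_denoise(x, thresh):
    """One-level Haar wavelet de-noising: transform, soft-threshold the
    detail coefficients, inverse transform. len(x) must be even."""
    s2 = 2.0 ** 0.5
    approx = [(a + b) / s2 for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / s2 for a, b in zip(x[::2], x[1::2])]

    def soft(d):  # soft thresholding: small details are treated as noise
        return (abs(d) - thresh) * (1.0 if d > 0 else -1.0) if abs(d) > thresh else 0.0

    return [v for a, d in zip(approx, (soft(d) for d in detail))
            for v in ((a + d) / s2, (a - d) / s2)]

clean = [1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0]
noise = [0.05, -0.04, 0.03, -0.05, 0.04, -0.03, 0.05, -0.04]
noisy = [c + e for c, e in zip(clean, noise)]
den = haar_denoise(noisy, thresh=0.2)  # small details removed, step preserved
```

With `thresh=0` the transform pair reconstructs the input exactly, which is a useful sanity check on any wavelet de-noising implementation.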

  19. Applying the nominal group technique in an employment relations conflict situation: A case study of a university maintenance section in South Africa

    Directory of Open Access Journals (Sweden)

Cornelis (Kees) S. van der Waal

    2009-04-01

Full Text Available After a breakdown in employment relations in the maintenance section of a higher education institution, the authors were asked to intervene in order to try to resolve the employment relations conflict situation. It was decided to employ the Nominal Group Technique (NGT) as a tool for problem identification during conflict in the workplace. An initial investigation of documentation and interviews with prominent individuals in the organisation was carried out. The NGT was then used in four focus group discussions to determine the important issues as seen by staff members. The NGT facilitates the determination of shared perceptions and the ranking of ideas. The NGT was used in diverse groups, necessitating adaptations to the technique. The perceived causes of the conflict were established. The NGT can be used in a conflict situation in the workplace in order to establish the perceived causes of employment relations conflict.
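The ranking stage of the NGT reduces to a simple rank-sum aggregation across participants. The scoring scheme below (rank 1 earns `top_n` points, the last listed rank earns 1 point) and the example issues are hypothetical, since the case study does not publish its raw rankings.

```python
from collections import Counter

def ngt_scores(rankings, top_n=5):
    """Aggregate NGT rankings: each participant lists up to top_n issues in
    order of importance; rank 1 earns top_n points, the last rank earns 1.
    (Illustrative scoring scheme, not the study's exact procedure.)"""
    scores = Counter()
    for ranking in rankings:
        for place, idea in enumerate(ranking[:top_n]):
            scores[idea] += top_n - place
    return scores.most_common()

# Invented example: three participants rank perceived causes of the conflict
votes = [
    ["pay", "communication", "workload"],
    ["communication", "pay", "supervision"],
    ["communication", "workload", "pay"],
]
ranked = ngt_scores(votes, top_n=3)
print(ranked)  # shared perceptions surface at the top of the list
```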

  20. Anticipated uncertainty budgets of PRARETIME and T2L2 techniques as applied to ExTRAS

    Science.gov (United States)

    Thomas, Claudine; Wolf, Peter; Uhrich, Pierre J. M.; Schaefer, W.; Nau, H.; Veillet, Christian

    1995-05-01

The Experiment on Timing Ranging and Atmospheric Soundings, ExTRAS, was conceived jointly by the European Space Agency, ESA, and the Russian Space Agency, RSA. It is also designated the 'Hydrogen-maser in Space/Meteor-3M project'. The launch of the satellite is scheduled for early 1997. The package, to be flown on board a Russian meteorological satellite, includes ultra-stable frequency and time sources, namely two active and auto-tuned hydrogen masers. Communication between the on-board hydrogen masers and the ground station clocks is effected by means of a microwave link using the modified version for time transfer of the Precise Range And Range-rate Equipment, PRARETIME, technique, and an optical link which uses the Time Transfer by Laser Link, T2L2, method. Both the PRARETIME and T2L2 techniques operate in a two-directional mode, which makes it possible to carry out accurate transmissions without precise knowledge of the satellite and station positions. Due to the exceptional quality of the on-board clocks and the high performance of the communication techniques, satellite clock monitoring and ground clock synchronization are anticipated to be performed with uncertainties below 0.5 ns (1 sigma). Uncertainty budgets and related comments are presented.
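The reason two-directional operation removes the need for precise positions is that a symmetric path delay cancels in the two-way combination. A minimal sketch of that clock-offset computation, with invented timing values:

```python
def two_way_offset(t1, t2, t3, t4):
    """Two-way time transfer: offset of the satellite clock relative to the
    ground clock, assuming equal up- and down-link delays (the symmetry that
    makes precise satellite and station positions unnecessary).
    t1 ground emission, t2 satellite reception (satellite clock),
    t3 satellite emission (satellite clock), t4 ground reception."""
    return ((t2 - t1) - (t4 - t3)) / 2.0

# Invented example: true offset +25 ns, one-way delay 120 ms, 1 ms turnaround
delay, offset = 0.120, 25e-9
t1 = 0.0
t2 = t1 + delay + offset        # satellite clock runs 25 ns ahead
t3 = t2 + 0.001                 # on-board turnaround, satellite clock time
t4 = (t3 - offset) + delay      # back on the ground clock after the delay
est = two_way_offset(t1, t2, t3, t4)
print(est)  # recovers the 25 ns offset; the 120 ms delay cancels
```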

  1. Technology Assessment of Dust Suppression Techniques Applied During Structural Demolition. Topical Report August 1, 1995 - October 30, 1996

    International Nuclear Information System (INIS)

Hanford, Fernald, Savannah River, and other sites are currently reviewing technologies that can be implemented to demolish buildings in a cost-effective manner. In order to demolish a structure properly and, at the same time, minimize the amount of dust generated from a given technology, an evaluation must be conducted to choose the most appropriate dust suppression technology given site-specific conditions. Thus, the purpose of this research, which was carried out at the Hemispheric Center for Environmental Technology (HCET) at Florida International University, was to conduct an experimental study of dust aerosol abatement (dust suppression) methods as applied to nuclear D and D. This experimental study targeted the problem of dust suppression during the demolition of nuclear facilities. The resulting data were employed to assist in the development of mathematical correlations that can be applied to predict dust generation during structural demolition.

  2. Fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave and free-space-optics architecture with an adaptive diversity combining technique.

    Science.gov (United States)

    Zhang, Junwen; Wang, Jing; Xu, Yuming; Xu, Mu; Lu, Feng; Cheng, Lin; Yu, Jianjun; Chang, Gee-Kung

    2016-05-01

We propose and experimentally demonstrate a novel fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave (MMW) and free-space-optics (FSO) architecture using an adaptive combining technique. Both 60 GHz MMW and FSO links are demonstrated and fully integrated with optical fibers in a scalable and cost-effective backhaul system setup. Joint signal processing with an adaptive diversity combining technique (ADCT) is utilized at the receiver side based on a maximum ratio combining algorithm. Mobile backhaul transportation of 4-Gb/s 16-quadrature-amplitude-modulation orthogonal frequency-division multiplexing (16-QAM OFDM) data is experimentally demonstrated and tested under various weather conditions synthesized in the lab. Performance improvement in terms of reduced error vector magnitude (EVM) and enhanced link reliability are validated under fog, rain, and turbulence conditions. PMID:27128036
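Maximum ratio combining itself is compact: each branch is weighted in proportion to its SNR, and with unit-gain branches the combined SNR equals the sum of the branch SNRs. The sketch below uses made-up branch SNRs and symbols, not the paper's 60 GHz/FSO measurements.

```python
def mrc_combine(branches, snrs):
    """Maximum ratio combining: weight each branch in proportion to its SNR
    (normalized so a common signal passes with unit gain) and sum."""
    total = sum(snrs)
    return [sum((s / total) * br[i] for s, br in zip(snrs, branches))
            for i in range(len(branches[0]))]

# Two branches (e.g. MMW and FSO) observing the same 16-QAM symbol 1+1j,
# with invented SNRs and noise samples
snrs = [4.0, 1.0]
branches = [[(1 + 1j) + 0.1], [(1 + 1j) - 0.4]]
est = mrc_combine(branches, snrs)[0]   # pulled toward the cleaner branch

# With unit-gain branches the output SNR is the sum of branch SNRs:
weights = [s / sum(snrs) for s in snrs]
combined_snr = 1.0 / sum(w * w / s for w, s in zip(weights, snrs))
print(combined_snr)  # equals snrs[0] + snrs[1]
```

This additivity is why the adaptive combiner degrades gracefully when fog or turbulence weakens one link: the weights shift toward whichever branch is currently cleaner.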

  4. Objectively-assessed outcome measures: a translation and cross-cultural adaptation procedure applied to the Chedoke McMaster Arm and Hand Activity Inventory (CAHAI)

    Directory of Open Access Journals (Sweden)

    Hahn Sabine

    2010-11-01

Full Text Available Abstract Background Standardised translation and cross-cultural adaptation (TCCA) procedures are vital to describe language translation and cultural adaptation, and to evaluate quality factors of transformed outcome measures. No TCCA procedure for objectively-assessed outcome (OAO) measures exists. Furthermore, no official German version of the Canadian Chedoke Arm and Hand Activity Inventory (CAHAI) is available. Methods An eight-step TCCA procedure for OAO measures (TCCA-OAO) was developed, based on the existing TCCA procedure for patient-reported outcomes. The TCCA-OAO procedure was applied to develop a German version of the CAHAI (CAHAI-G). Inter-rater reliability of the CAHAI-G was determined through video rating of the CAHAI-G. Validity of the CAHAI-G was assessed using the Chedoke-McMaster Stroke Assessment (CMSA). All ratings were performed by trained, independent raters. In a cross-sectional study, patients were tested within 31 hours after the initial CAHAI-G scoring for their motor function level, using the arm and hand subscales of the CMSA. Inpatients and outpatients of the occupational therapy department who had experienced a cerebrovascular accident or an intracerebral haemorrhage were included. Results Performance of 23 patients (mean age 69.4, SD 12.9; six females; mean time since stroke onset: 1.5 years, SD 2.5 years) was assessed. High inter-rater reliability was found, with ICCs for the 4 CAHAI-G versions (13, 9, 8 and 7 items) ranging between r = 0.96 and r = 0.99. Conclusions The TCCA-OAO procedure was validated regarding its feasibility and applicability for objectively-assessed outcome measures. The resulting German CAHAI can be used as a valid and reliable assessment of bilateral upper limb performance in ADL in patients after stroke.

  5. Adaptive statistical iterative reconstruction-applied ultra-low-dose CT with radiography-comparable radiation dose: Usefulness for lung nodule detection

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hyun Jung; Chung, Myung Jin; Hwang, Hye Sun; Lee, Kyung Soo [Dept. of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Moon, Jung Won [Dept. of Radiology, Kangbuk Samsung Hospital, Seoul (Korea, Republic of)

    2015-10-15

To assess the performance of adaptive statistical iterative reconstruction (ASIR)-applied ultra-low-dose CT (ULDCT) in detecting small lung nodules. Thirty patients underwent both ULDCT and standard dose CT (SCT). After determining the reference standard nodules, five observers, blinded to the reference standard reading results, independently evaluated SCT and both subsets of ASIR- and filtered back projection (FBP)-driven ULDCT images. Data assessed by observers were compared statistically. Converted effective doses in SCT and ULDCT were 2.81 ± 0.92 and 0.17 ± 0.02 mSv, respectively. A total of 114 lung nodules were detected on SCT as a standard reference. There was no statistically significant difference in sensitivity between ASIR-driven ULDCT and SCT for three out of the five observers (p = 0.678, 0.735, < 0.01, 0.038, and 0.868 for observers 1, 2, 3, 4, and 5, respectively). The sensitivity of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT in three out of the five observers (p < 0.01 for three observers, and p = 0.064 and 0.146 for the other two). In jackknife alternative free-response receiver operating characteristic analysis, the mean figure-of-merit (FOM) values for FBP-driven ULDCT, ASIR-driven ULDCT, and SCT were 0.682, 0.772, and 0.821, respectively; there was no significant difference in FOM between ASIR-driven ULDCT and SCT (p = 0.11), but the FOM of FBP-driven ULDCT was significantly lower than that of both ASIR-driven ULDCT and SCT (p = 0.01 and 0.00). Adaptive statistical iterative reconstruction-driven ULDCT delivering a radiation dose of only 0.17 mSv offers acceptable sensitivity in nodule detection compared with SCT and has better performance than FBP-driven ULDCT.

  6. The adapted yam minisett technique for producing clean seed yams (Dioscorea Rotundata): Agronomic performance and varietal differences under farmer-managed conditions in Nigeria

    OpenAIRE

    Morse, S; McNamara, N.

    2014-01-01

White yam (Dioscorea rotundata) is a major root crop grown throughout West Africa, but one of the major factors limiting its production is the availability of good quality planting material. This paper describes the results of farmer-managed demonstration plots established in 2012 and 2013 designed to promote the Adapted Yam Minisett Technique (AYMT) in Nigeria. The AYMT was developed between 2005 and 2008 to produce quality seed yam tubers at a cost that is viable for small-scale farmers. ...

  7. Using adapted budget cost variance techniques to measure the impact of Lean – based on empirical findings in Lean case studies

    DEFF Research Database (Denmark)

    Kristensen, Thomas Borup

    2015-01-01

Lean is the dominating management philosophy, but the management accounting techniques that best support it are still not fully understood, especially how Lean fits traditional budget variance analysis, which is a main theme of every management accounting textbook. I have studied three Scandinavian excellent Lean performing companies and their development of budget variance analysis techniques. Based on these empirical findings, techniques are presented to calculate cost and cost variances in the Lean companies. First of all, a cost variance is developed to calculate the Lean cost benefits within... This is needed in Lean as the benefits are often created over multiple periods and not just within one budget period. Traditional cost variance techniques are not able to trace these effects. Moreover, Time-driven ABC is adapted to fit the measurement of Lean improvement outside manufacturing and facilitate...
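The multi-period point can be illustrated with a cumulative cost variance that carries early (unfavourable) investment periods forward until later Lean benefits offset them. The numbers and sign convention below are invented for illustration, not taken from the case companies.

```python
def cumulative_cost_variance(budgeted, actual):
    """Multi-period cost variance: instead of judging each budget period in
    isolation, carry the variance forward so Lean benefits realised in later
    periods can offset the early investment. Favourable variance is positive
    (illustrative convention, not the paper's exact technique)."""
    running, out = 0.0, []
    for b, a in zip(budgeted, actual):
        running += b - a
        out.append(running)
    return out

# Invented numbers: Lean costs extra in period 1, pays back in periods 3-4
cv = cumulative_cost_variance([100, 100, 100, 100], [105, 100, 92, 88])
print(cv)  # unfavourable at first, cumulatively favourable by period 4
```

A single-period variance report would stop at the first value and flag Lean as a cost overrun; the cumulative view is what makes the later benefits traceable.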

  8. Applying monitoring, verification, and accounting techniques to a real-world, enhanced oil recovery operational CO2 leak

    Science.gov (United States)

    Wimmer, B.T.; Krapac, I.G.; Locke, R.; Iranmanesh, A.

    2011-01-01

The use of carbon dioxide (CO2) for enhanced oil recovery (EOR) is being tested for oil fields in the Illinois Basin, USA. While this technology has shown promise for improving oil production, it has raised some issues about the safety of CO2 injection and storage. The Midwest Geological Sequestration Consortium (MGSC) organized a Monitoring, Verification, and Accounting (MVA) team to develop and deploy monitoring programs at three EOR sites in Illinois, Indiana, and Kentucky, USA. MVA goals include establishing baseline conditions to evaluate potential impacts from CO2 injection, demonstrating that project activities are protective of human health and the environment, and providing an accurate accounting of stored CO2. This paper focuses on the use of MVA techniques in monitoring a small CO2 leak from a supply line at an EOR facility under real-world conditions. The ability of shallow monitoring techniques to detect and quantify a CO2 leak under real-world conditions has been largely unproven. In July of 2009, a leak in the pipe supplying pressurized CO2 to an injection well was observed at an MGSC EOR site located in west-central Kentucky. Carbon dioxide was escaping from the supply pipe located approximately 1 m underground. The leak was discovered visually by site personnel and injection was halted immediately. At its largest extent, the hole created by the leak was approximately 1.9 m long by 1.7 m wide and 0.7 m deep in the land surface. This circumstance provided an excellent opportunity to evaluate the performance of several monitoring techniques including soil CO2 flux measurements, portable infrared gas analysis, thermal infrared imagery, and aerial hyperspectral imagery. Valuable experience was gained during this effort. Lessons learned included determining that 1) hyperspectral imagery was not effective in detecting this relatively small, short-term CO2 leak, and 2) even though injection was halted, the leak remained dynamic and presented a safety risk concern.
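Soil CO2 flux measurements of the kind listed above are typically reduced with the standard closed-chamber formula, converting the accumulation rate inside the chamber into a surface flux via the ideal gas law. The chamber geometry below is hypothetical, not the MGSC team's equipment.

```python
def soil_co2_flux(dcdt_ppm_s, chamber_vol_m3, area_m2,
                  temp_k=293.15, pressure_pa=101325.0):
    """Standard closed-chamber reduction: convert the CO2 accumulation rate
    inside the chamber (ppm/s) into a surface flux in umol m^-2 s^-1, using
    the ideal gas law for the molar density of air."""
    R = 8.314  # J mol^-1 K^-1
    air_mol_per_m3 = pressure_pa / (R * temp_k)
    co2_mol_per_s = dcdt_ppm_s * 1e-6 * air_mol_per_m3 * chamber_vol_m3
    return co2_mol_per_s / area_m2 * 1e6  # mol -> umol

# Hypothetical chamber: 10 L volume over a 30 cm diameter collar
flux = soil_co2_flux(dcdt_ppm_s=1.0, chamber_vol_m3=0.010, area_m2=0.0707)
print(flux)  # a few umol m^-2 s^-1, the order of typical soil respiration
```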

  9. Neutrongraphy technique applied to the narcotics and terrorism enforcement; Neutrongrafia aplicada no combate ao narcotrafico e ao terrorismo

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Ademir X. da; Hacidume, Leo R.; Crispim, Verginia R. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear

    1999-03-01

Among the several methods of non-destructive assay that may be used for the detection of both drugs and explosives, those that utilize nuclear techniques have demonstrated the essential qualities of an efficient detection system. These techniques allow the inspection of a large quantity of samples quickly, sensitively, specifically and with automatic decision, since they utilize highly penetrating radiation. This work aims to show the potential of neutron radiography and computed tomography for the detection of drugs and explosives even when they are concealed by heavy materials. In the radiographic assays with thermal neutrons, samples of powdered cocaine and explosives were inspected, with and without concealment by several materials. The samples were irradiated for 30 minutes in the J-9 channel of the Argonauta research reactor of the IEN/CNEN in a neutron flux of 2.5 × 10^5 n/cm^2·s. We used two gadolinium converter foils with a thickness of 25 µm each and a Kodak Industrex A5 photographic plate. A comparative analysis of the experimental and simulated tomographic images obtained with X-rays, fast neutrons and thermal neutrons is presented; thermal neutron tomography proved to be the best. (author)

  10. Applying multivariate clustering techniques to health data: the 4 types of healthcare utilization in the Paris metropolitan area.

    Directory of Open Access Journals (Sweden)

    Thomas Lefèvre

Full Text Available Cost containment policies and the need to satisfy patients' health needs and care expectations provide major challenges to healthcare systems. Identification of homogeneous groups in terms of healthcare utilisation could lead to a better understanding of how to adjust healthcare provision to society and patient needs. This study used data from the third wave of the SIRS cohort study, a representative, population-based, socio-epidemiological study set up in 2005 in the Paris metropolitan area, France. The data were analysed using a cross-sectional design. In 2010, 3000 individuals were interviewed in their homes. Non-conventional multivariate clustering techniques were used to determine homogeneous user groups in data. Multinomial models assessed a wide range of potential associations between user characteristics and their pattern of healthcare utilisation. We identified four distinct patterns of healthcare use. Patterns of consumption and the socio-demographic characteristics of users differed qualitatively and quantitatively between these four profiles. Extensive and intensive use by older, wealthier and unhealthier people contrasted with narrow and parsimonious use by younger, socially deprived people and immigrants. Rare, intermittent use by young healthy men contrasted with regular targeted use by healthy and wealthy women. The use of an original technique of massive multivariate analysis allowed us to characterise different types of healthcare users, both in terms of resource utilisation and socio-demographic variables. This method would merit replication in different populations and healthcare systems.
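The study used non-conventional clustering, but the general assign-then-update loop behind most partitioning methods can be illustrated with plain k-means (a standard stand-in, not the authors' technique) on made-up 2-D points.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its cluster (a generic illustration only)."""
    rng = random.Random(seed)
    centroids = [tuple(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids, clusters

# Two synthetic "utilisation profiles" in 2-D (e.g. visit count, spend)
pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
       (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
cents, groups = kmeans(pts, 2)
print(sorted(cents))  # the two group means emerge as cluster centres
```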

  11. Determination of photocarrier density under continuous photoirradiation using spectroscopic techniques as applied to polymer: Fullerene blend films

    Energy Technology Data Exchange (ETDEWEB)

    Kanemoto, Katsuichi, E-mail: kkane@sci.osaka-cu.ac.jp; Nakatani, Hitomi; Domoto, Shinya [Department of Physics, Osaka City University, 3-3-138 Sugimoto, Sumiyoshi-ku, Osaka 558-8585 (Japan)

    2014-10-28

We propose a method to determine the density of photocarriers under continuous photoirradiation in conjugated polymers using spectroscopic signals obtained by photoinduced absorption (PIA) measurements. The bleaching signals in the PIA measurements of polymer films and the steady-state absorption signals of oxidized polymer solution are employed to determine the photocarrier density. The method is applied to photocarriers of poly(3-hexylthiophene) (P3HT) in a blended film consisting of P3HT and [6,6]-phenyl C61 butyric acid methyl ester (PCBM). The photocarrier density under continuous photoirradiation of 580 mW/cm^2 is determined to be 3.5 × 10^16 cm^-3. Using a trend of the carrier density increasing in proportion to the square root of photo-excitation intensity, we provide a general formula to estimate the photocarrier density under simulated 1 sun solar irradiation for the P3HT:PCBM film of an arbitrary thickness. We emphasize that the method proposed in this study enables an estimate of carrier density without measuring a current and can be applied to films with no electrodes as well as to devices.
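The extrapolation step, scaling the carrier density with the square root of excitation intensity, is a one-liner. The reference point (3.5 × 10^16 cm^-3 at 580 mW/cm^2) is from the abstract; treating 1 sun (AM1.5) as about 100 mW/cm^2 is a standard approximation, and the thickness dependence of the authors' full formula is omitted here.

```python
import math

def photocarrier_density(intensity_mw_cm2,
                         ref_intensity=580.0, ref_density=3.5e16):
    """Scale the measured carrier density with the square root of excitation
    intensity, following the trend reported for the P3HT:PCBM film."""
    return ref_density * math.sqrt(intensity_mw_cm2 / ref_intensity)

# 1 sun (AM1.5) is commonly approximated as 100 mW/cm^2
n_1sun = photocarrier_density(100.0)
print(f"{n_1sun:.2e} cm^-3")  # mid-10^16 range at 1 sun
```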

  12. Enhanced performance of CdS/CdTe thin-film devices through temperature profiling techniques applied to close-spaced sublimation deposition

    Energy Technology Data Exchange (ETDEWEB)

    Xiaonan Li; Sheldon, P.; Moutinho, H.; Matson, R. [National Renewable Energy Lab., Golden, CO (United States)

    1996-05-01

    The authors describe a methodology developed and applied to the close-spaced sublimation technique for thin-film CdTe deposition. The developed temperature profiles consisted of three discrete temperature segments, which the authors called the nucleation, plugging, and annealing temperatures. They have demonstrated that these temperature profiles can be used to grow large-grain material, plug pinholes, and improve CdS/CdTe photovoltaic device performance by about 15%. The improved material and device properties have been obtained while maintaining deposition temperatures compatible with commercially available substrates. This temperature profiling technique can be easily applied to a manufacturing environment by adjusting the temperature as a function of substrate position instead of time.

  13. Restoration of badlands through applying bio-engineering techniques in active gully systems: Evidence from the Ecuadorian Andes

    Science.gov (United States)

    Borja, P.; Vanacker, V.; Alvarado, D.; Govers, G.

    2012-04-01

    A better insight in the processes controlling sediment generation, transport and deposition in badlands is necessary to enhance restoration of degraded soils through eco-engineering techniques. In this study, we evaluate the effect of different bio-engineering measures on soil and slope stability. Five micro-catchments (of 0.2 to 5 ha) were selected within a 3 km2 area in the lower part of the Loreto catchment (Southern Ecuadorian Andes). The micro-catchments differ only by land cover and degree of implementation of soil and water conservation measures. Bio-engineering techniques were used to construct dikes made of fascines of wooden sticks and earth-filled tires in active gully beds, where they are most efficient to reduce water and sediment transport. The experimental design consists of three micro-catchments within highly degraded lands: (DI) micro-catchment with bio-engineering measures concentrated in the active gully beds, (DF) with reforestation of Eucalyptus trees, and (DT) reference situation without any conservation measures. Two micro-catchments were monitored in agricultural lands with (AI) and without (AT) bio-engineering measures in the active gully beds. All catchments were equipped with San Dimas flumes to measure water flow, and sediment traps to monitor sediment export. In the (active) gully beds, various parameters related to gully stability (soil water content, bed elevation, vegetation cover, sedimentation/erosion) were monitored at weekly intervals. First results show that bio-engineering techniques are efficient to stabilize active gully beds through a reduction of the rapid concentration of excess rainfall and the sediment production and transfer. Fascines made of wooden sticks are far more efficient than earth-filled tires. Sediment deposition behind dikes is strongly dependent on precedent rainfall events, and the slope and vegetation cover of the gully floor. The sediment deposited facilitates colonization of the gully floor by native

  14. Improved general physical fitness of young swimmers by applying in the training process of endogenous hypoxic breathing techniques

    Directory of Open Access Journals (Sweden)

    Furman Y.M.

    2014-06-01

Full Text Available Purpose: to examine the effect on the general physical preparedness of young swimmers of an artificially created state of hypercapnic normobaric hypoxia. Material: the study involved 21 swimmers aged 13-14 years with sports qualifications at the third and second sports categories. Results: an original method of working with young swimmers is presented. Studies were conducted over 16 weeks of the preparatory period of the annual macrocycle. The average index of general endurance, based on 800 m race results, improved by 2.80%; speed-strength endurance increased by 8.24% and dynamic strength endurance by 18.77%. During the formative experiment, the speed, agility, static endurance, flexibility and explosive strength of the athletes in the first experimental group did not change significantly. Conclusions: it was found that the use of the proposed technique provides a statistically significant increase in overall endurance, speed-strength endurance and dynamic strength endurance.

  15. A Non-Invasive Thermal Drift Compensation Technique Applied to a Spin-Valve Magnetoresistive Current Sensor

    Science.gov (United States)

    Moreno, Jaime Sánchez; Muñoz, Diego Ramírez; Cardoso, Susana; Berga, Silvia Casans; Antón, Asunción Edith Navarro; de Freitas, Paulo Jorge Peixeiro

    2011-01-01

    A compensation method for the sensitivity drift of a magnetoresistive (MR) Wheatstone bridge current sensor is proposed. The technique was carried out by placing a ruthenium temperature sensor and the MR sensor to be compensated inside a generalized impedance converter circuit (GIC). No internal modification of the sensor bridge arms is required so that the circuit is capable of compensating practical industrial sensors. The method is based on the temperature modulation of the current supplied to the bridge, which improves previous solutions based on constant current compensation. Experimental results are shown using a microfabricated spin-valve MR current sensor. The temperature compensation has been solved in the interval from 0 °C to 70 °C measuring currents from −10 A to +10 A. PMID:22163748

  18. Systematic study of the effects of mass and time scaling techniques applied in numerical rock mechanics simulations

    Science.gov (United States)

    Heinze, Thomas; Jansen, Gunnar; Galvan, Boris; Miller, Stephen A.

    2016-08-01

Numerical modeling is a well established tool in rock mechanics studies investigating a wide range of problems. Implicit methods for solving linear equations have the advantage of being unconditionally stable, while explicit methods, although limited by the time step, are often used because of their limited memory demand, their scalability in parallel computing, and simple implementation of complex boundary conditions. In numerical modeling of explicit elastoplastic dynamics where the time step is limited by the material density, mass scaling techniques can be used to overcome this limit and significantly reduce computation time. While often used, the effect of mass and time scaling and how it may influence the numerical results is rarely mentioned in publications, and choosing the right scaling technique is typically performed by trial and error. To our knowledge, no systematic studies have addressed how mass scaling might affect the numerical results. In this paper, we present results from an extensive and systematic study of the influence of mass and time scaling on the behavior of a variety of rock-mechanical models. We employ a finite difference scheme to model uniaxial and biaxial compression experiments using different mass and time scaling factors, and with physical models of increasing complexity up to a cohesion-weakening frictional-strengthening (CWFS) model. We also introduce a normalized energy ratio to assist in analyzing mass scaling effects. We find the tested models to be less sensitive to time scaling than to mass scaling, so mass scaling has higher potential for decreasing computational costs. However, we also demonstrate that mass scaling may lead to quantitatively wrong results, so care must be taken in interpreting stress values when mass scaling is used in complicated rock mechanics simulations. Mass scaling significantly influences the stress-strain response of numerical rocks because it acts as an artificial hardening agent on the rock.
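The mechanism by which mass scaling buys computation time is the CFL-type stability limit of explicit schemes: the stable step is roughly the element size divided by the elastic wave speed c = sqrt(E/ρ), so multiplying the density by a factor s stretches the step by sqrt(s). A sketch with generic rock-like values (assumed for illustration, not the paper's models):

```python
import math

def stable_dt(elem_size, youngs_modulus, density, mass_scale=1.0):
    """CFL-type stable time step for explicit elastodynamics:
    dt = L / c with wave speed c = sqrt(E / rho). Mass scaling multiplies
    rho by mass_scale, so dt grows by a factor sqrt(mass_scale)."""
    c = math.sqrt(youngs_modulus / (density * mass_scale))
    return elem_size / c

# Generic rock-like values: E = 50 GPa, rho = 2700 kg/m^3, 1 cm elements
dt_real = stable_dt(0.01, 50e9, 2700.0)
dt_scaled = stable_dt(0.01, 50e9, 2700.0, mass_scale=100.0)  # 10x larger step
print(dt_real, dt_scaled)
```

The same sqrt(s) factor that enlarges the step also raises the inertia of every node, which is exactly the artificial-hardening side effect the paper warns about.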

  19. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    Science.gov (United States)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices. The MARS model, as a nonparametric method, is an adaptive method for regression that is well suited to problems with high dimensions and several variables. The semi-parametric technique is based on smoothing splines, a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock price with each method. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
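A fitted MARS model is just an intercept plus a sum of coefficient-weighted hinge functions max(0, ±(x − knot)), which is what makes it adaptive and piecewise linear. The coefficients and knot below are invented for illustration, not the Tehran-stock-exchange fit.

```python
def hinge(x, knot, direction):
    """MARS basis function: max(0, x - knot) for direction=+1,
    max(0, knot - x) for direction=-1."""
    return max(0.0, direction * (x - knot))

def mars_predict(x, intercept, terms):
    """Evaluate a MARS model: intercept plus coefficient-weighted hinges.
    terms is a list of (coefficient, knot, direction) triples."""
    return intercept + sum(c * hinge(x, k, d) for c, k, d in terms)

# Invented one-variable model: flat at 2.0, rising with slope 1.5 past knot 5.0
model = (2.0, [(1.5, 5.0, +1)])
ys = [mars_predict(x, *model) for x in (3.0, 5.0, 7.0)]
print(ys)  # flat before the knot, linear growth after it
```

Fitting consists of greedily adding such hinge pairs (forward pass) and pruning them back; the knots are chosen from the data, which is how MARS selects its 4 influential variables.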

  20. Comparison of partial least squares and lasso regression techniques as applied to laser-induced breakdown spectroscopy of geological samples

    Science.gov (United States)

    Dyar, M. D.; Carmosino, M. L.; Breves, E. A.; Ozanne, M. V.; Clegg, S. M.; Wiens, R. C.

    2012-04-01

    A remote laser-induced breakdown spectrometer (LIBS) designed to simulate the ChemCam instrument on the Mars Science Laboratory Rover Curiosity was used to probe 100 geologic samples at a 9-m standoff distance. ChemCam consists of an integrated remote LIBS instrument that will probe samples up to 7 m from the mast of the rover and a remote micro-imager (RMI) that will record context images. The elemental compositions of 100 igneous and highly-metamorphosed rocks are determined with LIBS using three variations of multivariate analysis, with the goal of improving analytical accuracy. Two forms of partial least squares (PLS) regression are employed with finely tuned parameters: PLS-1 regresses a single response variable (elemental concentration) against the observation variables (spectra, i.e., intensity at each of 6144 spectrometer channels), while PLS-2 simultaneously regresses multiple response variables (concentrations of the ten major elements in rocks) against the predictor variables, taking advantage of natural correlations between elements. Those results are contrasted with those from a third multivariate technique, the least absolute shrinkage and selection operator (lasso), a penalized shrinkage regression method that selects the specific channels for each element that explain the most variance in the concentration of that element. To make this comparison, we use results of cross-validation and of held-out testing, and employ unscaled and uncentered spectral intensity data because all of the input variables are already in the same units. Results demonstrate that the lasso, PLS-1, and PLS-2 all yield comparable accuracy for this dataset. However, the methods differ greatly in interpretability in terms of the fundamental understanding of LIBS emissions they afford.
PLS techniques generate principal components, linear combinations of intensities at any number of spectrometer channels, which explain as much variance in the