WorldWideScience

Sample records for neuromath advanced methods

  1. Advances in Numerical Methods

    CERN Document Server

    Mastorakis, Nikos E

    2009-01-01

    Features contributions focused on significant aspects of current numerical methods and computational mathematics. The book carries chapters on advanced methods and various variations on known techniques that can solve difficult scientific problems efficiently.

  2. Advanced differential quadrature methods

    CERN Document Server

    Zong, Zhi

    2009-01-01

    Modern Tools to Perform Numerical Differentiation. The original direct differential quadrature (DQ) method has been known to fail for problems with strong nonlinearity and material discontinuity as well as for problems involving singularity, irregularity, and multiple scales. But now researchers in applied mathematics, computational mechanics, and engineering have developed a range of innovative DQ-based methods to overcome these shortcomings. Advanced Differential Quadrature Methods explores new DQ methods and uses these methods to solve problems beyond the capabilities of the direct DQ method. After a basic introduction to the direct DQ method, the book presents a number of DQ methods, including complex DQ, triangular DQ, multi-scale DQ, variable order DQ, multi-domain DQ, and localized DQ. It also provides a mathematical compendium that summarizes Gauss elimination, the Runge-Kutta method, complex analysis, and more. The final chapter contains three codes written in the FORTRAN language, enabling readers to q...
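
    As a rough, self-contained sketch of the direct DQ idea (an independent Python illustration using the standard Lagrange-polynomial weighting formulas; the book's own codes are in FORTRAN, and the grid here is an arbitrary choice):

```python
import numpy as np

def dq_weights(x):
    """First-order differential quadrature weighting matrix.

    a[i, j] approximates d/dx at x[i] from function values at all
    grid points: f'(x_i) ~ sum_j a[i, j] * f(x_j).
    """
    n = len(x)
    # M[i] = product over k != i of (x_i - x_k)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    M = diff.prod(axis=1)
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i, j] = M[i] / ((x[i] - x[j]) * M[j])
    # Diagonal from the consistency condition: each row sums to zero
    np.fill_diagonal(a, -a.sum(axis=1))
    return a

# Differentiate f(x) = x**3 on 7 grid points (exact for this degree)
x = np.linspace(-1.0, 1.0, 7)
A = dq_weights(x)
deriv = A @ x**3
print(np.max(np.abs(deriv - 3 * x**2)))  # near machine precision
```

    Because the DQ weights reproduce derivatives of polynomials up to the grid size exactly, the cubic test function is differentiated to machine precision; the book's advanced variants address the cases (singularity, discontinuity, multiple scales) where this direct form fails.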

  3. Advances in iterative methods

    International Nuclear Information System (INIS)

    Beauwens, B.; Arkuszewski, J.; Boryszewicz, M.

    1981-01-01

    Results obtained in the field of linear iterative methods within the Coordinated Research Program on Transport Theory and Advanced Reactor Calculations are summarized. The general convergence theory of linear iterative methods is essentially based on the properties of nonnegative operators on ordered normed spaces. The following aspects of this theory have been improved: new comparison theorems for regular splittings, generalization of the notions of M- and H-matrices, and new interpretations of classical convergence theorems for positive-definite operators. The estimation of asymptotic convergence rates was developed with two purposes: the analysis of model problems and the optimization of relaxation parameters. In the framework of factorization iterative methods, model problem analysis is needed to investigate whether the increased computational complexity of higher-order methods offsets their improved asymptotic convergence rates, as well as to appreciate the effect of standard relaxation techniques (polynomial relaxation). On the other hand, the optimal use of factorization iterative methods requires the development of adequate relaxation techniques and their optimization. The relative performances of a few possibilities have been explored for model problems. Presently, the best results have been obtained with optimal diagonal-Chebyshev relaxation.
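
    As a minimal sketch of a regular splitting and its asymptotic convergence rate (a generic Jacobi iteration on a 1D model problem; this is an illustration of the basic concepts, not of the report's factorization or Chebyshev-relaxation methods):

```python
import numpy as np

# 1D Poisson model problem: tridiagonal, symmetric positive definite
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Jacobi regular splitting A = M - N with M = diag(A)
M = np.diag(np.diag(A))
N = M - A
G = np.linalg.solve(M, N)                    # iteration matrix M^{-1} N
rho = np.max(np.abs(np.linalg.eigvals(G)))   # asymptotic convergence rate

x = np.zeros(n)
for _ in range(2000):                        # x_{k+1} = M^{-1} (N x_k + b)
    x = np.linalg.solve(M, N @ x + b)

err = np.max(np.abs(x - np.linalg.solve(A, b)))
print(rho, err)  # rho < 1 guarantees convergence
```

    The spectral radius rho of the iteration matrix is exactly the asymptotic rate discussed above; relaxation techniques such as polynomial (Chebyshev) relaxation aim to shrink it.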

  4. Advances in Biosensing Methods

    Directory of Open Access Journals (Sweden)

    Reema Taneja

    2007-02-01

    Full Text Available A fractal analysis is presented for the binding and dissociation (if applicable) kinetics of analyte-receptor reactions occurring on biosensor surfaces. The applications of the biosensors have appeared in the recent literature. The examples provided together give the reader a perspective on the advances in biosensors being used to detect analytes of interest. This should also stimulate interest in applying biosensors to other areas of application. The fractal analysis permits the evaluation of the rate constants for binding and dissociation (if applicable) for the analyte-receptor reactions occurring on biosensor surfaces. The fractal dimension provides a quantitative measure of the degree of heterogeneity on the biosensor surface. Predictive relations are presented that relate the binding coefficient with the degree of heterogeneity or the fractal dimension on the biosensor surface.

  5. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay; Law, Kody; Suciu, Carina

    2017-01-01

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
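
    As a toy sketch of the telescoping identity in the i.i.d. setting described above (a standard Euler discretization of geometric Brownian motion; the parameter values and payoff are arbitrary illustrations, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_pair(l, n_samples, T=1.0, s0=1.0, mu=0.05, sigma=0.2):
    """One MLMC term: samples of P_0 (l = 0) or P_l - P_{l-1} (l > 0).

    The fine level uses 2**l Euler steps, the coarse level 2**(l-1),
    driven by the SAME Brownian increments (the coupling that keeps
    the variance of the corrections small). Payoff is simply S_T,
    so the exact answer is s0 * exp(mu * T).
    """
    nf = 2**l
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, nf))
    sf = np.full(n_samples, s0)          # fine path
    for i in range(nf):
        sf = sf * (1 + mu * dt + sigma * dW[:, i])
    if l == 0:
        return sf
    sc = np.full(n_samples, s0)          # coarse path, paired increments
    dWc = dW[:, 0::2] + dW[:, 1::2]
    for i in range(nf // 2):
        sc = sc * (1 + mu * 2 * dt + sigma * dWc[:, i])
    return sf - sc

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
L = 5
est = sum(euler_pair(l, 20000).mean() for l in range(L + 1))
print(est)  # close to exp(0.05) ~ 1.0513
```

    Each correction term has small variance thanks to the coupling, so most samples can be spent on the cheap coarse levels; the MCMC/SMC strategies reviewed in the article address the harder case where such exact coupled sampling is unavailable.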

  7. Advanced methods of fatigue assessment

    CERN Document Server

    Radaj, Dieter

    2013-01-01

    This book presents advanced methods of brittle fracture and fatigue assessment. The Neuber concept of fictitious notch rounding is enhanced with regard to theory and application. The stress intensity factor concept for cracks is extended to pointed and rounded corner notches as well as to locally elastic-plastic material behaviour. The averaged strain energy density within a circular sector volume around the notch tip is shown to be suitable for strength assessments. Finally, the various implications of cyclic plasticity on fatigue crack growth are explained, with emphasis laid on the ΔJ-integral approach. The book continues the expositions of the authors' well-known German-language reference work 'Ermüdungsfestigkeit – Grundlagen für Ingenieure' (Fatigue strength – fundamentals for engineers).

  8. Advanced construction methods in ACR

    International Nuclear Information System (INIS)

    Elgohary, M.; Choy, E.; Yu, S.K.W.

    2002-01-01

    The ACR - Advanced CANDU Reactor, developed by Atomic Energy of Canada Limited (AECL), is designed with constructability as a major requirement during all project phases, from the concept design stage to the detail design stage. This necessitated a much more comprehensive approach to including constructability considerations in the design to ensure that the construction duration is met. For the ACR-700, a project schedule of 48 months has been developed for the nth replicated unit, with a 36-month construction period from First Concrete to Fuel Load. An overall construction strategy has been developed for the ACR that builds on the success of the construction methods proven on the Qinshan CANDU 6 project. The overall construction strategy comprises the 'Open Top' construction technique using a Very Heavy Lift crane and parallel construction activities, with extensive modularization and prefabrication. In addition, significant applications of up-to-date construction technology will be implemented, e.g. large-volume concrete pours, prefabricated rebar, climbing forms, composite structures, prefabricated permanent formwork, and automatic welding. Utilization of the latest electronic technology tools, such as 3D CADD modelling, yields a very high-quality, clash-free product, allows construction to be completed 'right the first time', and eliminates rework. Integration of 3D CADD models with scheduling tools such as Primavera has allowed development of actual construction sequences and an iterative approach to schedule verification and improvement. Modularization and prefabrication are major features of the ACR design in order to achieve the project schedule. For the reactor building, approximately 80% of the volume will be installed as modules or prefabricated assemblies. This ensures critical path activities are achieved. This paper examines the advanced construction methods implemented in the design in order to

  9. Advanced accelerator methods: The cyclotrino

    International Nuclear Information System (INIS)

    Welch, J.J.; Bertsche, K.J.; Friedman, P.G.; Morris, D.E.; Muller, R.A.

    1987-04-01

    Several new and unusual advanced techniques used in the small cyclotron are described. The cyclotron is run at low energy, using negative ions and at high harmonics. Electrostatic focusing is used exclusively. The ion source and injection system are located in the center; this source unfortunately does not provide enough current, but the new system design should solve the problem. An electrostatic extractor that runs at low voltage, under 5 kV, and a microchannel plate detector able to discriminate low-energy ions from the 14C are used. The resolution is sufficient for 14C dating, and a higher-intensity source should allow dating of a milligram-size sample of 30,000-year-old material with less than 10% uncertainty.

  10. Advanced Fine Particulate Characterization Methods

    Energy Technology Data Exchange (ETDEWEB)

    Steven Benson; Lingbu Kong; Alexander Azenkeng; Jason Laumb; Robert Jensen; Edwin Olson; Jill MacKenzie; A.M. Rokanuzzaman

    2007-01-31

    The characterization and control of emissions from combustion sources are of significant importance in improving local and regional air quality. Such emissions include fine particulate matter, organic carbon compounds, and NOx and SO2 gases, along with mercury and other toxic metals. This project involved four activities: Further Development of Analytical Techniques for PM10 and PM2.5 Characterization and Source Apportionment and Management, Organic Carbonaceous Particulate and Metal Speciation for Source Apportionment Studies, Quantum Modeling, and High-Potassium Carbon Production with Biomass-Coal Blending. The key accomplishments included the development of improved automated methods to characterize the inorganic and organic components of particulate matter. The methods involved the use of scanning electron microscopy and x-ray microanalysis for the inorganic fraction and a combination of extractive methods with near-edge x-ray absorption fine structure to characterize the organic fraction. These methods have direct application for source apportionment studies of PM because they provide detailed inorganic analysis along with total organic and elemental carbon (OC/EC) quantification. Quantum modeling using density functional theory (DFT) calculations was used to further elucidate a recently developed mechanistic model for mercury speciation in coal combustion systems and interactions on activated carbon. Reaction energies, enthalpies, free energies, and binding energies of Hg species to the prototype molecules were derived from the data obtained in these calculations. Bimolecular rate constants for the various elementary steps in the mechanism have been estimated using the hard-sphere collision theory approximation, and the results seem to indicate that extremely fast kinetics could be involved in these surface reactions. Activated carbon was produced from a blend of lignite coal from the Center Mine in North Dakota and

  11. Advanced computational electromagnetic methods and applications

    CERN Document Server

    Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya

    2015-01-01

    This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. This book is designed to extend existing literature to the latest development in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency and time domain integral equations, and statistics methods in bio-electromagnetics.
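
    As an illustration of the simplest of the techniques named above, a minimal one-dimensional FDTD sketch (normalized units, Courant number 1; the grid size and Gaussian source are arbitrary choices, not from the book):

```python
import numpy as np

# 1D FDTD (Yee scheme) in normalized units with Courant number S = 1.
# E and H live on staggered grids and leapfrog in time.
n_cells, n_steps = 200, 60
E = np.zeros(n_cells)
H = np.zeros(n_cells - 1)
for t in range(n_steps):
    H += np.diff(E)                              # update H from curl of E
    E[1:-1] += np.diff(H)                        # update E from curl of H
    E[100] += np.exp(-((t - 30) ** 2) / 100.0)   # soft Gaussian source
print(float(np.abs(E).max()))  # pulse amplitude, finite and bounded
```

    With S = 1 the 1D scheme propagates the injected pulse one cell per step without numerical dispersion; the advanced MoM/FEM/FDTD techniques in the book build far beyond this baseline.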

  12. Development of advanced MCR task analysis methods

    International Nuclear Information System (INIS)

    Na, J. C.; Park, J. H.; Lee, S. K.; Kim, J. K.; Kim, E. S.; Cho, S. B.; Kang, J. S.

    2008-07-01

    This report describes a task analysis methodology for advanced HSI designs. Task analyses were performed using procedure-based hierarchical task analysis and task decomposition methods, and the results were recorded in a database. Using the task analysis results, we developed a static prototype of the advanced HSI and human factors engineering verification and validation methods for an evaluation of the prototype. In addition to the procedure-based task analysis methods, workload estimation based on the analysis of task performance times, as well as analyses for the design of information structures and interaction structures, will be necessary.

  13. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  14. Advanced analysis methods in particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Bhat, Pushpalatha C.; /Fermilab

    2010-10-01

    Each generation of high energy physics experiments is grander in scale than the previous - more powerful, more complex and more demanding in terms of data handling and analysis. The spectacular performance of the Tevatron and the beginning of operations of the Large Hadron Collider, have placed us at the threshold of a new era in particle physics. The discovery of the Higgs boson or another agent of electroweak symmetry breaking and evidence of new physics may be just around the corner. The greatest challenge in these pursuits is to extract the extremely rare signals, if any, from huge backgrounds arising from known physics processes. The use of advanced analysis techniques is crucial in achieving this goal. In this review, I discuss the concepts of optimal analysis, some important advanced analysis methods and a few examples. The judicious use of these advanced methods should enable new discoveries and produce results with better precision, robustness and clarity.

  15. Advanced methods in diagnosis and therapy

    International Nuclear Information System (INIS)

    1987-01-01

    This important meeting covers the following topics: use and optimization of monoclonal antibodies in oncology: - Tumor markers: clinical follow-up of patients through tumor marker serum determinations. - Cancer and medical imaging: the use of monoclonal antibodies in immunoscintigraphy. - Immunoradiotherapy: monoclonal antibodies as therapeutic vectors. Advanced methods in diagnosis: - Contribution of monoclonal antibodies to modern immunochemistry (RIA, EIA). - Interest of monoclonal antibodies in immunohistochemical pathology diagnosis. - In vitro diagnosis, future prospects: receptors and oncogenes. - Immunofluoroassay: a new sensitive immunoanalytical procedure with broad applications. Recent advances in brachytherapy: - Interest of computer processing. Blood product irradiation: - Interest in transfusion and bone marrow transplantation

  16. Mathematics for natural scientists II advanced methods

    CERN Document Server

    Kantorovich, Lev

    2016-01-01

    This book covers the advanced mathematical techniques useful for physics and engineering students, presented in a form accessible to physics students, avoiding precise mathematical jargon and laborious proofs. Instead, all proofs are given in a simplified form that is clear and convincing for a physicist. Examples, where appropriate, are given from physics contexts. Both solved and unsolved problems are provided in each chapter. Mathematics for Natural Scientists II: Advanced Methods is the second of two volumes. It follows the first volume on Fundamentals and Basics.

  17. Advances of evolutionary computation methods and operators

    CERN Document Server

    Cuevas, Erik; Oliva Navarro, Diego Alberto

    2016-01-01

    The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.

  18. Advanced statistical methods in data science

    CERN Document Server

    Chen, Jiahua; Lu, Xuewen; Yi, Grace; Yu, Hao

    2016-01-01

    This book gathers invited presentations from the 2nd Symposium of the ICSA-Canada Chapter held at the University of Calgary from August 4-6, 2015. The aim of this Symposium was to promote advanced statistical methods in big-data sciences, to allow researchers to exchange ideas on statistics and data science, and to embrace the challenges and opportunities of statistics and data science in the modern world. It addresses diverse themes in advanced statistical analysis in big-data sciences, including methods for administrative data analysis, survival data analysis, missing data analysis, high-dimensional and genetic data analysis, longitudinal and functional data analysis, the design and analysis of studies with response-dependent and multi-phase designs, time series and robust statistics, and statistical inference based on likelihood, empirical likelihood and estimating functions. The editorial group selected 14 high-quality presentations from this successful symposium and invited the presenters to prepare a fu...

  19. Editorial: Latest methods and advances in biotechnology.

    Science.gov (United States)

    Lee, Sang Yup; Jungbauer, Alois

    2014-01-01

    The latest "Biotech Methods and Advances" special issue of Biotechnology Journal continues the BTJ tradition of featuring the latest breakthroughs in biotechnology. The special issue is edited by our Editors-in-Chief, Prof. Sang Yup Lee and Prof. Alois Jungbauer and covers a wide array of topics in biotechnology, including the perennial favorite workhorses of the biotech industry, Chinese hamster ovary (CHO) cell and Escherichia coli. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Advanced Analysis Methods in High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Pushpalatha C. Bhat

    2001-10-03

    During the coming decade, high energy physics experiments at the Fermilab Tevatron and around the globe will use very sophisticated equipment to record unprecedented amounts of data in the hope of making major discoveries that may unravel some of Nature's deepest mysteries. The discovery of the Higgs boson and signals of new physics may be around the corner. The use of advanced analysis techniques will be crucial in achieving these goals. The author discusses some of the novel methods of analysis that could prove to be particularly valuable for finding evidence of any new physics, for improving precision measurements and for exploring parameter spaces of theoretical models.

  1. Advances in Packaging Methods, Processes and Systems

    Directory of Open Access Journals (Sweden)

    Nitaigour Premchand Mahalik

    2014-10-01

    Full Text Available The food processing and packaging industry is becoming a multi-trillion dollar global business. The reason is that the recent increase in incomes in traditionally less economically developed countries has led to a rise in standards of living that includes a significantly higher consumption of packaged foods. As a result, food safety guidelines have become more stringent than ever. At the same time, the number of research and educational institutions—that is, the number of potential researchers and stakeholders—has increased in the recent past. This paper reviews recent developments in food processing and packaging (FPP, keeping in view the aforementioned advancements and bearing in mind that FPP is an interdisciplinary area in that materials, safety, systems, regulation, and supply chains play vital roles. In particular, the review covers processing and packaging principles, standards, interfaces, techniques, methods, and state-of-the-art technologies that are currently in use or in development. Recent advances such as smart packaging, non-destructive inspection methods, printing techniques, application of robotics and machinery, automation architectures, software systems and interfaces are reviewed.

  2. Advanced continuous cultivation methods for systems microbiology.

    Science.gov (United States)

    Adamberg, Kaarel; Valgepea, Kaspar; Vilu, Raivo

    2015-09-01

    Increasing the throughput of systems biology-based experimental characterization of in silico-designed strains has great potential for accelerating the development of cell factories. For this, analysis of metabolism in the steady state is essential as only this enables the unequivocal definition of the physiological state of cells, which is needed for the complete description and in silico reconstruction of their phenotypes. In this review, we show that for a systems microbiology approach, high-resolution characterization of metabolism in the steady state--growth space analysis (GSA)--can be achieved by using advanced continuous cultivation methods termed changestats. In changestats, an environmental parameter is continuously changed at a constant rate within one experiment whilst maintaining cells in the physiological steady state similar to chemostats. This increases the resolution and throughput of GSA compared with chemostats, and, moreover, enables following of the dynamics of metabolism and detection of metabolic switch-points and optimal growth conditions. We also describe the concept, challenge and necessary criteria of the systematic analysis of steady-state metabolism. Finally, we propose that such systematic characterization of the steady-state growth space of cells using changestats has value not only for fundamental studies of metabolism, but also for systems biology-based metabolic engineering of cell factories.

  3. Advancing UAS methods for monitoring coastal environments

    Science.gov (United States)

    Ridge, J.; Seymour, A.; Rodriguez, A. B.; Dale, J.; Newton, E.; Johnston, D. W.

    2017-12-01

    Utilizing fixed-wing Unmanned Aircraft Systems (UAS), we are working to improve coastal monitoring by increasing the accuracy, precision, temporal resolution, and spatial coverage of habitat distribution maps. Generally, multirotor aircraft are preferred for precision imaging, but recent advances in fixed-wing technology have greatly increased their capabilities and application for fine-scale (decimeter-centimeter) measurements. Present mapping methods employed by North Carolina coastal managers involve expensive, time-consuming, and localized observation of coastal environments, which often lacks the frequency necessary to make timely management decisions. For example, it has taken several decades to fully map oyster reefs along the NC coast, making it nearly impossible to track trends in oyster reef populations responding to harvesting pressure and water quality degradation. It is difficult for the state to employ manned flights for collecting aerial imagery to monitor intertidal oyster reefs, because flights are usually conducted after seasonal increases in turbidity. In addition, post-storm monitoring of coastal erosion from manned platforms is often conducted days after the event and collects oblique aerial photographs that are difficult to use for accurately measuring change. Here, we describe how fixed-wing UAS and standard RGB sensors can be used to rapidly quantify and assess critical coastal habitats (e.g., barrier islands, oyster reefs, etc.), providing increased temporal frequency to isolate long-term and event-driven (storms, harvesting) impacts. Furthermore, drone-based approaches can accurately image intertidal habitats as well as resolve information such as vegetation density and bathymetry from shallow submerged areas. We obtain UAS imagery of a barrier island and oyster reefs under ideal conditions (low tide, turbidity, and sun angle) to create high-resolution (cm-scale) maps and digital elevation models to assess habitat condition

  4. Damped time advance methods for particles and EM fields

    International Nuclear Information System (INIS)

    Friedman, A.; Ambrosiano, J.J.; Boyd, J.K.; Brandon, S.T.; Nielsen, D.E. Jr.; Rambo, P.W.

    1990-01-01

    Recent developments in the application of damped time advance methods to plasma simulations include the synthesis of implicit and explicit 'adjustably damped' second-order accurate methods for particle motion and electromagnetic field propagation. This paper discusses these methods.

  5. Advanced Aqueous Phase Catalyst Development using Combinatorial Methods, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Combinatorial methods are proposed to develop advanced Aqueous Oxidation Catalysts (AOCs) with the capability to mineralize organic contaminants present in effluents...

  6. Advanced Source Deconvolution Methods for Compton Telescopes

    Science.gov (United States)

    Zoglauer, Andreas

    The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution have been possible, creating an extremely vast, but also extremely sparsely sampled, data space. Unfortunately, the optimum way to analyze the data from the next generation of Compton telescopes has not yet been found, one which can retrieve all source parameters (location, spectrum, polarization, flux) and achieves the best possible resolution and sensitivity at the same time. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist: First, what are the best data space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list mode); both together have not been possible up to now. Here we propose to develop a two-stage hybrid reconstruction method which combines the best aspects of both. Using a proof-of-concept implementation we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach to get the flux right and a

  7. Advanced Methods of Biomedical Signal Processing

    CERN Document Server

    Cerutti, Sergio

    2011-01-01

    This book grew out of the IEEE-EMBS Summer Schools on Biomedical Signal Processing, which have been held annually since 2002 to provide the participants state-of-the-art knowledge on emerging areas in biomedical engineering. Prominent experts in the areas of biomedical signal processing, biomedical data treatment, medicine, signal processing, system biology, and applied physiology introduce novel techniques and algorithms as well as their clinical or physiological applications. The book provides an overview of a compelling group of advanced biomedical signal processing techniques, such as mult

  8. Advanced Method of the Elastomagnetic Sensors Calibration

    Directory of Open Access Journals (Sweden)

    Mikulas Prascak

    2004-01-01

    Full Text Available The elastomagnetic (EM) method is a highly sensitive non-contact evaluation method for measuring tensile and compressive stress in steel. The latest development of measuring devices and EM sensors has shown that the thermomagnetic phenomenon has a strong influence on the accuracy of EM sensor calibration. To eliminate the influence of this effect, a two-dimensional regression method is presented.
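
    The two-dimensional regression idea can be sketched as follows; the calibration data, drift model, and coefficient values here are hypothetical illustrations, not values from the paper:

```python
import numpy as np

# Hypothetical calibration data: EM sensor reading vs. applied stress
# (MPa) and temperature (deg C). The temperature terms stand in for the
# thermomagnetic drift that the calibration must compensate.
rng = np.random.default_rng(1)
stress = rng.uniform(0.0, 400.0, 200)
temp = rng.uniform(10.0, 50.0, 200)
true = 5.0 + 0.8 * stress - 1.5 * temp + 0.002 * stress * temp
reading = true + rng.normal(0.0, 0.5, 200)  # measurement noise

# Two-dimensional least-squares regression: reading ~ 1, s, t, s*t
X = np.column_stack([np.ones_like(stress), stress, temp, stress * temp])
coef, *_ = np.linalg.lstsq(X, reading, rcond=None)
print(coef)  # recovers roughly [5.0, 0.8, -1.5, 0.002]
```

    Fitting the surface in both variables at once lets the stress sensitivity be read off with the temperature-dependent term separated out, which is the point of calibrating in two dimensions rather than at a single fixed temperature.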

  9. Recent advances in boundary element methods

    CERN Document Server

    Manolis, GD

    2009-01-01

    Addresses the needs of the computational mechanics research community in terms of information on boundary integral equation-based methods and techniques applied to a variety of fields. This book collects both original and review articles on contemporary Boundary Element Methods (BEM) as well as on the Mesh Reduction Methods (MRM).

  10. Advanced methods in teaching reactor physics

    International Nuclear Information System (INIS)

    Snoj, Luka; Kromar, Marjan; Zerovnik, Gasper; Ravnik, Matjaz

    2011-01-01

    Modern computer codes allow detailed neutron transport calculations. In combination with advanced 3D visualization software capable of treating large amounts of data in real time they form a powerful tool that can be used as a convenient modern educational tool for (nuclear power plant) operators, nuclear engineers, students and specialists involved in reactor operation and design. Visualization is applicable not only in education and training, but also as a tool for fuel management, core analysis and irradiation planning. The paper treats the visualization of neutron transport in different moderators, neutron flux and power distributions in two nuclear reactors (TRIGA type research reactor and typical PWR). The distributions are calculated with MCNP and CORD-2 computer codes and presented using Amira software.

  11. Advanced methods in teaching reactor physics

    Energy Technology Data Exchange (ETDEWEB)

    Snoj, Luka, E-mail: luka.snoj@ijs.s [Jozef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Kromar, Marjan, E-mail: marjan.kromar@ijs.s [Jozef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Zerovnik, Gasper, E-mail: gasper.zerovnik@ijs.s [Jozef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Ravnik, Matjaz [Jozef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2011-04-15

    Modern computer codes allow detailed neutron transport calculations. In combination with advanced 3D visualization software capable of treating large amounts of data in real time they form a powerful tool that can be used as a convenient modern educational tool for (nuclear power plant) operators, nuclear engineers, students and specialists involved in reactor operation and design. Visualization is applicable not only in education and training, but also as a tool for fuel management, core analysis and irradiation planning. The paper treats the visualization of neutron transport in different moderators, neutron flux and power distributions in two nuclear reactors (TRIGA type research reactor and typical PWR). The distributions are calculated with MCNP and CORD-2 computer codes and presented using Amira software.

  12. Advances in iterative methods for nonlinear equations

    CERN Document Server

    Busquier, Sonia

    2016-01-01

    This book focuses on the approximation of nonlinear equations using iterative methods. Nine contributions are presented on the construction and analysis of these methods, the coverage encompassing convergence, efficiency, robustness, dynamics, and applications. Many problems are stated in the form of nonlinear equations, using mathematical modeling. In particular, a wide range of problems in Applied Mathematics and in Engineering can be solved by finding the solutions to these equations. The book reveals the importance of studying convergence aspects in iterative methods and shows that selection of the most efficient and robust iterative method for a given problem is crucial to guaranteeing a good approximation. A number of sample criteria for selecting the optimal method are presented, including those regarding the order of convergence, the computational cost, and the stability, including the dynamics. This book will appeal to researchers whose field of interest is related to nonlinear problems and equations...
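One of the selection criteria named above, the order of convergence, can be estimated empirically from consecutive errors of an iterate sequence. The test function and tolerances below are arbitrary choices for illustration, not taken from the book.

```python
import math

# Estimate the order of convergence of an iterative method empirically.
# Newton's method on f(x) = x**2 - 2 (root sqrt(2)) is used as the example.

def newton(f, df, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

xs = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 2.0, 4)
root = math.sqrt(2.0)
errors = [abs(x - root) for x in xs]

# empirical order: p ~ log(e_{n+1}) / log(e_n) while errors are small but nonzero
p = math.log(errors[4]) / math.log(errors[3])
print(1.5 < p < 2.5)  # Newton's method: p close to 2 (quadratic convergence)
```

The same error-ratio diagnostic applies to any of the iterative families the book compares; a higher estimated p per iteration must be weighed against the per-iteration computational cost.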

  13. Advanced finite element method in structural engineering

    CERN Document Server

    Long, Yu-Qiu; Long, Zhi-Fei

    2009-01-01

    This book systematically introduces the research work on the Finite Element Method completed over the past 25 years. Original theoretical achievements and their applications in the fields of structural engineering and computational mechanics are discussed.

  14. Advanced repair methods for enhanced reactor safety

    International Nuclear Information System (INIS)

    Kornfeldt, H.

    1993-01-01

    A few innovative concepts of the ABB Atom Service Division for repair and mitigation techniques for primary systems in nuclear power plants are described. The concepts are based on Shape Memory Alloy (SMA) technology. A basic feature of all the methods is that welding and component replacement are avoided and the radiation dose imposed on maintenance personnel is reduced. The SMA-based repair methods give plant operators new ways to meet increased safety standards and to contain rising maintenance costs. (Z.S.) 4 figs

  15. Advanced verification methods for OVI security ink

    Science.gov (United States)

    Coombs, Paul G.; McCaffery, Shaun F.; Markantes, Tom

    2006-02-01

    OVI security ink +, incorporating OVP security pigment* microflakes, enjoys a history of effective document protection. This security feature not only provides first-line recognition by the person on the street, but also facilitates machine readability. This paper explores the evolution of OVI reader technology from proof-of-concept to miniaturization. Three different instruments have been built to advance the technology of OVI machine verification. A bench-top unit has been constructed which allows users to automatically verify a multitude of different banknotes and OVI images. In addition, high-speed modules were fabricated and tested in a state-of-the-art banknote sorting machine. Both units demonstrate the ability of modern optical components to illuminate and collect light reflected from the interference platelets within OVI ink. Electronic hardware and software convert and process the optical information in milliseconds to accurately determine the authenticity of the security feature. Most recently, OVI ink verification hardware has been miniaturized and simplified, providing yet another platform for counterfeit protection. These latest devices provide a tool for store clerks and bank tellers to unambiguously determine the validity of banknotes in the time it takes the cash drawer to be opened.

  16. Advanced Computational Methods in Bio-Mechanics.

    Science.gov (United States)

    Al Qahtani, Waleed M S; El-Anwar, Mohamed I

    2018-04-15

    A novel partnership between surgeons and machines, made possible by advances in computing and engineering technology, could overcome many of the limitations of traditional surgery. By extending surgeons' ability to plan and carry out surgical interventions more accurately and with less trauma, computer-integrated surgery (CIS) systems could help to improve clinical outcomes and the efficiency of healthcare delivery. CIS systems could have an impact on surgery similar to that long since realised in computer-integrated manufacturing. Mathematical modelling and computer simulation have proved tremendously successful in engineering. Computational mechanics has enabled technological developments in virtually every area of our lives. One of the greatest challenges for mechanists is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. Biomechanics has significant potential for applications in the orthopaedic industry and the performing arts, since the skills needed for these activities are visibly related to the human musculoskeletal and nervous systems. Although biomechanics is widely used nowadays in the orthopaedic industry to design orthopaedic implants for human joints, dental parts, external fixations and other medical devices, numerous research efforts funded by billions of dollars are still under way to build a new future for sports and human healthcare in what is called the biomechanics era.

  17. Core design methods for advanced LMFBRs

    International Nuclear Information System (INIS)

    Chandler, J.C.; Marr, D.R.; McCurry, D.C.; Cantley, D.A.

    1977-05-01

    The multidiscipline approach to advanced LMFBR core design requires an iterative design procedure to obtain a closely coupled design. HEDL's philosophy requires that the designs be coupled to the extent that the lifetimes of the design-limiting fuel pin and the design-limiting duct and the core reactivity lifetime are all equal, and equal to the fuel residence time. The design procedure consists of an iterative loop involving three stages of the design sequence. Stage 1 consists of general mechanical design and reactor physics scoping calculations to arrive at an initial core layout. Stage 2 consists of detailed reactor physics calculations for the core configuration arrived at in Stage 1. Based upon the detailed reactor physics results, a decision is made either to alter the design (Stage 1) or to go to Stage 3. Stage 3 consists of core orificing and detailed component mechanical design calculations. At this point, an assessment is made regarding design adequacy. If the design is inadequate, the entire procedure is repeated until the design is acceptable

  18. Recent advances in coupled-cluster methods

    CERN Document Server

    Bartlett, Rodney J

    1997-01-01

    Today, coupled-cluster (CC) theory has emerged as the most accurate, widely applicable approach for the correlation problem in molecules. Furthermore, the correct scaling of the energy and wavefunction with size (i.e. extensivity) recommends it for studies of polymers and crystals as well as molecules. CC methods have also paid dividends for nuclei, and for certain strongly correlated systems of interest in field theory. In order for CC methods to have achieved this distinction, it has been necessary to formulate new, theoretical approaches for the treatment of a variety of essential quantities

  19. Advanced method for making vitreous waste forms

    International Nuclear Information System (INIS)

    Pope, J.M.; Harrison, D.E.

    1980-01-01

    A process is described for making waste glass that circumvents the problems of dissolving nuclear waste in molten glass at high temperatures. Because the reactive mixing process is independent of the inherent viscosity of the melt, any glass composition can be prepared with equal facility. Separation of the mixing and melting operations permits novel glass fabrication methods to be employed

  20. Advanced Testing Method for Ground Thermal Conductivity

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xiaobing [ORNL; Clemenzi, Rick [Geothermal Design Center Inc.; Liu, Su [University of Tennessee (UT)

    2017-04-01

    A new method is developed that can quickly and more accurately determine the effective ground thermal conductivity (GTC) based on thermal response test (TRT) results. Ground thermal conductivity is an important parameter for sizing ground heat exchangers (GHEXs) used by geothermal heat pump systems. The conventional GTC test method usually requires a TRT for 48 hours with a very stable electric power supply throughout the entire test. In contrast, the new method reduces the required test time by 40%–60% or more, and it can determine GTC even with an unstable or intermittent power supply. Consequently, it can significantly reduce the cost of GTC testing and increase its use, which will enable optimal design of geothermal heat pump systems. Further, this new method provides more information about the thermal properties of the GHEX and the ground than previous techniques. It can verify the installation quality of GHEXs and has the potential, if developed, to characterize the heterogeneous thermal properties of the ground formation surrounding the GHEXs.
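For context, the conventional TRT evaluation that this record improves upon can be sketched with the standard infinite-line-source approximation: at late times the mean fluid temperature grows linearly in ln(t), and the conductivity follows from the slope. The heat rate, temperatures, and durations below are synthetic, and this sketch is not the new method described in the record.

```python
import math
import numpy as np

q = 50.0      # heat injection rate per borehole length, W/m (assumed)
k_true = 2.5  # conductivity used to generate the synthetic data, W/(m K)

# idealized late-time response: T = a + slope * ln(t), slope = q / (4*pi*k)
t = np.linspace(10 * 3600, 48 * 3600, 100)   # 10 h .. 48 h, in seconds
slope_true = q / (4.0 * math.pi * k_true)
T = 12.0 + slope_true * np.log(t)            # mean fluid temperature, deg C

# fit T against ln(t) and invert the slope for the conductivity estimate
slope, intercept = np.polyfit(np.log(t), T, 1)
k_est = q / (4.0 * math.pi * slope)
print(round(k_est, 3))  # recovers 2.5 on this noiseless data
```

The record's point is that this classical fit needs roughly 48 hours of stable heat input; the new method extracts the same effective conductivity from shorter or power-interrupted tests.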

  1. Advance of core design method for ATR

    International Nuclear Information System (INIS)

    Maeda, Seiichirou; Ihara, Toshiteru; Iijima, Takashi; Seino, Hideaki; Kobayashi, Tetsurou; Takeuchi, Michio; Sugawara, Satoru; Matsumoto, Mitsuo.

    1995-01-01

    Core characteristics of the ATR demonstration plant have been revised, such as increased fuel burnup and channel power, achieved by changing the number of fuel rods per fuel assembly from 28 to 36. Research and development concerning the core design method for the ATR have been continued. The calculational errors of the core analysis code have been evaluated using the operational data of FUGEN and the full-scale simulated test results from DCA (Deuterium Critical Assembly) and HTL (Heat Transfer Loop) at the O-arai engineering center. It is confirmed that the calculational error of the power distribution is smaller than the design value for the ATR demonstration plant. A critical heat flux correlation curve for the 36-fuel-rod cluster has been developed, and the probability evaluation method based on this curve, which is more rational for evaluating fuel dryout, has been adopted. (author)

  2. An advanced method of heterogeneous reactor theory

    International Nuclear Information System (INIS)

    Kochurov, B.P.

    1994-08-01

    Recent approaches to heterogeneous reactor theory for numerical applications were presented in the course of 8 lectures given at JAERI. The limitations of the initial theory, known after the First Conference on Peaceful Uses of Atomic Energy held in Geneva in 1955 as the Galanine-Feinberg heterogeneous theory (matrix form of equations, lack of a consistent theory of heterogeneous parameters for the reactor cell), were overcome by a transformation of the heterogeneous reactor equations to a difference form and by the development of a consistent theory for the characteristics of a reactor cell based on detailed space-energy calculations. General few-group (G is the number of groups) heterogeneous reactor equations in the dipole approximation are formulated, with the extension of the two-dimensional problem to three dimensions by a finite Fourier expansion of the axial dependence of the neutron fluxes. A transformation of the initial matrix reactor equations to a difference form is presented. The methods for calculating the heterogeneous reactor cell characteristics, giving the relation between the vector-flux and vector-current on the cell boundary, are based on a set of detailed space-energy neutron flux distribution calculations with zero current across the cell boundary and G calculations with linearly independent currents across the cell boundary. The equations for the reaction rate matrices are formulated. Specific methods were developed for the description of neutron migration in the axial and radial directions and for the treatment of numerous high-energy resonance levels. On the basis of these approaches, the theory, methods and computer codes were developed for 3D space-time reactor problems, including the simulation of slow processes with fuel burn-up, control rod movements and Xe poisoning, and of fast transients depending on prompt and delayed neutrons. As a result, reactors with several thousand channels having a non-uniform axial structure can feasibly be treated. (author)

  3. Advances on geometric flux optical design method

    Science.gov (United States)

    García-Botella, Ángel; Fernández-Balbuena, Antonio Álvarez; Vázquez, Daniel

    2017-09-01

    Nonimaging optics is focused on the study of methods to design concentrator and illuminator systems. It can be included in the area of photometry and radiometry, and it is governed by the laws of geometrical optics. The field vector method, which starts with the definition of the irradiance vector E, is one of the techniques used in nonimaging optics. Known as the "geometrical flux vector" method, it has provided ideal designs. The main property of this model is its ability to estimate how radiant energy is transferred by the optical system, from the concepts of field line, flux tube and pseudopotential surface, overcoming traditional raytrace methods. Nevertheless, this model has been developed only at an academic level, where the characteristic optical parameters are ideal rather than real and the studied geometries are simple. The main objective of the present paper is the application of the vector field method to the analysis and design of real concentration and illumination systems. We propose the development of a calculation tool for optical simulation by vector field, using algorithms based on Fermat's principle, as an alternative to traditional raytrace simulation tools based on the laws of reflection and refraction. This new tool provides, first, the traditional simulation results (efficiency, illuminance/irradiance calculations, angular distribution of light) with lower computation time: the photometric information requires only a few tens of field lines, compared with the millions of rays needed today. On the other hand, the tool provides new information, such as vector field maps produced by the system, composed of field lines and quasipotential surfaces. We show our first results with the vector field simulation tool.

  4. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders

    2015-01-01

    This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities...

  5. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders

    2014-01-01

    This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities...

  6. Polarization control method for UV writing of advanced bragg gratings

    DEFF Research Database (Denmark)

    Deyerl, Hans-Jürgen; Plougmann, Nikolai; Jensen, Jesper Bo Damm

    2002-01-01

    We report the application of the polarization control method for the UV writing of advanced fiber Bragg gratings (FBG). We demonstrate the strength of the new method for different apodization profiles, including the Sinc-profile and two designs for dispersion-free square filters. The method has...

  7. Advanced methods of solid oxide fuel cell modeling

    CERN Document Server

    Milewski, Jaroslaw; Santarelli, Massimo; Leone, Pierluigi

    2011-01-01

    Fuel cells are widely regarded as the future of the power and transportation industries. Intensive research in this area now requires new methods of fuel cell operation modeling and cell design. Typical mathematical models are based on the physical process description of fuel cells and require a detailed knowledge of the microscopic properties that govern both chemical and electrochemical reactions. "Advanced Methods of Solid Oxide Fuel Cell Modeling" proposes the alternative methodology of generalized artificial neural network (ANN) solid oxide fuel cell (SOFC) modeling. "Advanced Methods

  8. Strategy to Promote Active Learning of an Advanced Research Method

    Science.gov (United States)

    McDermott, Hilary J.; Dovey, Terence M.

    2013-01-01

    Research methods courses aim to equip students with the knowledge and skills required for research yet seldom include practical aspects of assessment. This reflective practitioner report describes and evaluates an innovative approach to teaching and assessing advanced qualitative research methods to final-year psychology undergraduate students. An…

  9. Methods for studying fuel management in advanced gas cooled reactors

    International Nuclear Information System (INIS)

    Buckler, A.N.; Griggs, C.F.; Tyror, J.G.

    1971-07-01

    The methods used for studying fuel and absorber management problems in AGRs are described. The basis of the method is the use of ARGOSY lattice data in reactor calculations performed at successive time steps. These reactor calculations may be quite crude, but for advanced design calculations a detailed channel-by-channel representation of the whole core is required. The main emphasis of the paper is on describing such an advanced approach - the ODYSSEUS-6 code. This code evaluates reactor power distributions as a function of time and uses the information to select refuelling moves and determine controller positions. (author)

  10. Method and Tools for Development of Advanced Instructional Systems

    NARCIS (Netherlands)

    Arend, J. van der; Riemersma, J.B.J.

    1994-01-01

    The application of advanced instructional systems (AISs), like computer-based training systems, intelligent tutoring systems and training simulators, is widely spread within the Royal Netherlands Army. As a consequence there is a growing interest in methods and tools to develop effective and

  11. METHODS ADVANCEMENT FOR MILK ANALYSIS: THE MAMA STUDY

    Science.gov (United States)

    The Methods Advancement for Milk Analysis (MAMA) study was designed by US EPA and CDC investigators to provide data to support the technological and study design needs of the proposed National Children's Study (NCS). The NCS is a multi-Agency-sponsored study, authorized under the...

  12. Advances in the Analytical Methods for Determining the Antioxidant ...

    African Journals Online (AJOL)

    Advances in the Analytical Methods for Determining the Antioxidant Properties of Honey: A Review. M Moniruzzaman, MI Khalil, SA Sulaiman, SH Gan. Abstract. Free radicals and reactive oxygen species (ROS) have been implicated in contributing to the processes of aging and disease. In an effort to combat free radical ...

  13. Advanced Measuring (Instrumentation) Methods for Nuclear Installations: A Review

    Directory of Open Access Journals (Sweden)

    Wang Qiu-kuan

    2012-01-01

    Full Text Available Nuclear technology has been widely used around the world. Research on measurement in nuclear installations involves many aspects, such as nuclear reactors, the nuclear fuel cycle, safety and security, nuclear accident after-action analysis, and environmental applications. In recent decades, many advanced measuring devices and techniques have been widely applied in nuclear installations. This paper mainly introduces the development of measuring (instrumentation) methods for nuclear installations and the applications of these instruments and methods.

  14. Higher geometry an introduction to advanced methods in analytic geometry

    CERN Document Server

    Woods, Frederick S

    2005-01-01

    For students of mathematics with a sound background in analytic geometry and some knowledge of determinants, this volume has long been among the best available expositions of advanced work on projective and algebraic geometry. Developed from Professor Woods' lectures at the Massachusetts Institute of Technology, it bridges the gap between intermediate studies in the field and highly specialized works.With exceptional thoroughness, it presents the most important general concepts and methods of advanced algebraic geometry (as distinguished from differential geometry). It offers a thorough study

  15. An advanced probabilistic structural analysis method for implicit performance functions

    Science.gov (United States)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
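For context, the mean-based, second-moment approximation that the AMV method generalizes can be sketched as a first-order linearization of the performance function at the mean of the inputs. The performance function and input statistics below are invented for illustration; the AMV correction itself (re-evaluating the function at the most probable points) is not shown.

```python
import numpy as np

# Illustrative "performance function" (stand-in for an implicit FE response)
def g(x1, x2):
    return x1 ** 2 + 2.0 * x2

mu = np.array([3.0, 1.0])      # input means (assumed)
sigma = np.array([0.1, 0.2])   # input standard deviations (assumed)

# first-order (mean value) moments: gradient of g evaluated at the mean
grad = np.array([2.0 * mu[0], 2.0])
mv_mean = g(mu[0], mu[1])
mv_std = float(np.sqrt(np.sum((grad * sigma) ** 2)))

# Monte Carlo check of the linearized moments
rng = np.random.default_rng(1)
x = rng.normal(mu, sigma, size=(200_000, 2))
mc = g(x[:, 0], x[:, 1])
print(abs(mc.mean() - mv_mean) < 0.05, abs(mc.std() - mv_std) < 0.05)
```

For mildly nonlinear g and small input variability the two first-order moments are close to the sampled ones; the paper's point is that AMV goes further and approximates the full response distribution, not just these two moments.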

  16. Advanced non-destructive methods for an efficient service performance

    International Nuclear Information System (INIS)

    Rauschenbach, H.; Clossen-von Lanken Schulz, M.; Oberlin, R.

    2015-01-01

    Due to the power generation industry's desire to decrease outage time and extend inspection intervals for highly stressed turbine parts, advanced and reliable non-destructive methods were developed by the Siemens non-destructive testing laboratory. Effective outage performance requires the optimized planning of all outage activities as well as modern non-destructive examination methods, in order to examine the highly stressed components (turbine rotor, casings, valves, generator rotor) reliably and within short periods of access. This paper describes the experience of Siemens Energy with an ultrasonic phased array inspection technique for the inspection of radial-entry pinned turbine blade roots. The developed inspection technique allows the ultrasonic inspection of steam turbine blades without blade removal. Furthermore, advanced non-destructive examination methods for joint bolts are described, which offer a significant reduction of outage duration in comparison to conventional inspection techniques. (authors)

  17. Advanced airflow distribution methods for reducing exposure of indoor pollution

    DEFF Research Database (Denmark)

    Cao, Guangyu; Nielsen, Peter Vilhelm; Melikov, Arsen

    2017-01-01

    The adverse effect of various indoor pollutants on occupants' health has been recognized. In public spaces flu viruses may spread from person to person by airflow generated by various traditional ventilation methods, like natural ventilation and mixing ventilation (MV). Personalized ventilation (PV) supplies clean air close to the occupant and directly into the breathing zone. Studies show that it improves the inhaled air quality and reduces the risk of airborne cross-infection in comparison with total volume (TV) ventilation. However, it is still challenging for PV and other advanced air distribution methods to reduce the exposure to gaseous and particulate pollutants under disturbed conditions and to ensure thermal comfort at the same time. The objective of this study is to analyse the performance of different advanced airflow distribution methods for protection of occupants from exposure to indoor...

  18. Advanced airflow distribution methods for reducing exposure of indoor pollution

    DEFF Research Database (Denmark)

    Cao, Guangyu; Nielsen, Peter Vilhelm; Melikov, Arsen Krikor

    The adverse effect of various indoor pollutants on occupants' health has been recognized. In public spaces flu viruses may spread from person to person by airflow generated by various traditional ventilation methods, like natural ventilation and mixing ventilation (MV). Personalized ventilation (PV) supplies clean air close to the occupant and directly into the breathing zone. Studies show that it improves the inhaled air quality and reduces the risk of airborne cross-infection in comparison with total volume (TV) ventilation. However, it is still challenging for PV and other advanced air distribution methods to reduce the exposure to gaseous and particulate pollutants under disturbed conditions and to ensure thermal comfort at the same time. The objective of this study is to analyse the performance of different advanced air distribution...

  19. Recent advances in radial basis function collocation methods

    CERN Document Server

    Chen, Wen; Chen, C S

    2014-01-01

    This book surveys the latest advances in radial basis function (RBF) meshless collocation methods, with emphasis on recent novel kernel RBFs and new numerical schemes for solving partial differential equations. The RBF collocation methods are inherently free of integration and mesh, and avoid the tedious mesh generation involved in standard finite element and boundary element methods. This book focuses primarily on the numerical algorithms and engineering applications, and highlights a large class of novel boundary-type RBF meshless collocation methods. These methods have shown a clear edge over the traditional numerical techniques, especially for problems involving infinite domains, moving boundaries, thin-walled structures, and inverse problems. Due to the rapid development in RBF meshless collocation methods, there is a need to summarize all these new materials so that they are available to scientists, engineers, and graduate students who are interested in applying these newly developed methods for solving real world’s ...
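A minimal illustration of the meshfree collocation idea, far simpler than the PDE solvers the book covers: interpolate a smooth function with multiquadric RBFs centered at scattered nodes, with no mesh or integration involved. The node count and shape parameter below are arbitrary illustrative choices.

```python
import numpy as np

# multiquadric radial basis function; eps is the shape parameter (assumed)
def phi(r, eps=3.0):
    return np.sqrt(1.0 + (eps * r) ** 2)

nodes = np.linspace(0.0, 1.0, 15)          # collocation nodes
f = np.cos(2.0 * np.pi * nodes)            # target function sampled at nodes

# collocation system: A[i, j] = phi(|x_i - x_j|), solve A w = f
A = phi(np.abs(nodes[:, None] - nodes[None, :]))
w = np.linalg.solve(A, f)

def interpolant(x):
    # evaluate sum_j w_j * phi(|x - x_j|)
    return phi(np.abs(x - nodes)) @ w

x_test = 0.37  # off-node evaluation point
err = abs(interpolant(x_test) - np.cos(2.0 * np.pi * x_test))
print(err < 1e-3)
```

Boundary-type collocation methods of the kind the book highlights follow the same linear-system pattern, but place the collocation conditions on the boundary operator of a PDE rather than on function values.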

  20. Advanced Markov chain Monte Carlo methods learning from past samples

    CERN Document Server

    Liang, Faming; Carrol, Raymond J

    2010-01-01

    This book provides comprehensive coverage of simulation of complex systems using Monte Carlo methods. Developing algorithms that are immune to the local trap problem has long been considered the most important topic in MCMC research. Various advanced MCMC algorithms which address this problem have been developed, including the modified Gibbs sampler, methods based on auxiliary variables, and methods making use of past samples. The focus of this book is on the algorithms that make use of past samples. This book includes the multicanonical algorithm, dynamic weighting, dynamically weight
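For reference, the baseline that these advanced algorithms improve upon is the plain random-walk Metropolis sampler sketched below; the target and proposal are illustrative choices, and none of the past-sample reuse the book describes is shown.

```python
import numpy as np

# Random-walk Metropolis sampling from a standard normal target.
# This simple chain is exactly the kind of sampler that gets trapped in
# local modes on harder targets, motivating the book's advanced methods.

def log_target(x):
    return -0.5 * x * x  # log density of N(0, 1), up to a constant

rng = np.random.default_rng(42)
x, samples = 0.0, []
for _ in range(50_000):
    prop = x + rng.normal(0.0, 1.0)  # symmetric random-walk proposal
    # accept with probability min(1, target(prop) / target(x))
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    samples.append(x)

samples = np.array(samples[5_000:])  # discard burn-in
print(abs(samples.mean()) < 0.1, abs(samples.std() - 1.0) < 0.1)
```

On this unimodal target the chain mixes easily; the book's dynamic weighting and multicanonical schemes address multimodal targets where such a chain would rarely cross between modes.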

  1. New or improved computational methods and advanced reactor design

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Takeda, Toshikazu; Ushio, Tadashi

    1997-01-01

    Nuclear computational methods have been studied continuously to date as a fundamental technology supporting nuclear development. At present, research on computational methods based on new theory, as well as on calculational methods once thought impractical, is actively continued, seeking new developments made possible by the remarkable improvement in computer performance. In Japan, many light water reactors are now in operation, new computational methods are being introduced for nuclear design, and much effort is concentrated on further improving economics and safety. In this paper, some new research results on nuclear computational methods and their application to reactor design are described, to introduce recent trends in reactor nuclear design: 1) advancement of computational methods, 2) core design and management of light water reactors, and 3) nuclear design of fast reactors. (G.K.)

  2. NATO Advanced Study Institute on Methods in Computational Molecular Physics

    CERN Document Server

    Diercksen, Geerd

    1992-01-01

    This volume records the lectures given at a NATO Advanced Study Institute on Methods in Computational Molecular Physics held in Bad Windsheim, Germany, from 22nd July until 2nd August, 1991. This NATO Advanced Study Institute sought to bridge the quite considerable gap which exists between the presentation of molecular electronic structure theory found in contemporary monographs such as, for example, McWeeny's Methods of Molecular Quantum Mechanics (Academic Press, London, 1989) or Wilson's Electron correlation in molecules (Clarendon Press, Oxford, 1984) and the realization of the sophisticated computational algorithms required for their practical application. It sought to underline the relation between the electronic structure problem and the study of nuclear motion. Software for performing molecular electronic structure calculations is now being applied in an increasingly wide range of fields in both the academic and the commercial sectors. Numerous applications are reported in areas as diverse as catalysi...

  3. Advanced symbolic analysis for VLSI systems methods and applications

    CERN Document Server

    Shi, Guoyong; Tlelo Cuautle, Esteban

    2014-01-01

    This book provides comprehensive coverage of the recent advances in symbolic analysis techniques for design automation of nanometer VLSI systems. The presentation is organized in parts of fundamentals, basic implementation methods and applications for VLSI design. Topics emphasized include statistical timing and crosstalk analysis, statistical and parallel analysis, performance bound analysis and behavioral modeling for analog integrated circuits. Among the recent advances, the Binary Decision Diagram (BDD) based approaches are studied in depth. The BDD-based hierarchical symbolic analysis approaches have essentially broken the analog circuit size barrier. In particular, this book • Provides an overview of classical symbolic analysis methods and a comprehensive presentation on the modern BDD-based symbolic analysis techniques; • Describes detailed implementation strategies for BDD-based algorithms, including the principles of zero-suppression, variable ordering and canonical reduction; • Int...

  4. Advanced soft computing diagnosis method for tumour grading.

    Science.gov (United States)

    Papageorgiou, E I; Spyridonos, P P; Stylios, C D; Ravazoula, P; Groumpos, P P; Nikiforidis, G N

    2006-01-01

    To develop an advanced diagnostic method for urinary bladder tumour grading. A novel soft computing modelling methodology based on the augmentation of fuzzy cognitive maps (FCMs) with the unsupervised active Hebbian learning (AHL) algorithm is applied. One hundred and twenty-eight cases of urinary bladder cancer were retrieved from the archives of the Department of Histopathology, University Hospital of Patras, Greece. All tumours had been characterized according to the classical World Health Organization (WHO) grading system. To design the FCM model for tumour grading, three expert histopathologists defined the main histopathological features (concepts) and their impact on grade characterization. The resulting FCM model consisted of nine concepts. Eight concepts represented the main histopathological features for tumour grading. The ninth concept represented the tumour grade. To increase the classification ability of the FCM model, the AHL algorithm was applied to adjust the weights of the FCM. The proposed FCM grading model achieved a classification accuracy of 72.5%, 74.42% and 95.55% for tumours of grades I, II and III, respectively. An advanced computerized method to support tumour grade diagnosis decisions was proposed and developed. The novelty of the method lies in employing the soft computing method of FCMs to represent specialized knowledge on histopathology and in augmenting the FCMs' ability using an unsupervised learning algorithm, the AHL. The proposed method performs with reasonably high accuracy compared to other existing methods and at the same time meets the physicians' requirements for transparency and explicability.
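The FCM inference step behind such a model (concept activations updated through a weighted influence matrix and squashed by a sigmoid) can be sketched as follows. The nine-concept layout mirrors the abstract, but the weights here are random placeholders, not the experts' values from the study.

```python
import numpy as np

# Fuzzy cognitive map inference sketch: each concept activation is updated
# from the weighted influence of the other concepts, then squashed into [0, 1].

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_step(A, W):
    # A: concept activations (n,); W[j, i] = influence of concept j on concept i
    return sigmoid(A + A @ W)

rng = np.random.default_rng(0)
n = 9                          # eight histopathological features + one grade concept
W = rng.uniform(-0.5, 0.5, (n, n))
np.fill_diagonal(W, 0.0)       # no self-influence

A = rng.uniform(0, 1, n)
for _ in range(20):            # iterate toward a fixed point
    A = fcm_step(A, W)
print(A[-1])                   # last concept read out as the tumour-grade score
```

In the paper's method the AHL algorithm would additionally adjust `W` between iterations with a Hebbian-style rule; that step is omitted here.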

  5. Digital spectral analysis parametric, non-parametric and advanced methods

    CERN Document Server

    Castanié, Francis

    2013-01-01

    Digital Spectral Analysis provides a single source that offers complete coverage of the spectral analysis domain. This self-contained work includes details on advanced topics that are usually presented in scattered sources throughout the literature.The theoretical principles necessary for the understanding of spectral analysis are discussed in the first four chapters: fundamentals, digital signal processing, estimation in spectral analysis, and time-series models.An entire chapter is devoted to the non-parametric methods most widely used in industry.High resolution methods a
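The parametric/non-parametric split the book is organized around can be illustrated on a toy signal: a periodogram (non-parametric) next to a Yule-Walker AR spectrum (parametric). This is a generic sketch, not an example from the book; the signal, model order, and noise level are invented.

```python
import numpy as np

# Noisy sinusoid at normalized frequency f0 = 0.12
rng = np.random.default_rng(1)
n, f0 = 512, 0.12
x = np.sin(2 * np.pi * f0 * np.arange(n)) + 0.5 * rng.standard_normal(n)

# Non-parametric: the periodogram
freqs = np.fft.rfftfreq(n)
pxx = np.abs(np.fft.rfft(x)) ** 2 / n

# Parametric: AR(p) coefficients from the Yule-Walker equations
p = 8
r = np.correlate(x, x, "full")[n - 1:] / n          # biased autocorrelation
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a = np.linalg.solve(R, r[1:p + 1])
sigma2 = r[0] - a @ r[1:p + 1]                      # driving-noise variance

# AR power spectral density: S(f) = sigma2 / |1 - sum_k a_k e^{-i 2 pi f k}|^2
e = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, p + 1)))
ar_psd = sigma2 / np.abs(1 - e @ a) ** 2

print(freqs[np.argmax(pxx)], freqs[np.argmax(ar_psd)])  # both peaks near f0
```

The AR spectrum is smooth with a sharp peak from few parameters, while the periodogram is unbiased but noisy; that trade-off is the organizing theme of the book.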

  6. Advanced method of double contrast examination of the stomach

    International Nuclear Information System (INIS)

    Vlasov, P.V.; Yakimenko, V.F.

    1981-01-01

    An advanced method of double contrast examination of the stomach using highly concentrated barium suspension is described. It is shown that the concentration of the barium suspension must be not less than 200% mass/volume to obtain a sharp image of the mucosal microrelief. Six standard positions are recommended for double contrast examination of all stomach walls. 200 patients with different digestive system diseases were examined with the developed method. A sharp image of the mucosal microrelief was obtained in 70% of cases. [ru]

  7. Advances in Statistical Methods for Substance Abuse Prevention Research

    Science.gov (United States)

    MacKinnon, David P.; Lockwood, Chondra M.

    2010-01-01

    The paper describes advances in statistical methods for prevention research with a particular focus on substance abuse prevention. Standard analysis methods are extended to the typical research designs and characteristics of the data collected in prevention research. Prevention research often includes longitudinal measurement, clustering of data in units such as schools or clinics, missing data, and categorical as well as continuous outcome variables. Statistical methods to handle these features of prevention data are outlined. Developments in mediation, moderation, and implementation analysis allow for the extraction of more detailed information from a prevention study. Advancements in the interpretation of prevention research results include more widespread calculation of effect size and statistical power, the use of confidence intervals as well as hypothesis testing, detailed causal analysis of research findings, and meta-analysis. The increased availability of statistical software has contributed greatly to the use of new methods in prevention research. It is likely that the Internet will continue to stimulate the development and application of new methods. PMID:12940467
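The product-of-coefficients mediation analysis central to this literature can be sketched in a few lines on simulated data; the variable names and effect sizes below are invented for illustration.

```python
import numpy as np

# Mediation sketch: the effect of X on Y transmitted through mediator M is a*b,
# where a comes from M = a*X + e1 and b from Y = c'*X + b*M + e2.

rng = np.random.default_rng(2)
n = 1000
x = rng.standard_normal(n)                        # e.g. program exposure
m = 0.5 * x + rng.standard_normal(n)              # mediator (true a = 0.5)
y = 0.3 * m + 0.1 * x + rng.standard_normal(n)    # outcome (true b = 0.3)

# Regress M on X to get a
X1 = np.column_stack([np.ones(n), x])
a = np.linalg.lstsq(X1, m, rcond=None)[0][1]

# Regress Y on X and M to get b (M's coefficient)
X2 = np.column_stack([np.ones(n), x, m])
b = np.linalg.lstsq(X2, y, rcond=None)[0][2]

print("mediated effect a*b ~", a * b)             # close to 0.5 * 0.3 = 0.15
```

In practice one would also compute a standard error or confidence interval for a*b (e.g. by the multivariate delta method or bootstrap), which is where much of the methodological work reviewed in the paper lies.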

  8. NATO Advanced Research Workshop on Vectorization of Advanced Methods for Molecular Electronic Structure

    CERN Document Server

    1984-01-01

    That there have been remarkable advances in the field of molecular electronic structure during the last decade is clear not only to those working in the field but also to anyone else who has used quantum chemical results to guide their own investiga­ tions. The progress in calculating the electronic structures of molecules has occurred through the truly ingenious theoretical and methodological developments that have made computationally tractable the underlying physics of electron distributions around a collection of nuclei. At the same time there has been consider­ able benefit from the great advances in computer technology. The growing sophistication, declining costs and increasing accessibi­ lity of computers have let theorists apply their methods to prob­ lems in virtually all areas of molecular science. Consequently, each year witnesses calculations on larger molecules than in the year before and calculations with greater accuracy and more com­ plete information on molecular properties. We can surel...

  9. Advanced codes and methods supporting improved fuel cycle economics - 5493

    International Nuclear Information System (INIS)

    Curca-Tivig, F.; Maupin, K.; Thareau, S.

    2015-01-01

    AREVA's code development program was practically completed in 2014. The basic codes supporting a new generation of advanced methods are the following. GALILEO is a state-of-the-art fuel rod performance code for PWR and BWR applications. Development is completed; implementation has started in France and the U.S.A. ARCADIA-1 is a state-of-the-art neutronics/thermal-hydraulics/thermal-mechanics code system for PWR applications. Development is completed; implementation has started in Europe and in the U.S.A. The system thermal-hydraulic codes S-RELAP5 and CATHARE-2 are not really new but are still state-of-the-art in the domain. S-RELAP5 was completely restructured and re-coded such that its life cycle is extended by further decades. CATHARE-2 will be replaced in the future by the new CATHARE-3. The new AREVA codes and methods are largely based on first-principles modeling with an extremely broad international verification and validation data base. This enables AREVA and its customers to access more predictable licensing processes in a fast-evolving regulatory environment (new safety criteria, requests for enlarged qualification databases, statistical applications, uncertainty propagation...). In this context, the advanced codes and methods and the associated verification and validation represent the key to avoiding penalties on products, on operational limits, or on methodologies themselves

  10. Advances in product family and product platform design methods & applications

    CERN Document Server

    Jiao, Jianxin; Siddique, Zahed; Hölttä-Otto, Katja

    2014-01-01

    Advances in Product Family and Product Platform Design: Methods & Applications highlights recent advances that have been made to support product family and product platform design and successful applications in industry. This book provides not only motivation for product family and product platform design—the “why” and “when” of platforming—but also methods and tools to support the design and development of families of products based on shared platforms—the “what”, “how”, and “where” of platforming. It begins with an overview of recent product family design research to introduce readers to the breadth of the topic and progresses to more detailed topics and design theory to help designers, engineers, and project managers plan, architect, and implement platform-based product development strategies in their companies. This book also: Presents state-of-the-art methods and tools for product family and product platform design Adopts an integrated, systems view on product family and pro...

  11. Combinatorial methods for advanced materials research and development

    Energy Technology Data Exchange (ETDEWEB)

    Cremer, R.; Dondorf, S.; Hauck, M.; Horbach, D.; Kaiser, M.; Krysta, S.; Kyrylov, O.; Muenstermann, E.; Philipps, M.; Reichert, K.; Strauch, G. [Rheinisch-Westfaelische Technische Hochschule Aachen (Germany). Lehrstuhl fuer Theoretische Huettenkunde

    2001-10-01

    The applicability of combinatorial methods in developing advanced materials is illustrated presenting four examples for the deposition and characterization of one- and two-dimensionally laterally graded coatings, which were deposited by means of (reactive) magnetron sputtering and plasma-enhanced chemical vapor deposition. To emphasize the advantages of combinatorial approaches, metastable hard coatings like (Ti,Al)N and (Ti,Al,Hf)N respectively, as well as Ge-Sb-Te based films for rewritable optical data storage were investigated with respect to the relations between structure, composition, and the desired materials properties. (orig.)

  12. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%) but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream
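The partial-pooling behaviour at the heart of such hierarchical models can be illustrated with a toy normal-normal example: regional means are shrunk toward a grand mean, with shrinkage set by the between-region and within-region variances. The SPARROW model itself is far richer; every number below is simulated for illustration.

```python
import numpy as np

# Toy hierarchical (partial-pooling) estimate of regional mean log-streamflow.
rng = np.random.default_rng(3)
n_regions, n_sites = 18, 30
tau2 = 0.5      # between-region variance (assumed known here)
sigma2 = 2.0    # within-region (site-level) variance
mu = 1.0        # grand mean

theta = rng.normal(mu, np.sqrt(tau2), n_regions)            # true regional means
y = rng.normal(theta[:, None], np.sqrt(sigma2), (n_regions, n_sites))
ybar = y.mean(axis=1)                                       # unpooled estimates

# Posterior mean per region: precision-weighted compromise between the
# regional sample mean and the grand mean.
w = (n_sites / sigma2) / (n_sites / sigma2 + 1 / tau2)
theta_hat = w * ybar + (1 - w) * mu

# Partial pooling typically beats the unpooled site means on average.
err_pooled = np.mean((theta_hat - theta) ** 2)
err_unpooled = np.mean((ybar - theta) ** 2)
print(w, err_pooled, err_unpooled)
```

A full hierarchical Bayesian fit would also place priors on `mu`, `tau2`, and `sigma2` and integrate over them; the shrinkage weight `w` is what carries information between regions.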

  13. Advanced methods for fabrication of PHWR and LMFBR fuels

    International Nuclear Information System (INIS)

    Ganguly, C.

    1988-01-01

    For self-reliance in nuclear power, the Department of Atomic Energy (DAE), India is pursuing two specific reactor systems, namely the pressurised heavy water reactors (PHWR) and the liquid metal cooled fast breeder reactors (LMFBR). The reference fuel for PHWR is zircaloy-4 clad high density (≤ 96 per cent T.D.) natural UO2 pellet-pins. The advanced PHWR fuels are UO2-PuO2 (≤ 2 per cent), ThO2-PuO2 (≤ 4 per cent) and ThO2-233UO2 (≤ 2 per cent). Similarly, low density (≤ 85 per cent T.D.) (U,Pu)O2 pellets clad in SS 316 or D9 is the reference fuel for the first generation of prototype and commercial LMFBRs all over the world. However, (U,Pu)C and (U,Pu)N are considered as advanced fuels for LMFBRs mainly because of their shorter doubling time. The conventional method of fabrication of both high and low density oxide, carbide and nitride fuel pellets starting from UO2, PuO2 and ThO2 powders is 'powder metallurgy (P/M)'. The P/M route has, however, the disadvantage of generation and handling of fine powder particles of the fuel and the associated problem of 'radiotoxic dust hazard'. The present paper summarises the state of the art of advanced methods of fabrication of oxide, carbide and nitride fuels and highlights the author's experience with the sol-gel-microsphere-pelletisation (SGMP) route for preparation of these materials. The SGMP process uses sol-gel derived, dust-free and free-flowing microspheres of oxide, carbide or nitride for direct pelletisation and sintering. Fuel pellets of both low and high density, excellent microhomogeneity and controlled 'open' or 'closed' porosity could be fabricated via the SGMP route. (author). 5 tables, 14 figs., 15 refs

  14. The application of advanced rotor (performance) methods for design calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bussel, G.J.W. van [Delft Univ. of Technology, Inst. for Wind Energy, Delft (Netherlands)

    1997-08-01

    The calculation of loads and performance of wind turbine rotors has been a topic for research over the last century. The principles for the calculation of loads on rotor blades with a given specific geometry, as well as the development of optimal shaped rotor blades have been published in the decades that significant aircraft development took place. Nowadays advanced computer codes are used for specific problems regarding modern aircraft, and application to wind turbine rotors has also been performed occasionally. The engineers designing rotor blades for wind turbines still use methods based upon global principles developed in the beginning of the century. The question what to expect in terms of the type of methods to be applied in a design environment for the near future is addressed here. (EG) 14 refs.

  15. Methods and Systems for Advanced Spaceport Information Management

    Science.gov (United States)

    Fussell, Ronald M. (Inventor); Ely, Donald W. (Inventor); Meier, Gary M. (Inventor); Halpin, Paul C. (Inventor); Meade, Phillip T. (Inventor); Jacobson, Craig A. (Inventor); Blackwell-Thompson, Charlie (Inventor)

    2007-01-01

    Advanced spaceport information management methods and systems are disclosed. In one embodiment, a method includes coupling a test system to the payload and transmitting one or more test signals that emulate an anticipated condition from the test system to the payload. One or more responsive signals are received from the payload into the test system and are analyzed to determine whether one or more of the responsive signals comprises an anomalous signal. At least one of the steps of transmitting, receiving, analyzing and determining includes transmitting at least one of the test signals and the responsive signals via a communications link from a payload processing facility to a remotely located facility. In one particular embodiment, the communications link is an Internet link from a payload processing facility to a remotely located facility (e.g. a launch facility, university, etc.).

  16. Calculation methods for advanced concept light water reactor lattices

    International Nuclear Information System (INIS)

    Carmona, S.

    1986-01-01

    In the last few years, several advanced concepts for fuel rod lattices have been studied. Improved fuel utilization is one of the major aims in the development of new fuel rod designs and lattice modifications. By these changes, better performance in fuel economics, fuel burnup and material endurance can be achieved within the frame of well-known basic Light Water Reactor technology. Among the new concepts involved in these studies that have attracted serious attention are lattices consisting of arrays of annular rods, duplex pellet rods or tight multicells. These new designs of fuel rods and lattices present several computational problems. The treatment of resonance-shielded cross sections is a crucial point in the analyses of these advanced concepts. The purpose of this study was to assess approximation methods adequate for calculating, as accurately as possible, resonance shielding for these new lattices. Although detailed and exact computational methods for the evaluation of resonance shielding in these lattices are possible, they are quite inefficient when used in lattice codes. The computer time and memory required for this kind of computation are too large for routine use. In order to overcome these limitations and to make the analyses possible with reasonable use of computer resources, approximation methods are necessary. The usual approximation methods for the resonance energy regions used in routine lattice computer codes cannot adequately handle the evaluation of these new fuel rod lattices. The main contribution of the present work to advanced lattice concepts is the development of an equivalence principle for the calculation of resonance shielding in the annular fuel pellet zone of duplex pellets; the duplex pellet in this treatment consists of two fuel zones with the same absorber isotope in both regions.
In the transition from a single duplex rod to an infinite array of this kind of fuel rods, the similarity of the

  17. Advances in the Surface Renewal Flux Measurement Method

    Science.gov (United States)

    Shapland, T. M.; McElrone, A.; Paw U, K. T.; Snyder, R. L.

    2011-12-01

    The measurement of ecosystem-scale energy and mass fluxes between the planetary surface and the atmosphere is crucial for understanding geophysical processes. Surface renewal is a flux measurement technique based on analyzing the turbulent coherent structures that interact with the surface. It is a less expensive technique because it does not require fast-response velocity measurements, but only a fast-response scalar measurement. It is therefore also a useful tool for the study of the global cycling of trace gases. Currently, surface renewal requires calibration against another flux measurement technique, such as eddy covariance, to account for the linear bias of its measurements. We present two advances in the surface renewal theory and methodology that bring the technique closer to becoming a fully independent flux measurement method. The first advance develops the theory of turbulent coherent structure transport associated with the different scales of coherent structures. A novel method was developed for identifying the scalar change rate within structures at different scales. Our results suggest that for canopies less than one meter in height, the second smallest coherent structure scale dominates the energy and mass flux process. Using the method for resolving the scalar exchange rate of the second smallest coherent structure scale, calibration is unnecessary for surface renewal measurements over short canopies. This study forms the foundation for analysis over more complex surfaces. The second advance is a sensor frequency response correction for measuring the sensible heat flux via surface renewal. Inexpensive fine-wire thermocouples are frequently used to record high frequency temperature data in the surface renewal technique. The sensible heat flux is used in conjunction with net radiation and ground heat flux measurements to determine the latent heat flux as the energy balance residual. The robust thermocouples commonly used in field experiments
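The structure-function computation that underlies Van Atta-style surface renewal analysis can be sketched on a synthetic ramp series. The sampling rate, ramp geometry and noise level below are invented stand-ins for a real high-frequency thermocouple trace.

```python
import numpy as np

# Synthetic "ramp" temperature trace: slow rise, sharp drop, plus sensor noise.
rng = np.random.default_rng(5)
fs = 10                                   # Hz
ramp_period = 6 * fs                      # 6 s ramps -> 60 samples each
amp = 1.5                                 # ramp amplitude, K
t = np.arange(6000)
temp = amp * ((t % ramp_period) / ramp_period) + 0.05 * rng.standard_normal(t.size)

def structure_function(x, lag, order):
    d = x[lag:] - x[:-lag]
    return np.mean(d ** order)

lag = 5                                   # samples (0.5 s at 10 Hz)
s2 = structure_function(temp, lag, 2)
s3 = structure_function(temp, lag, 3)     # negative for rising ramps
s5 = structure_function(temp, lag, 5)

# Van Atta's approach: the ramp amplitude solves a^3 + p*a + q = 0 with
p = 10.0 * s2 - s5 / s3
q = 10.0 * s3
roots = np.roots([1.0, 0.0, p, q])
a_real = [r.real for r in roots if abs(r.imag) < 1e-9]
print(a_real)                             # single real root ~ ramp amplitude
```

The recovered amplitude, combined with the ramp duration, gives the scalar exchange rate from which the sensible heat flux is estimated; the calibration factor discussed in the abstract corrects this estimate against eddy covariance.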

  18. Recent advances in computational structural reliability analysis methods

    Science.gov (United States)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
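A crude Monte Carlo version of the multi-mode system reliability problem described above looks like this; the limit states and distributions are invented for illustration (faster methods such as FORM/SORM exist precisely because brute-force sampling is expensive for small failure probabilities).

```python
import numpy as np

# System failure probability for two interacting failure modes sharing a
# common random load (a series system: either mode failing fails the system).
rng = np.random.default_rng(4)
n = 200_000
load = rng.normal(10.0, 2.0, n)          # random load
strength = rng.normal(20.0, 3.0, n)      # resistance against stress failure
stiffness = rng.normal(18.0, 3.0, n)     # resistance against deflection failure

g1 = strength - load                     # stress limit state (g <= 0 is failure)
g2 = stiffness - 1.2 * load              # deflection limit state
system_fail = (g1 <= 0) | (g2 <= 0)      # series system: any mode fails
pf = system_fail.mean()
print(pf)                                # roughly a few percent here
```

The shared `load` makes the two modes correlated, so the system probability is less than the sum of the mode probabilities; capturing such correlations is one reason system-level methods are harder than single-mode ones.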

  19. Advanced methods for the study of PWR cores

    International Nuclear Information System (INIS)

    Lambert, M.; Salvatores, St.; Ferrier, A.; Pelet, J.; Nicaise, N.; Pouliquen, J.Y.; Foret, F.; Chauliac, C.; Johner, J.; Cohen, Ch.

    2003-01-01

    This document gathers the transparencies presented at the 6th technical session of the French nuclear energy society (SFEN) in October 2003. The transparencies of the annual meeting are presented in the introductory part: 1 - status of the French nuclear plant fleet: nuclear energy results, management of an exceptional climatic situation: the heat wave of summer 2003 and the power generation (J.C. Barral); 2 - status of the research on controlled thermonuclear fusion (J. Johner). Then follows the technical session about the advanced methods for the study of PWR reactor cores: 1 - the evolution approach of study methodologies (M. Lambert, J. Pelet); 2 - the point of view of the nuclear safety authority (D. Brenot); 3 - the improved decoupled methodology for the steam pipe rupture (S. Salvatores, J.Y. Pouliquen); 4 - the MIR method for the pellet-clad interaction (renovated IPG methodology) (E. Baud, C. Royere); 5 - the improved fuel management (IFM) studies for Koeberg (C. Cohen); 6 - principle of the methods of accident study implemented for the European pressurized reactor (EPR) (F. Foret, A. Ferrier); 7 - accident studies with the EPR, steam pipe rupture (N. Nicaise, S. Salvatores); 8 - the co-development platform, a new generation of software tools for the new methodologies (C. Chauliac). (J.S.)

  20. Advances in Time Estimation Methods for Molecular Data.

    Science.gov (United States)

    Kumar, Sudhir; Hedges, S Blair

    2016-04-01

    Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species.
Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data
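The first-generation strict-clock arithmetic described above fits in a few lines: calibrate a rate from one divergence of known age, then date the others from their genetic distances. The distances and the 90-Myr calibration below are made up.

```python
# Strict molecular clock: pairwise distance d = 2 * r * t, since substitutions
# accumulate independently along both lineages after a split.

def strict_clock_rate(distance, calibration_time):
    # distance: substitutions per site between two species; time in Myr
    return distance / (2.0 * calibration_time)

def divergence_time(distance, rate):
    return distance / (2.0 * rate)

# Hypothetical fossil-calibrated split: d = 0.18 subs/site at 90 Myr
rate = strict_clock_rate(0.18, 90.0)       # 0.001 subs/site/Myr per lineage
print(divergence_time(0.09, rate))         # -> 45.0 Myr under a strict clock
```

Third- and fourth-generation methods replace the single constant `rate` with branch-specific rates, which is why they need either a rate-variation model or (in the fourth generation) a model-free relative-rate framework.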

  1. Advances in Airborne and Ground Geophysical Methods for Uranium Exploration

    International Nuclear Information System (INIS)

    2013-01-01

    through the use of effective exploration techniques. Geophysical methods with the capability of mapping surface and subsurface parameters in relation to uranium deposition and accumulation are proving to be vital components of current exploration efforts around the world. There is continuous development and improvement of technical and scientific disciplines using measuring instruments and spatially referenced data processing techniques. Newly designed geophysical instruments and their applications in uranium exploration are contributing to an increased probability of successful discoveries. Dissemination of information on advances in geophysical techniques encourages new strategies and promotes new approaches toward uranium exploration. Meetings and conferences organized by the IAEA, collecting the experience of participating countries, as well as its publications and the International Nuclear Information System, play an important role in the dissemination of knowledge of all aspects of the nuclear fuel cycle. The purpose of this report is to highlight advances in airborne and ground geophysical techniques, succinctly describing modern geophysical methods and demonstrating the application of techniques through examples. The report also provides some basic concepts of radioactivity, nuclear radiation and interaction with matter.

  2. Advanced methods of quality control in nuclear fuel fabrication

    International Nuclear Information System (INIS)

    Onoufriev, Vladimir

    2004-01-01

    Under pressure of the current economic and electricity market situation, utilities implement more demanding fuel utilization schemes, including higher burnups and thermal rates, longer fuel cycles and the use of MOX fuel. Therefore, fuel vendors have recently initiated new R and D programmes aimed at improving fuel quality, design and materials to produce robust and reliable fuel. In the beginning of commercial fuel fabrication, emphasis was given to advancements in Quality Control/Quality Assurance related mainly to the product itself. During recent years, emphasis has shifted to improvements in process control and to implementation of overall Total Quality Management (TQM) programmes. In the area of fuel quality control, statistical control methods are now widely implemented, replacing 100% inspection. This evolution, some practical examples and IAEA activities are described in the paper. The paper presents major findings of the latest IAEA Technical Meetings (TMs) and training courses in the area, with emphasis on information received at the TM and training course held in 1999 and other recent publications, to provide an overview of new developments in process/quality control, their implementation and results obtained, including new approaches to QC

  3. Underwater Photosynthesis of Submerged Plants – Recent Advances and Methods

    Science.gov (United States)

    Pedersen, Ole; Colmer, Timothy D.; Sand-Jensen, Kaj

    2013-01-01

    We describe the general background and the recent advances in research on underwater photosynthesis of leaf segments, whole communities, and plant-dominated aquatic ecosystems and present contemporary methods tailor-made to quantify photosynthesis and carbon fixation under water. The majority of studies of aquatic photosynthesis have been carried out with detached leaves or thalli, and this selectiveness influences the perception of the regulation of aquatic photosynthesis. We thus recommend assessing the influence of inorganic carbon and temperature on natural aquatic communities of variable density in addition to studying detached leaves in the scenarios of rising CO2 and temperature. Moreover, a growing number of researchers are interested in the tolerance of terrestrial plants during flooding, as torrential rains sometimes result in overland floods that inundate terrestrial plants. We propose to undertake studies to elucidate the importance of leaf acclimation of terrestrial plants to facilitate gas exchange and light utilization under water, as these acclimations influence underwater photosynthesis as well as internal aeration of plant tissues during submergence. PMID:23734154

  4. Striking against bioterrorism with advanced proteomics and reference methods.

    Science.gov (United States)

    Armengaud, Jean

    2017-01-01

    The intentional use by terrorists of biological toxins as weapons has been of great concern for many years. Among the numerous toxins produced by plants, animals, algae, fungi, and bacteria, ricin is one of the most scrutinized by the media because it has already been used in biocrimes and acts of bioterrorism. Improving the analytical toolbox of national authorities to monitor these potential bioweapons all at once is of the utmost interest. MS/MS allows their absolute quantitation and exhibits advantageous sensitivity, discriminative power, multiplexing possibilities, and speed. In this issue of Proteomics, Gilquin et al. (Proteomics 2017, 17, 1600357) present a robust multiplex assay to quantify a set of eight toxins in the presence of a complex food matrix. This MS/MS reference method is based on scheduled SRM and high-quality standards consisting of isotopically labeled versions of these toxins. Their results demonstrate robust reliability based on rather loose scheduling of SRM transitions and good sensitivity for the eight toxins, lower than their oral median lethal doses. In the face of an increased threat from terrorism, relevant reference assays based on advanced proteomics and high-quality companion toxin standards are reliable and firm answers. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Application of the Advanced Distillation Curve Method to Fuels for Advanced Combustion Engine Gasolines

    KAUST Repository

    Burger, Jessica L.

    2015-07-16

    © This article not subject to U.S. Copyright. Published 2015 by the American Chemical Society. Incremental but fundamental changes are currently being made to fuel composition and combustion strategies to diversify energy feedstocks, decrease pollution, and increase engine efficiency. The increase in parameter space (by having many variables in play simultaneously) makes it difficult at best to propose strategic changes to engine and fuel design by use of conventional build-and-test methodology. To make changes in the most time- and cost-effective manner, it is imperative that new computational tools and surrogate fuels are developed. Currently, sets of fuels are being characterized by industry groups, such as the Coordinating Research Council (CRC) and other entities, so that researchers in different laboratories have access to fuels with consistent properties. In this work, six gasolines (FACE A, C, F, G, I, and J) are characterized by the advanced distillation curve (ADC) method to determine the composition and enthalpy of combustion in various distillate volume fractions. Tracking the composition and enthalpy of distillate fractions provides valuable information for determining structure-property relationships, and moreover, it provides the basis for the development of equations of state that can describe the thermodynamic properties of these complex mixtures and lead to the development of surrogate fuels composed of the major hydrocarbon classes found in target fuels.

  6. ADVANCED SEISMIC BASE ISOLATION METHODS FOR MODULAR REACTORS

    Energy Technology Data Exchange (ETDEWEB)

    E. Blanford; E. Keldrauk; M. Laufer; M. Mieler; J. Wei; B. Stojadinovic; P.F. Peterson

    2010-09-20

    Advanced technologies for structural design and construction have the potential for major impact not only on nuclear power plant construction time and cost, but also on the design process and on the safety, security and reliability of the next generation of nuclear power plants. In future Generation IV (Gen IV) reactors, structural and seismic design should be much more closely integrated with the design of nuclear and industrial safety systems, physical security systems, and international safeguards systems. Overall reliability will be increased through the use of replaceable and modular equipment, and through design that facilitates on-line monitoring, in-service inspection, maintenance, replacement, and decommissioning. Economics will also receive high design priority, through integrated engineering efforts to optimize building arrangements to minimize building heights and footprints. Finally, the licensing approach will be transformed by becoming increasingly performance-based and technology-neutral, using best-estimate simulation methods with uncertainty and margin quantification. In this context, two structural engineering technologies, seismic base isolation and modular steel-plate/concrete composite structural walls, are investigated. These technologies have major potential to (1) enable standardized reactor designs to be deployed across a wider range of sites, (2) reduce the impact of uncertainties related to site-specific seismic conditions, and (3) alleviate reactor equipment qualification requirements. For Gen IV reactors the potential for deliberate crashes of large aircraft must also be considered in design. This report concludes that base-isolated structures should be decoupled from the reactor external event exclusion system. As an example, a scoping analysis is performed for a rectangular, decoupled external event shell designed as a grillage.
This report also reviews modular construction technology, particularly steel-plate/concrete construction using

  7. ADVANCED SEISMIC BASE ISOLATION METHODS FOR MODULAR REACTORS

    International Nuclear Information System (INIS)

    Blanford, E.; Keldrauk, E.; Laufer, M.; Mieler, M.; Wei, J.; Stojadinovic, B.; Peterson, P.F.

    2010-01-01

    Advanced technologies for structural design and construction have the potential for major impact not only on nuclear power plant construction time and cost, but also on the design process and on the safety, security and reliability of the next generation of nuclear power plants. In future Generation IV (Gen IV) reactors, structural and seismic design should be much more closely integrated with the design of nuclear and industrial safety systems, physical security systems, and international safeguards systems. Overall reliability will be increased through the use of replaceable and modular equipment, and through design that facilitates on-line monitoring, in-service inspection, maintenance, replacement, and decommissioning. Economics will also receive high design priority, through integrated engineering efforts to optimize building arrangements to minimize building heights and footprints. Finally, the licensing approach will be transformed by becoming increasingly performance-based and technology-neutral, using best-estimate simulation methods with uncertainty and margin quantification. In this context, two structural engineering technologies, seismic base isolation and modular steel-plate/concrete composite structural walls, are investigated. These technologies have major potential to (1) enable standardized reactor designs to be deployed across a wider range of sites, (2) reduce the impact of uncertainties related to site-specific seismic conditions, and (3) alleviate reactor equipment qualification requirements. For Gen IV reactors the potential for deliberate crashes of large aircraft must also be considered in design. This report concludes that base-isolated structures should be decoupled from the reactor external event exclusion system. As an example, a scoping analysis is performed for a rectangular, decoupled external event shell designed as a grillage.
This report also reviews modular construction technology, particularly steel-plate/concrete construction using

  8. Advanced Methods for Direct Ink Write Additive Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Compel, W. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lewicki, J. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2018-01-24

    Lawrence Livermore National Laboratory is one of the world’s premier labs for research and development of additive manufacturing processes. Out of these many processes, direct ink write (DIW) is arguably one of the most relevant for the manufacture of architected polymeric materials, components and hardware. However, a bottleneck in this pipeline that has largely been ignored to date is the lack of advanced software implementation with respect to toolpath execution. There is as yet no convenient, automated method to design and produce complex parts that is user-friendly and enabling for the realization of next-generation designs and structures. For a material to be suitable as a DIW ink it must possess the appropriate rheological properties for this process. Most importantly, the material must exhibit shear-thinning in order to extrude through a print head, and it must show a rapid recovery of its static shear modulus; this makes it possible for the extrudate to be self-supporting upon exiting the print head. While this and other prerequisites narrow the scope of ‘off-the-shelf’ printable materials directly amenable to DIW, the process still tolerates a wide range of potential feedstock materials. These include metallic alloys, inorganic solvent-borne dispersions, polymeric melts, filler-stabilized monomer compositions, pre-elastomeric feedstocks and thermoset resins, each of which requires custom print conditions tailored to the individual ink. As such, an ink perfectly suited for DIW may be prematurely determined to be undesirable for the process if printed under the wrong conditions. Defining appropriate print conditions such as extrusion rate, layer height, and maximum bridge length is a vital first step in validating an ink’s DIW capability.
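
    The shear-thinning requirement above is commonly captured by the power-law (Ostwald-de Waele) model, in which apparent viscosity falls as shear rate rises. A minimal sketch; the parameters K and n below are illustrative assumptions, not measured DIW ink values:

```python
def power_law_viscosity(shear_rate, K=100.0, n=0.3):
    """Power-law (Ostwald-de Waele) shear-thinning model:
    apparent viscosity eta = K * shear_rate**(n - 1), with n < 1.
    K (Pa*s^n) and n are illustrative, not measured ink parameters."""
    return K * shear_rate ** (n - 1)

# Near-static conditions: high viscosity, so the extrudate self-supports.
low_rate_viscosity = power_law_viscosity(0.1)
# High shear inside the nozzle: viscosity drops sharply, enabling extrusion.
high_rate_viscosity = power_law_viscosity(100.0)
```

    Such a model makes the print-condition trade-off concrete: the same ink is stiff at rest and flowable in the print head.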

  9. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    International Nuclear Information System (INIS)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This is the third issue, presenting continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)

  10. Preliminary study of clinical staging of moderately advanced and advanced thoracic esophageal carcinoma treated by non-surgical methods

    International Nuclear Information System (INIS)

    Zhu Shuchai; Li Ren; Li Juan; Qiu Rong; Han Chun; Wan Jun

    2004-01-01

    Objective: To explore the clinical staging of moderately advanced and advanced thoracic esophageal carcinoma by evaluating prognosis, and to provide criteria for individualized treatment. Methods: The authors retrospectively analyzed 500 patients with moderately advanced and advanced thoracic esophageal carcinoma treated by radiotherapy alone. Based on the primary lesion length on barium meal X-ray films and, from CT scans, the extent of invasion and the relation of the lesion to surrounding organs, the disease was classified by both a 6-stage method and a 4-stage method. With the primary lesion divided into T1, T2a, T2b, T3a, T3b and T4, and incorporating locoregional lymph node metastasis, a 6-stage system was obtained: I, IIa, IIb, IIIa, IIIb and IV. Its results were then compared with those of the 4-stage system. Results: Among the 500 cases, there were 23 T1, 111 T2a, 157 T2b, 84 T3a, 82 T3b and 43 T4. The survival rates of these six categories showed significant differences (χ² = 63.32, 56.29, 94.29 and 83.48, respectively; all P<0.05). Conclusions: Both the 6-stage and 4-stage systems are suitable for predicting the prognosis of moderately advanced and advanced esophageal carcinoma treated by radiotherapy alone. For simplicity and convenience, the 4-stage classification is recommended. (authors)
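
    The survival comparisons above rest on χ² statistics. As a hedged illustration of how such a statistic is formed (the 2×2 contingency table below is hypothetical, not the study's data):

```python
# Hypothetical 2x2 table: survival by stage group (illustrative numbers only)
observed = [[60, 40],   # group A: survived, died
            [30, 70]]   # group B: survived, died

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
total = sum(row_totals)

# Pearson chi-square: sum over cells of (O - E)^2 / E,
# with expected count E = row_total * col_total / grand_total
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / total) ** 2
    / (row_totals[i] * col_totals[j] / total)
    for i in range(2) for j in range(2)
)
# chi2 exceeds 3.84, the df=1 critical value at P=0.05, so this
# hypothetical between-group difference would be significant.
```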

  11. Advancing multilevel thinking and methods in HRM research

    NARCIS (Netherlands)

    Renkema, Maarten; Meijerink, Jeroen Gerard; Bondarouk, Tatiana

    2016-01-01

    Purpose Despite the growing belief that multilevel research is necessary to advance HRM understanding, there remains a lack of multilevel thinking – the application of principles for multilevel theory building. The purpose of this paper is to propose a systematic approach for multilevel HRM

  12. Combination of retrograde superselective intra-arterial chemotherapy and Seldinger method in locally advanced oral cancer

    Directory of Open Access Journals (Sweden)

    Masataka Uehara

    2015-01-01

    Full Text Available. Nonsurgical strategies for locally advanced oral cancer are desirable. Superselective intra-arterial infusion with radiotherapy was utilized for this purpose; there are two types of superselective intra-arterial infusion methods: the Seldinger method and retrograde superselective intra-arterial chemotherapy (the HFT method). In one case, the HFT method was applied to a locally advanced tongue cancer, and the Seldinger method was used for additional administration of cisplatin (CDDP) to compensate for a lack of drug flow in the HFT method. In another case, the HFT method was applied to a locally advanced lower gingival cancer, and the Seldinger method was applied to metastatic lymph nodes. In both cases, additional administration of CDDP using the Seldinger method resulted in a complete response. The combination of the HFT and Seldinger methods was useful for eradicating locally advanced oral cancer because each method compensated for the defects of the other.

  13. Exploring biomolecular dynamics and interactions using advanced sampling methods

    International Nuclear Information System (INIS)

    Luitz, Manuel; Bomblies, Rainer; Ostermeir, Katja; Zacharias, Martin

    2015-01-01

    Molecular dynamics (MD) and Monte Carlo (MC) simulations have emerged as a valuable tool to investigate the statistical mechanics and kinetics of biomolecules and synthetic soft-matter materials. However, major limitations for routine applications are the accuracy of the molecular mechanics force field and the maximum simulation time that can be achieved in current simulation studies. To improve sampling, a number of advanced sampling approaches have been designed in recent years. In particular, variants of the parallel tempering replica-exchange methodology are widely used in many simulation studies. Recent methodological advancements and a discussion of specific aims and advantages are given. This includes improved free energy simulation approaches and conformational search applications. (topical review)
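
    The parallel tempering replica-exchange idea can be sketched in a few lines: Metropolis walkers run at several temperatures and periodically attempt to swap configurations using the standard exchange criterion. The one-dimensional double-well potential below is a toy stand-in for a molecular force field, not a biomolecular model:

```python
import math
import random

random.seed(0)

def energy(x):
    # Toy 1D double-well potential with minima at x = -1 and x = +1
    return (x * x - 1.0) ** 2

def metropolis_step(x, beta, step=0.5):
    # One Metropolis trial move at inverse temperature beta
    x_new = x + random.uniform(-step, step)
    log_acc = min(0.0, -beta * (energy(x_new) - energy(x)))
    return x_new if random.random() < math.exp(log_acc) else x

def parallel_tempering(betas, n_sweeps=2000):
    replicas = [1.0 for _ in betas]   # one walker per temperature
    accepted_swaps = 0
    for _ in range(n_sweeps):
        replicas = [metropolis_step(x, b) for x, b in zip(replicas, betas)]
        # Attempt to exchange a random pair of neighbouring replicas:
        # accept with probability min(1, exp((beta_i - beta_j)(E_i - E_j)))
        i = random.randrange(len(betas) - 1)
        delta = (betas[i] - betas[i + 1]) * (energy(replicas[i]) - energy(replicas[i + 1]))
        if random.random() < math.exp(min(0.0, delta)):
            replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]
            accepted_swaps += 1
    return replicas, accepted_swaps

replicas, accepted_swaps = parallel_tempering([2.0, 1.0, 0.5])
```

    The high-temperature replica crosses the barrier easily, and the exchanges let the low-temperature replica benefit from that exploration.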

  14. Experiences from introduction of peer-to-peer teaching methods in Advanced Biochemistry E2010

    DEFF Research Database (Denmark)

    Brodersen, Ditlev; Etzerodt, Michael; Rasmussen, Jan Trige

    2012-01-01

    During the autumn semester 2010, we experimented with a range of active teaching methods on the course, Advanced Biochemistry, at the Department of Molecular Biology and Genetics.......During the autumn semester 2010, we experimented with a range of active teaching methods on the course, Advanced Biochemistry, at the Department of Molecular Biology and Genetics....

  15. Advanced methods of analysis variance on scenarios of nuclear prospective

    International Nuclear Information System (INIS)

    Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.

    2011-01-01

    Traditional techniques for the propagation of variance are not very reliable when uncertainties reach relative values of around 100%; in such cases, less conventional methods are used instead, such as the Beta distribution, Fuzzy Logic and the Monte Carlo method.

  16. Advanced construction methods for new nuclear power plants

    International Nuclear Information System (INIS)

    Bilbao y Leon, Sama; Cleveland, John; Moon, Seong-Gyun; Tyobeka, Bismark

    2009-01-01

    The construction and commissioning phases of nuclear power plants have historically been longer than for conventional fossil-fuelled plants, often with a record of delays and cost overruns resulting from several factors, including legal interventions and revisions of safety regulations. Recent nuclear construction projects, however, have shown that long construction periods for nuclear power plants are no longer the norm. While several inter-related factors influence the construction time, the use of advanced construction techniques has contributed significantly to reducing the construction length of recent nuclear projects. (author)

  17. Advanced 3D inverse method for designing turbomachine blades

    Energy Technology Data Exchange (ETDEWEB)

    Dang, T. [Syracuse Univ., NY (United States)

    1995-10-01

    To meet the goal of 60% plant-cycle efficiency or better set in the ATS Program for baseload utility scale power generation, several critical technologies need to be developed. One such need is the improvement of component efficiencies. This work addresses the issue of improving the performance of turbo-machine components in gas turbines through the development of an advanced three-dimensional and viscous blade design system. This technology is needed to replace some elements in current design systems that are based on outdated technology.

  18. Advances in beam position monitoring methods at GSI synchrotrons

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Rahul; Reiter, Andreas; Forck, Peter; Kowina, Piotr; Lang, Kevin; Miedzik, Piotr [GSI, Darmstadt (Germany)

    2016-07-01

    At the GSI synchrotron facilities, capacitive beam pick-up signals for position evaluation are immediately digitized within the acquisition electronics, owing to the availability of reliable, fast and high-resolution ADCs. The signal-processing aspects are therefore dealt with fully in the digital domain. Novel digital techniques for asynchronous and synchronous (bunch-by-bunch) beam position estimation have been developed at the GSI SIS-18 and CRYRING as part of the FAIR development program. This contribution highlights these advancements and their impact on the operational ease and high availability of the BPM systems.
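
    A common baseline for position evaluation from capacitive pickups is the difference-over-sum of the two electrode signals, scaled by a monitor constant; in the digital domain this reduces to integrating the digitized bunch signals and forming the ratio. A sketch with a synthetic bunch and an illustrative (not GSI-calibrated) monitor constant:

```python
import numpy as np

def bpm_position(signal_a, signal_b, k=20.0):
    """Difference-over-sum position estimate from two pickup electrodes:
    x = k * (A - B) / (A + B), where A and B are the integrated bunch
    signals and k (in mm) is a monitor constant fixed by the pickup
    geometry. The value of k here is illustrative, not a GSI calibration."""
    a = np.abs(signal_a).sum()
    b = np.abs(signal_b).sum()
    return k * (a - b) / (a + b)

# Synthetic Gaussian bunch as sampled by a fast ADC; a displaced beam
# induces a larger signal on the nearer electrode.
t = np.linspace(-1.0, 1.0, 500)
bunch = np.exp(-t**2 / 0.02)
x = bpm_position(1.2 * bunch, 0.8 * bunch)  # → 4.0 (mm, toward electrode A)
```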

  19. Nonlinear dynamics of rotating shallow water methods and advances

    CERN Document Server

    Zeitlin, Vladimir

    2007-01-01

    The rotating shallow water (RSW) model is of wide use as a conceptual tool in geophysical fluid dynamics (GFD), because, in spite of its simplicity, it contains all essential ingredients of atmosphere and ocean dynamics at the synoptic scale, especially in its two- (or multi-) layer version. The book describes recent advances in understanding (in the framework of RSW and related models) of some fundamental GFD problems, such as existence of the slow manifold, dynamical splitting of fast (inertia-gravity waves) and slow (vortices, Rossby waves) motions, nonlinear geostrophic adjustment and wa

  20. Advances in Probes and Methods for Clinical EPR Oximetry

    Science.gov (United States)

    Hou, Huagang; Khan, Nadeem; Jarvis, Lesley A.; Chen, Eunice Y.; Williams, Benjamin B.; Kuppusamy, Periannan

    2015-01-01

    EPR oximetry, which enables reliable, accurate, and repeated measurements of the partial pressure of oxygen in tissues, provides a unique opportunity to investigate the role of oxygen in the pathogenesis and treatment of several diseases including cancer, stroke, and heart failure. Building on significant advances in the in vivo application of EPR oximetry for small animal models of disease, we are developing suitable probes and instrumentation required for use in human subjects. Our laboratory has established the feasibility of clinical EPR oximetry in cancer patients using India ink, the only material presently approved for clinical use. We now are developing the next generation of probes, which are superior in terms of both oxygen sensitivity and biocompatibility, including an excellent safety profile for use in humans. Further advances include the development of implantable oxygen sensors linked to an external coupling loop for measurements of deep-tissue oxygenation at any depth, overcoming the current limitation of 10 mm. This paper presents an overview of recent developments in our ability to make meaningful measurements of oxygen partial pressures in human subjects in clinical settings. PMID:24729217

  1. Classification methods for noise transients in advanced gravitational-wave detectors II: performance tests on Advanced LIGO data

    International Nuclear Information System (INIS)

    Powell, Jade; Heng, Ik Siong; Torres-Forné, Alejandro; Font, José A; Lynch, Ryan; Trifirò, Daniele; Cuoco, Elena; Cavaglià, Marco

    2017-01-01

    The data taken by the Advanced LIGO and Virgo gravitational-wave detectors contain short-duration noise transients that limit the significance of astrophysical detections and reduce the duty cycle of the instruments. As the advanced detectors reach sensitivity levels that allow for multiple detections of astrophysical gravitational-wave sources, it is crucial to achieve fast and accurate characterization of non-astrophysical transient noise shortly after it occurs in the detectors. Previously we presented three methods for the classification of transient noise sources: Principal Component Analysis for Transients (PCAT), Principal Component LALInference Burst (PC-LIB) and Wavelet Detection Filter with Machine Learning (WDF-ML). In this study we carry out the first performance tests of these algorithms on gravitational-wave data from the Advanced LIGO detectors. We use the data taken between the 3rd of June 2015 and the 14th of June 2015 during the 7th engineering run (ER7), and outline the improvements made to increase the performance and lower the latency of the algorithms on real data. This work provides an important test for understanding the performance of these methods on real, non-stationary data in preparation for the second advanced gravitational-wave detector observation run, planned for later this year. We show that all methods can classify transients in non-stationary data with a high level of accuracy, and show the benefits of using multiple classifiers. (paper)
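
    The principal-component approach behind a classifier such as PCAT can be sketched as follows: project waveforms onto the leading principal components of a training set, then label a new transient by its nearest class mean in that low-dimensional space. The sine-Gaussian "glitch" families below are synthetic stand-ins, not detector data, and the pipeline is a simplification of the published algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-0.5, 0.5, 256)

def transient(f0):
    # Sine-Gaussian waveform: a common toy model for short noise transients
    return np.sin(2 * np.pi * f0 * t) * np.exp(-t**2 / 0.02)

# Build a small labelled training set: two glitch families at 10 and 25 Hz
X, y = [], []
for label, f0 in enumerate([10.0, 25.0]):
    for _ in range(50):
        X.append(transient(f0) + 0.1 * rng.standard_normal(t.size))
        y.append(label)
X, y = np.array(X), np.array(y)

# PCA via SVD of the mean-centred data matrix; keep 3 components
center = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - center, full_matrices=False)
scores = (X - center) @ Vt[:3].T
class_means = np.array([scores[y == c].mean(axis=0) for c in (0, 1)])

def classify(waveform):
    # Nearest class mean in principal-component space
    s = (waveform - center) @ Vt[:3].T
    return int(np.argmin(np.linalg.norm(class_means - s, axis=1)))
```

    A real pipeline adds whitening, amplitude normalization and many more glitch classes, but the projection-and-distance core is the same.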

  2. Advanced methods of microscope control using μManager software.

    Science.gov (United States)

    Edelstein, Arthur D; Tsuchida, Mark A; Amodaj, Nenad; Pinkard, Henry; Vale, Ronald D; Stuurman, Nico

    μManager is an open-source, cross-platform desktop application for controlling a wide variety of motorized microscopes, scientific cameras, stages, illuminators, and other microscope accessories. Since its inception in 2005, μManager has grown to support a wide range of microscopy hardware and is now used by thousands of researchers around the world. The application provides a mature graphical user interface and offers open programming interfaces to facilitate plugins and scripts. Here, we present a guide to using some of the recently added advanced μManager features, including hardware synchronization, simultaneous use of multiple cameras, projection of patterned light onto a specimen, live slide mapping, imaging with multi-well plates, particle localization and tracking, and high-speed imaging.

  3. Proceedings of national workshop on advanced methods for materials characterization

    International Nuclear Information System (INIS)

    2004-10-01

    During the past two decades there has been tremendous growth in the field of materials science, and a variety of new materials with user-specific properties have been developed, such as smart shape-memory alloys, hybrid materials like glass-ceramics, cermets, met-glasses, inorganic-organic composite layered structures, mixed oxides with negative thermal expansion, and functional polymer materials. The study of nano-particles and of the materials assembled from such particles is another area of active research being pursued all over the world. Preparation and characterization of nano-sized materials is a challenge because of their dimensions and size-dependent properties. This has led to the emergence of a variety of advanced techniques, which need to be brought to the attention of researchers working in materials science, a field that requires expertise in physics, chemistry and process engineering. This volume deals with the above aspects, and papers relevant to INIS are indexed separately

  4. Recent advances in neutral particle transport methods and codes

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1996-01-01

    An overview of ORNL's three-dimensional neutral particle transport code, TORT, is presented. Special features of the code that make it invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed; they include: multitasking on Cray platforms running the UNICOS operating system; the Adjacent-cell Preconditioning acceleration scheme; and graphics codes for displaying computed quantities such as the flux. Further developments of TORT and its companion codes to enhance their present capabilities, as well as to expand their range of applications, are discussed. Speculation on the next generation of neutral particle transport codes at ORNL, especially regarding unstructured grids and high-order spatial approximations, is also offered

  5. Advanced methods of microscope control using μManager software

    Directory of Open Access Journals (Sweden)

    Arthur D Edelstein

    2014-07-01

    Full Text Available. µManager is an open-source, cross-platform desktop application for controlling a wide variety of motorized microscopes, scientific cameras, stages, illuminators, and other microscope accessories. Since its inception in 2005, µManager has grown to support a wide range of microscopy hardware and is now used by thousands of researchers around the world. The application provides a mature graphical user interface and offers open programming interfaces to facilitate plugins and scripts. Here, we present a guide to using some of the recently added advanced µManager features, including hardware synchronization, simultaneous use of multiple cameras, projection of patterned light onto a specimen, live slide mapping, imaging with multi-well plates, particle localization and tracking, and high-speed imaging.

  6. Advances in surface wave methods: Cascaded MASW-SASW

    NARCIS (Netherlands)

    Westerhoff, R.S.; Brouwer, J.H.; Meekes, J.A.C.

    2005-01-01

    The application of the MASW method in areas that show strong lateral variations in subsurface properties is limited. Traditional SASW may yield a better lateral resolution but the dispersion curves (and thus the subsurface models) obtained with the method may be poor. The joint application of MASW

  7. Adherence to Scientific Method while Advancing Exposure Science

    Science.gov (United States)

    Paul Lioy was simultaneously a staunch adherent to the scientific method and an innovator of new ways to conduct science, particularly related to human exposure. Current challenges to science and the application of the scientific method are presented as they relate the approaches...

  8. Ultrasonic and advanced methods for nondestructive testing and material characterization

    National Research Council Canada - National Science Library

    Chen, C. H

    2007-01-01

    ... and physics among others. There are at least two dozen NDT methods in use. In fact, any sensor that can examine the inside of a material nondestructively is useful for NDT. However, ultrasonic methods are still the most popular because of their capability, flexibility, and relative cost effectiveness. For this reason this book places a heavy emphasis...

  9. Advanced RF-KO slow-extraction method for the reduction of spill ripple

    CERN Document Server

    Noda, K; Shibuya, S; Uesugi, T; Muramatsu, M; Kanazawa, M; Takada, E; Yamada, S

    2002-01-01

    Two advanced RF-knockout (RF-KO) slow-extraction methods have been developed at HIMAC in order to reduce the spill ripple for accurate heavy-ion cancer therapy: the dual frequency modulation (FM) method and the separated function method. Simulations and experiments verified that the spill ripple can be considerably reduced using these advanced methods compared with the ordinary RF-KO method. The dual FM method and the separated function method yield a low spill ripple, within standard deviations of around 25% and 15%, respectively, during beam extraction of around 2 s, in good agreement with the simulation results.

  10. Method of advancing research and development of fast breeder reactors

    International Nuclear Information System (INIS)

    1988-01-01

    In the long-term plan for atomic energy development and utilization, fast breeder reactors are to be developed as the mainstay of future nuclear power generation in Japan, and as their development advances, it has been decided to aim positively at building up a plutonium utilization system using FBRs superior to the uranium utilization system using LWRs. It has also been decided that the development of FBRs requires incessant effort over a considerably long period under a proper system of cooperation between government and the private sector, and that its concrete development is hereafter to be deliberated in succession by the expert subcommittee on FBR development projects of the Atomic Energy Commission. The subcommittee was founded in May 1986 to deliberate on the long-term promotion measures for FBR development, the measures for promoting the research and development, the examination of the basic specification of a demonstration FBR, the measures for promoting international cooperation, and other important matters. As the results of this investigation, the situation surrounding the development of FBRs, the fundamentals for promoting the research and development, the subjects of the research and development and so on are reported. (Kako, I.)

  11. Advanced methods for BWR transient and stability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, A; Wehle, F; Opel, S; Velten, R [AREVA, AREVA NP, Erlangen (Germany)

    2008-07-01

    The design of advanced Boiling Water Reactor (BWR) fuel assemblies and cores is governed by the basic requirement of safe, reliable and flexible reactor operation with optimal fuel utilization. AREVA NP's comprehensive steady state and transient BWR methodology allows the designer to respond quickly and effectively to customer needs. AREVA NP uses S-RELAP5/RAMONA as the appropriate methodology for the representation of the entire plant. The 3D neutron kinetics and thermal-hydraulics code has been developed for the prediction of system, fuel and core behavior and provides additional margins for normal operation and transients. Of major importance is the extensive validation of the methodology. The validation is based on measurements at AREVA NP's test facilities, and comparison of the predictions with a great wealth of measured data gathered from BWR plants during many years of operation. Three of the main fields of interest are stability analysis, operational transients and reactivity initiated accidents (RIAs). The introduced 3D methodology for operational transients shows significant margin regarding the operational limit of critical power ratio, which has been approved by the German licensing authority. Regarding BWR stability a large number of measurements at different plants under various conditions have been performed and successfully post-calculated with RAMONA. This is the basis of reliable pre-calculations of the locations of regional and core-wide stability boundaries. (authors)

  12. Advanced methods for BWR transient and stability analysis

    International Nuclear Information System (INIS)

    Schmidt, A.; Wehle, F.; Opel, S.; Velten, R.

    2008-01-01

    The design of advanced Boiling Water Reactor (BWR) fuel assemblies and cores is governed by the basic requirement of safe, reliable and flexible reactor operation with optimal fuel utilization. AREVA NP's comprehensive steady state and transient BWR methodology allows the designer to respond quickly and effectively to customer needs. AREVA NP uses S-RELAP5/RAMONA as the appropriate methodology for the representation of the entire plant. The 3D neutron kinetics and thermal-hydraulics code has been developed for the prediction of system, fuel and core behavior and provides additional margins for normal operation and transients. Of major importance is the extensive validation of the methodology. The validation is based on measurements at AREVA NP's test facilities, and comparison of the predictions with a great wealth of measured data gathered from BWR plants during many years of operation. Three of the main fields of interest are stability analysis, operational transients and reactivity initiated accidents (RIAs). The introduced 3D methodology for operational transients shows significant margin regarding the operational limit of critical power ratio, which has been approved by the German licensing authority. Regarding BWR stability a large number of measurements at different plants under various conditions have been performed and successfully post-calculated with RAMONA. This is the basis of reliable pre-calculations of the locations of regional and core-wide stability boundaries. (authors)

  13. Advanced Steel Microstructural Classification by Deep Learning Methods.

    Science.gov (United States)

    Azimi, Seyed Majid; Britz, Dominik; Engstler, Michael; Fritz, Mario; Mücklich, Frank

    2018-02-01

    The inner structure of a material is called microstructure. It stores the genesis of a material and determines all its physical and chemical properties. While microstructural characterization is widespread and well known, microstructural classification is mostly done manually by human experts, which gives rise to uncertainties due to subjectivity. Since a microstructure can be a combination of different phases or constituents with complex substructures, its automatic classification is very challenging, and only a few prior studies exist. Prior works focused on features designed and engineered by experts and classified microstructures separately from the feature extraction step. Recently, Deep Learning methods have shown strong performance in vision applications by learning the features from data together with the classification step. In this work, we propose a Deep Learning method for microstructural classification, exemplified on certain microstructural constituents of low-carbon steel. This novel method employs pixel-wise segmentation via a Fully Convolutional Neural Network (FCNN) accompanied by a max-voting scheme. Our system achieves 93.94% classification accuracy, drastically outperforming the state-of-the-art method's 48.89%. Beyond its strong performance, this line of research offers a more robust and, above all, objective approach to the difficult task of steel quality assessment.
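    The max-voting step described in this abstract, which collapses an FCNN's pixel-wise class map into a single label per object, can be sketched as follows; the prediction map and class labels are toy values, not the paper's data:

    ```python
    from collections import Counter

    def max_vote(segmentation):
        """Collapse a pixel-wise class map into one object label.

        `segmentation` is a 2D list of per-pixel class predictions
        (e.g. from an FCNN); the object gets the most frequent class.
        """
        counts = Counter(label for row in segmentation for label in row)
        return counts.most_common(1)[0][0]

    # Toy 4x4 prediction map: class 1 dominates.
    pred = [[1, 1, 0, 1],
            [1, 2, 1, 1],
            [0, 1, 1, 1],
            [1, 1, 2, 1]]
    object_class = max_vote(pred)  # -> 1
    ```

    In the paper the voting is applied per microstructural object after segmentation; here it is shown on a single toy map.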

  14. Advances in methods and applications of reliability and safety analysis

    International Nuclear Information System (INIS)

    Fieandt, J.; Hossi, H.; Laakso, K.; Lyytikaeinen, A.; Niemelae, I.; Pulkkinen, U.; Pulli, T.

    1986-01-01

    The know-how of VTT in reliability and safety design and analysis techniques has been established over several years of analyzing reliability in the Finnish nuclear power plants Loviisa and Olkiluoto. This experience has since been applied and developed for use in the process industry, conventional power industry, automation and electronics. VTT develops and transfers methods and tools for reliability and safety analysis to the private and public sectors. The technology transfer takes place in joint development projects with potential users. Several computer-aided methods, such as RELVEC for reliability modelling and analysis, have been developed. The tools developed are today used by major Finnish companies in the fields of automation, nuclear power, shipbuilding and electronics. Development of computer-aided and other methods needed in the analysis of operating experience, reliability or safety continues in a number of research and development projects

  15. Development and application of advanced methods for electronic structure calculations

    DEFF Research Database (Denmark)

    Schmidt, Per Simmendefeldt

    This thesis relates to improvements and applications of beyond-DFT methods for electronic structure calculations that are applied in computational material science. The improvements are of both technical and principal character. The well-known GW approximation is optimized for accurate calculations of electronic excitations in two-dimensional materials by exploiting exact limits of the screened Coulomb potential. This approach reduces the computational time by an order of magnitude, enabling large scale applications. The GW method is further improved by including so-called vertex corrections. This turns... For this reason, part of this thesis relates to developing and applying a new method for constructing so-called norm-conserving PAW setups that are applicable to GW calculations by using a genetic algorithm. The effect of applying the new setups significantly affects the absolute band positions, both for bulk...

  16. Advanced quantitative magnetic nondestructive evaluation methods - Theory and experiment

    Science.gov (United States)

    Barton, J. R.; Kusenberger, F. N.; Beissner, R. E.; Matzkanin, G. A.

    1979-01-01

    The paper reviews the scale of fatigue crack phenomena in relation to the size detection capabilities of nondestructive evaluation methods. An assessment of several features of fatigue in relation to the inspection of ball and roller bearings suggested the use of magnetic methods; magnetic domain phenomena, including the interaction of domains and inclusions and the influence of stress and magnetic field on domains, are discussed. Experimental results indicate that simplified calculations can be used to predict many of their features, although predictions from analytic models based on finite element computer analysis disagree with the data in certain respects. Experimental analyses of rod-type fatigue specimens, which relate magnetic measurements to crack opening displacement, crack volume, and crack depth, should provide improved crack characterization for fracture mechanics and life prediction.

  17. Advanced methods for image registration applied to JET videos

    Energy Technology Data Exchange (ETDEWEB)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)

    2015-10-15

    Graphical abstract: - Highlights: • Development of an image registration method for JET IR and fast visible cameras. • Method based on SIFT descriptors and coherent point drift points set registration technique. • Method able to deal with extremely noisy images and very low luminosity images. • Computation time compatible with the inter-shot analysis. - Abstract: The last years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses while large ones may arise during disruptions. Some cameras show a correlation of image movement with change of magnetic field strength. For deriving unaltered information from the videos and for allowing correct interpretation, an image registration method, based on highly distinctive scale invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) points set registration technique, has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibrations correction to videos collected by the JET wide angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved able to deal with the images provided by this camera, which are frequently characterized by low contrast and a high level of blurring and noise.
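    The paper registers frames using SIFT descriptors with coherent point drift (CPD). As a drastically simplified stand-in for that pipeline, the sketch below estimates a pure translation (camera shake) from already-matched keypoint pairs by least squares; all coordinates are synthetic:

    ```python
    def estimate_translation(matches):
        """Least-squares translation aligning matched keypoint pairs.

        Simplified stand-in for the CPD registration used in the paper:
        for a pure translation, the least-squares fit is simply the mean
        displacement of the matched SIFT keypoints.
        """
        n = len(matches)
        dx = sum(x2 - x1 for (x1, _), (x2, _) in matches) / n
        dy = sum(y2 - y1 for (_, y1), (_, y2) in matches) / n
        return dx, dy

    # Matched keypoints: (reference frame, shaken frame), shifted by (+2, -3).
    matches = [((10.0, 20.0), (12.0, 17.0)),
               ((30.0, 40.0), (32.0, 37.0)),
               ((50.0, 25.0), (52.0, 22.0))]
    shift = estimate_translation(matches)  # -> (2.0, -3.0)
    ```

    The real method additionally handles outlier rejection and non-rigid motion, which a mean displacement cannot.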

  18. Advances in research on aerosol deposition simulation methods

    International Nuclear Information System (INIS)

    Liu Keyang; Li Jingsong

    2011-01-01

    A comprehensive analysis of the health effects of inhaled toxic aerosols requires exact data on airway deposition. Knowledge of the effect of inhaled drugs is essential to the optimization of aerosol drug delivery. Sophisticated analytical deposition models can be used for the computation of total, regional and generation-specific deposition efficiencies. Continuously improving computing power allows particle transport and deposition to be studied in increasingly realistic airway geometries with the help of computational fluid dynamics (CFD) simulation methods. In this article, the trends in aerosol deposition models and lung models are reviewed, along with the methods used to perform deposition simulations. (authors)

  19. Advanced evaluation method of SG TSP BEC hole blockage rate

    International Nuclear Information System (INIS)

    Izumida, Hiroyuki; Nagata, Yasuyuki; Harada, Yutaka; Murakami, Ryuji

    2003-01-01

    In spite of the control of the water chemistry of SG secondary feed-water in PWR-SG, SG TSP BEC holes, which are the flow path of secondary water, are often clogged. In the past, the trending of the BEC hole blockage rate has been conducted by evaluating original ECT signals and visual inspections. However, because the original ECT signals of deposits are diverse, it has become difficult to analyze them with the existing evaluation method based on those signals. In this regard, we have developed a secondary-side visual inspection system, which enables high-accuracy evaluation of the BEC hole blockage rate, and a new ECT signal evaluation method. (author)

  20. Advanced FDTD methods parallelization, acceleration, and engineering applications

    CERN Document Server

    Yu, Wenhua

    2011-01-01

    The finite-difference time-domain (FDTD) method has revolutionized antenna design and electromagnetics engineering. Here's a cutting-edge book that focuses on the performance optimization and engineering applications of FDTD simulation systems. Covering the latest developments in this area, this unique resource offers expert advice on the FDTD method, hardware platforms, and network systems. Moreover, the book offers guidance in distinguishing among the many different electromagnetics software packages on the market today. You will also find a complete chapter dedicated to large multi-scale pro
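    The core update the FDTD method is built on can be illustrated with a minimal 1D leapfrog loop in normalized units; grid size, source position and waveform below are arbitrary illustrative choices, not taken from the book:

    ```python
    import math

    def fdtd_1d(steps=200, n=100, source=20):
        """Minimal 1D FDTD (Yee) update loop in normalized units.

        Leapfrog update of E and H at Courant number 1 (the 1D "magic
        time step"); a sinusoidal soft source is injected at one cell.
        The grid boundaries act as perfect electric conductors.
        """
        ez = [0.0] * n
        hy = [0.0] * n
        for t in range(steps):
            for k in range(n - 1):          # H update (half step)
                hy[k] += ez[k + 1] - ez[k]
            for k in range(1, n):           # E update (half step)
                ez[k] += hy[k] - hy[k - 1]
            ez[source] += math.sin(0.1 * t)  # soft source
        return ez

    field = fdtd_1d()
    ```

    Real FDTD engines of the kind the book discusses add absorbing boundaries, 3D grids, materials, and parallelization on top of this same leapfrog core.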

  1. Advances in computational methods for Quantum Field Theory calculations

    NARCIS (Netherlands)

    Ruijl, B.J.G.

    2017-01-01

    In this work we describe three methods to improve the performance of Quantum Field Theory calculations. First, we simplify large expressions to speed up numerical integrations. Second, we design Forcer, a program for the reduction of four-loop massless propagator integrals. Third, we extend the R*

  2. A CTSA Agenda to Advance Methods for Comparative Effectiveness Research

    Science.gov (United States)

    Helfand, Mark; Tunis, Sean; Whitlock, Evelyn P.; Pauker, Stephen G.; Basu, Anirban; Chilingerian, Jon; Harrell Jr., Frank E.; Meltzer, David O.; Montori, Victor M.; Shepard, Donald S.; Kent, David M.

    2011-01-01

    Abstract Clinical research needs to be more useful to patients, clinicians, and other decision makers. To meet this need, more research should focus on patient‐centered outcomes, compare viable alternatives, and be responsive to individual patients’ preferences, needs, pathobiology, settings, and values. These features, which make comparative effectiveness research (CER) fundamentally patient‐centered, challenge researchers to adopt or develop methods that improve the timeliness, relevance, and practical application of clinical studies. In this paper, we describe 10 priority areas that address 3 critical needs for research on patient‐centered outcomes (PCOR): (1) developing and testing trustworthy methods to identify and prioritize important questions for research; (2) improving the design, conduct, and analysis of clinical research studies; and (3) linking the process and outcomes of actual practice to priorities for research on patient‐centered outcomes. We argue that the National Institutes of Health, through its clinical and translational research program, should accelerate the development and refinement of methods for CER by linking a program of methods research to the broader portfolio of large, prospective clinical and health system studies it supports. Insights generated by this work should be of enormous value to PCORI and to the broad range of organizations that will be funding and implementing CER. Clin Trans Sci 2011; Volume 4: 188–198 PMID:21707950

  3. Origins, Methods and Advances in Qualitative Meta-Synthesis

    Science.gov (United States)

    Nye, Elizabeth; Melendez-Torres, G. J.; Bonell, Chris

    2016-01-01

    Qualitative research is a broad term encompassing many methods. Critiques of the field of qualitative research argue that while individual studies provide rich descriptions and insights, the absence of connections drawn between studies limits their usefulness. In response, qualitative meta-synthesis serves as a design to interpret and synthesise…

  4. Method of public support evaluation for advanced NPP deployment

    International Nuclear Information System (INIS)

    Zezula, L.; Hermansky, B.

    2005-01-01

    Public support of nuclear power can be fully recovered only if the public receives transparent information from the very beginning of the new power source selection process and is made part of an interactive dialogue. The presented method was developed to facilitate this complex interaction between utilities and the public. Our method of public support evaluation makes it possible to classify designs of new nuclear power plants, taking into consideration the public attitude to continued nuclear power deployment in the Czech Republic as well as the preference for a certain plant design. The method is based on a model with a set of probabilistic input metrics, which permits the offered concepts to be compared with a reference one with a high degree of objectivity. This method is part of a more complex evaluation procedure applicable to the assessment of new designs that uses the computer code ''Potencial'' developed at the NRI Rez plc. The metrics of the established public support criteria are discussed. (author)

  5. New advances in alpha spectrometry by liquid scintillation methods

    International Nuclear Information System (INIS)

    McDowell, W.J.; Case, G.N.

    1979-01-01

    Although the ability to count alpha particles by liquid scintillation methods has long been recognized, limited use has been made of the method because of problems of high background and alpha energy identification. In recent years, new developments in methods of introducing the alpha-emitting nuclide to the scintillator, in detector construction, and in electronics for processing the energy analog and time analog signals from the detector have allowed significant alleviation of the problems of alpha spectrometry by liquid scintillation. Energy resolutions of 200 to 300 keV full peak width at half maximum and rejection of 99% of all beta plus gamma interference are now possible. Alpha liquid scintillation spectrometry is now suitable for a wide range of applications, from the accurate quantitative determination of relatively large amounts of known nuclides in laboratory-generated samples to the detection and identification of very small, subpicocurie amounts of alpha emitters in environmental-type samples. Suitable nuclide separation procedures, sample preparation methods, and instrument configurations are available for a variety of analyses

  6. Advanced methods in evaluation of thermal power systems effectiveness

    International Nuclear Information System (INIS)

    Barnak, N.; Jakubcek, P.; Zadrazil, J.

    1993-01-01

    A universal method for evaluating process irreversibility in thermodynamic systems, based on the exergetic approach, is elaborated in this article. The method uses the basic property of exergy as an extensive state parameter: additivity. The system is divided into hierarchic levels, and the relation between the exergetic characteristics of the system and those of its parts is defined. System structure coefficients are expressed in a common form and analysed. Criteria for technical and economic optimization of the system using these structure coefficients are defined. Common approaches are defined for applying the method to nuclear power plant secondary circuits, and the method is used to analyse the WWER-1000 secondary circuit. For this purpose, individual exergetic characteristics of the secondary circuit and its parts are expressed, and some secondary circuit parameters are optimized. The conclusions state proposals for practical realisation of the results, mainly in the area of computerized evaluation of the technical and economic parameters of a nuclear power plant and the effectiveness of its operation

  7. Advanced hydraulic fracturing methods to create in situ reactive barriers

    International Nuclear Information System (INIS)

    Murdoch, L.

    1997-01-01

    This article describes the use of hydraulic fracturing to increase permeability in geologic formations where in-situ remedial action of contaminant plumes will be performed. Several in-situ treatment strategies are discussed, including the use of hydraulic fracturing to create in situ redox zones for treatment of organics and inorganics. Hydraulic fracturing methods offer a mechanism for the in-situ treatment of gently dipping layers of reactive compounds. Specialized methods using real-time monitoring and a high-energy jet during fracturing allow the form of the fracture to be influenced, such as the creation of asymmetric fractures beneath potential sources (i.e. tanks, pits, buildings) that should not be penetrated by boring. Examples of field applications of this technique, such as creating fractures filled with zero-valent iron to reductively dechlorinate halogenated hydrocarbons and the use of granular activated carbon to adsorb compounds, are discussed

  8. Advanced Numerical and Theoretical Methods for Photonic Crystals and Metamaterials

    Science.gov (United States)

    Felbacq, Didier

    2016-11-01

    This book provides a set of theoretical and numerical tools useful for the study of wave propagation in metamaterials and photonic crystals. While concentrating on electromagnetic waves, most of the material can be used for acoustic (or quantum) waves. For each presented numerical method, numerical code written in MATLAB® is presented. The codes are limited to 2D problems and can be easily translated in Python or Scilab, and used directly with Octave as well.

  9. Advanced methods for scattering amplitudes in gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Peraro, Tiziano

    2014-09-24

    We present new techniques for the evaluation of multi-loop scattering amplitudes and their application to gauge theories, with relevance to the Standard Model phenomenology. We define a mathematical framework for the multi-loop integrand reduction of arbitrary diagrams, and elaborate algebraic approaches, such as the Laurent expansion method, implemented in the software Ninja, and the multivariate polynomial division technique by means of Groebner bases.

  10. Advanced methods for scattering amplitudes in gauge theories

    International Nuclear Information System (INIS)

    Peraro, Tiziano

    2014-01-01

    We present new techniques for the evaluation of multi-loop scattering amplitudes and their application to gauge theories, with relevance to the Standard Model phenomenology. We define a mathematical framework for the multi-loop integrand reduction of arbitrary diagrams, and elaborate algebraic approaches, such as the Laurent expansion method, implemented in the software Ninja, and the multivariate polynomial division technique by means of Groebner bases.

  11. Advanced scientific computational methods and their applications of nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This is the first issue, giving an overview and an introduction to continuum simulation methods. The finite element method, as one of their applications, is also reviewed. (T. Tanaka)

  12. New trends and advanced methods in interdisciplinary mathematical sciences

    CERN Document Server

    2017-01-01

    The latest of five multidisciplinary volumes, this book spans the STEAM-H (Science, Technology, Engineering, Agriculture, Mathematics, and Health) disciplines with the intent to generate meaningful interdisciplinary interaction and student interest. Emphasis is placed on important methods and applications within and beyond each field. Topics include geometric triple systems, image segmentation, pattern recognition in medicine, pricing barrier options, p-adic numbers distribution in geophysics data pattern, adelic physics, and evolutionary game theory. Contributions were by invitation only and peer-reviewed. Each chapter is reasonably self-contained and pedagogically presented for a multidisciplinary readership.

  13. Analysis advanced methods of data bases of industrial experience return

    International Nuclear Information System (INIS)

    Lannoy, A.; Procaccia, H.

    1994-05-01

    This is a presentation, through different conceptions of industrial experience feedback data bases, of the principal methods for treating and analysing the collected data, ranging from frequency statistics and factorial analysis to Bayesian statistical decision theory, which is a real decision-support tool for managers, designers and operators. Examples from various fields are given (OREDA: Offshore REliability DAta bank for marine drilling platforms; CEDB: Component Event Data Bank for the European electric power industry; RDF 93: reliability of electronic components of ''France Telecom''; EVT: failure EVenTs data bank of EDF's French nuclear power plants). (A.B.). refs., figs., tabs
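    The Bayesian statistical decision theory mentioned above is often applied to failure data through conjugate updating; a minimal sketch, with an entirely hypothetical prior and observation, is:

    ```python
    def gamma_poisson_update(alpha, beta, failures, hours):
        """Conjugate Bayesian update for a component failure rate.

        Prior: Gamma(alpha, beta) on the rate (per hour). Observing
        `failures` events over `hours` of operation gives the posterior
        Gamma(alpha + failures, beta + hours). Returns the posterior mean.
        """
        a_post = alpha + failures
        b_post = beta + hours
        return a_post / b_post

    # Hypothetical vague prior (mean 1e-4 /h) updated with 2 failures
    # observed over 50,000 operating hours.
    rate = gamma_poisson_update(alpha=1.0, beta=10000.0,
                                failures=2, hours=50000.0)
    # posterior mean = 3 / 60000 = 5e-05 per hour
    ```

    This is the standard textbook conjugate model, not a procedure taken from the cited data banks, which combine such updates with richer reliability models.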

  14. Advanced Control Methods for Optimization of Arc Welding

    DEFF Research Database (Denmark)

    Thomsen, J. S.

    Gas Metal Arc Welding (GMAW) is a process used for joining pieces of metal. The GMAW process is probably the most successful and widely used welding method in industry today. A key issue in welding is the quality of the welds produced. The quality of a weld is influenced by several factors in the overall welding process; one of these factors is the ability of the welding machine to control the process. The internal control algorithms in GMAW machines are the topic of this PhD project. Basically, the internal control includes an algorithm which is able to keep the electrode at a given distance...

  15. Recent Advances in Conotoxin Classification by Using Machine Learning Methods.

    Science.gov (United States)

    Dao, Fu-Ying; Yang, Hui; Su, Zhen-Dong; Yang, Wuritu; Wu, Yun; Hui, Ding; Chen, Wei; Tang, Hua; Lin, Hao

    2017-06-25

    Conotoxins are disulfide-rich small peptides, which are invaluable peptides that target ion channel and neuronal receptors. Conotoxins have been demonstrated as potent pharmaceuticals in the treatment of a series of diseases, such as Alzheimer's disease, Parkinson's disease, and epilepsy. In addition, conotoxins are also ideal molecular templates for the development of new drug lead compounds and play important roles in neurobiological research as well. Thus, the accurate identification of conotoxin types will provide key clues for the biological research and clinical medicine. Generally, conotoxin types are confirmed when their sequence, structure, and function are experimentally validated. However, it is time-consuming and costly to acquire the structure and function information by using biochemical experiments. Therefore, it is important to develop computational tools for efficiently and effectively recognizing conotoxin types based on sequence information. In this work, we reviewed the current progress in computational identification of conotoxins in the following aspects: (i) construction of benchmark dataset; (ii) strategies for extracting sequence features; (iii) feature selection techniques; (iv) machine learning methods for classifying conotoxins; (v) the results obtained by these methods and the published tools; and (vi) future perspectives on conotoxin classification. The paper provides the basis for in-depth study of conotoxins and drug therapy research.
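    As an illustration of steps (ii) and (iv) from the review above, the sketch below pairs the simplest sequence feature, amino acid composition, with a nearest-centroid classifier; the sequences and class names are invented for illustration, not real conotoxins:

    ```python
    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def aac_features(seq):
        """Amino acid composition: fraction of each residue type in seq."""
        seq = seq.upper()
        return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

    def nearest_centroid(train, query):
        """Assign `query` to the class with the closest mean feature vector."""
        best, best_d = None, float("inf")
        for label, seqs in train.items():
            feats = [aac_features(s) for s in seqs]
            centroid = [sum(col) / len(feats) for col in zip(*feats)]
            d = sum((q - c) ** 2
                    for q, c in zip(aac_features(query), centroid))
            if d < best_d:
                best, best_d = d, label
        return best

    # Toy training sets: an alanine-rich class and a tryptophan-rich class.
    train = {"A-like": ["CCAACC", "CCAAAC"],
             "M-like": ["CCWWCC", "CWWWCC"]}
    label = nearest_centroid(train, "CCAACA")  # -> "A-like"
    ```

    The reviewed tools use richer features (e.g. pseudo amino acid composition) and stronger classifiers such as SVMs, but the pipeline shape is the same: extract features, then classify.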

  16. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-01

    research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing

  17. Advanced Materials Test Methods for Improved Life Prediction of Turbine Engine Components

    National Research Council Canada - National Science Library

    Stubbs, Jack

    2000-01-01

    Phase I final report developed under SBIR contract for Topic # AF00-149, "Durability of Turbine Engine Materials/Advanced Material Test Methods for Improved Use Prediction of Turbine Engine Components...

  18. Advances in Modal Analysis Using a Robust and Multiscale Method

    Science.gov (United States)

    Picard, Cécile; Frisson, Christian; Faure, François; Drettakis, George; Kry, Paul G.

    2010-12-01

    This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.
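    The modal model underlying this approach reduces, in the simplest case, to an eigenproblem on the assembled stiffness and mass matrices. The sketch below performs modal analysis of a 1D fixed-fixed mass-spring chain rather than the paper's tuned hexahedral finite elements; sizes and constants are arbitrary:

    ```python
    import numpy as np

    def modal_frequencies(n=4, k=1.0, m=1.0):
        """Angular modal frequencies of a fixed-fixed mass-spring chain.

        Builds the tridiagonal stiffness matrix K for n equal masses m
        joined by springs of stiffness k, solves K x = w^2 M x with
        M = m*I, and returns the sorted angular frequencies w.
        """
        K = np.zeros((n, n))
        for i in range(n):
            K[i, i] = 2.0 * k
            if i > 0:
                K[i, i - 1] = -k
            if i < n - 1:
                K[i, i + 1] = -k
        eigvals = np.linalg.eigvalsh(K / m)
        return np.sqrt(np.sort(eigvals))

    freqs = modal_frequencies()
    ```

    In the paper, the same eigen-decomposition is applied to voxel-derived hexahedral element matrices, and the resulting mode shapes and frequencies drive the sound synthesis.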

  19. Advanced methods and algorithm for high precision astronomical imaging

    International Nuclear Information System (INIS)

    Ngole-Mboula, Fred-Maurice

    2016-01-01

    One of the biggest challenges of modern cosmology is to gain a more precise knowledge of the nature of dark energy and dark matter. Fortunately, dark matter can be traced directly through its gravitational effect on galaxy shapes. The European Space Agency's Euclid mission will provide data precisely for such a purpose. A critical step in analyzing these data will be to accurately model the instrument Point Spread Function (PSF), which is the focus of this thesis. We developed non-parametric methods to reliably estimate the PSFs across an instrument's field of view, based on unresolved star images and accounting for noise, undersampling and the PSFs' spatial variability. At the core of these contributions are modern mathematical tools and concepts such as sparsity. An important extension of this work will be to account for the PSFs' wavelength dependency. (author) [fr]
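    Sparsity-based estimators of the kind mentioned here typically rely on the soft-thresholding (l1 proximal) operator applied to transform coefficients; a minimal sketch with made-up coefficients and threshold:

    ```python
    def soft_threshold(x, lam):
        """Proximal operator of the l1 norm: shrink each coefficient
        toward zero by lam, zeroing those with magnitude below lam.
        This is the basic building block of sparse recovery iterations.
        """
        return [max(abs(v) - lam, 0.0) * (1 if v >= 0 else -1) for v in x]

    coeffs = [0.9, -0.2, 0.05, -1.4]
    sparse = soft_threshold(coeffs, 0.3)
    ```

    In a full PSF estimator this shrinkage is applied inside an iterative scheme alternating with a data-fidelity step; this sketch shows only the sparsity-promoting step.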

  20. Advances in Modal Analysis Using a Robust and Multiscale Method

    Directory of Open Access Journals (Sweden)

    Frisson Christian

    2010-01-01

    This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.

  1. Comparative Assessment of Advanced Gas Hydrate Production Methods

    Energy Technology Data Exchange (ETDEWEB)

    M. D. White; B. P. McGrail; S. K. Wurstner

    2009-06-30

    Displacing natural gas and petroleum with carbon dioxide is a proven technology for producing conventional geologic hydrocarbon reservoirs, and producing additional yields from abandoned or partially produced petroleum reservoirs. Extending this concept to natural gas hydrate production offers the potential to enhance gas hydrate recovery with concomitant permanent geologic sequestration. Numerical simulation was used to assess a suite of carbon dioxide injection techniques for producing gas hydrates from a variety of geologic deposit types. Secondary hydrate formation was found to inhibit contact of the injected CO{sub 2} regardless of injectate phase state, thus diminishing the exchange rate due to pore clogging and hydrate zone bypass of the injected fluids. Additional work is needed to develop methods of artificially introducing high-permeability pathways in gas hydrate zones if injection of CO{sub 2} in either gas, liquid, or micro-emulsion form is to be more effective in enhancing gas hydrate production rates.

  2. NATO Advanced Study Institute on Evolving Methods for Macromolecular Crystallography

    CERN Document Server

    Read, Randy J

    2007-01-01

    X-ray crystallography is the pre-eminent technique for visualizing the structures of macromolecules at atomic resolution. These structures are central to understanding the detailed mechanisms of biological processes, and to discovering novel therapeutics using a structure-based approach. As yet, structures are known for only a small fraction of the proteins encoded by human and pathogenic genomes. To counter the myriad modern threats of disease, there is an urgent need to determine the structures of the thousands of proteins whose structure and function remain unknown. This volume draws on the expertise of leaders in the field of macromolecular crystallography to illuminate the dramatic developments that are accelerating progress in structural biology. Their contributions span the range of techniques from crystallization through data collection, structure solution and analysis, and show how modern high-throughput methods are contributing to a deeper understanding of medical problems.

  3. Advanced hydraulic fracturing methods to create in situ reactive barriers

    International Nuclear Information System (INIS)

    Murdoch, L.; Siegrist, B.; Vesper, S.

    1997-01-01

    Many contaminated areas consist of a source area and a plume. In the source area, the contaminant moves vertically downward from a release point through the vadose zone to an underlying saturated region. Where contaminants are organic liquids, NAPL may accumulate on the water table, or it may continue to migrate downward through the saturated region. Early developments of permeable barrier technology have focused on intercepting horizontally moving plumes with vertical structures, such as trenches, filled with reactive material capable of immobilizing or degrading dissolved contaminants. This focus resulted in part from a need to economically treat the potentially large volumes of contaminated water in a plume, and in part from the availability of construction technology to create the vertical structures that could house reactive compounds. Contaminant source areas, however, have thus far remained largely excluded from the application of permeable barrier technology. One reason for this is the lack of conventional construction methods for creating suitable horizontal structures that would place reactive materials in the path of downward-moving contaminants. Methods of hydraulic fracturing have been widely used to create flat-lying to gently dipping layers of granular material in unconsolidated sediments. Most applications thus far have involved filling fractures with coarse-grained sand to create permeable layers that increase the discharge of wells recovering contaminated water or vapor. However, it is possible to fill fractures with other compounds that alter the chemical composition of the subsurface. One early application involved development and field testing of micro-encapsulated sodium percarbonate, a solid compound that releases oxygen and can create aerobic conditions suitable for biodegradation in the subsurface for several months

  4. Steam leak detection in advanced reactors via acoustic methods

    International Nuclear Information System (INIS)

    Singh, Raj Kumar; Rao, A. Rama

    2011-01-01

    Highlights: → A steam leak detection system is developed to detect any leak inside the reactor vault. → The technique uses the leak noise frequency spectrum for leak detection. → Testing of the system and a method to locate the leak are also developed and discussed in the present paper. - Abstract: Prediction of LOCA (loss of coolant accident) plays a very important role in the safety of a nuclear reactor. The coolant is responsible for heat transfer from the fuel bundles, and loss of coolant is an accident situation that requires immediate shutdown of the reactor. A fall in system pressure during a LOCA is the trip parameter used to initiate automatic reactor shutdown. However, in a primary heat transport system operating in the two-phase regime, detection of a small-break LOCA is not simple: because leak rates are very slow, the fall of pressure is also significantly slow. From the reactor safety point of view, it is therefore extremely important to find a reliable and effective alternative for detecting the slow pressure drop in the case of a small-break LOCA. One such alternative is the acoustic signal caused by small breaks. In boiling water reactors whose primary heat transport is driven by natural circulation, small-break LOCA detection is important. For prompt action following a small-break LOCA, a steam leak detection system was developed to detect any leak inside the reactor vault. The detection technique is reliable and plays a very important role in ensuring the safety of the reactor. The methodology developed for steam leak detection is discussed in the present paper, along with a method, based on analysis of the acoustic signal, to locate the leak.
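
The abstract describes detection via the leak noise frequency spectrum but gives no algorithmic detail. A minimal sketch of spectrum-based detection is shown below; the band limits, threshold, sampling rate, and synthetic signals are illustrative assumptions, not values from the paper.

```python
import numpy as np

def band_energy_fraction(signal, fs, f_lo, f_hi):
    """Fraction of the signal's spectral energy inside [f_lo, f_hi] Hz."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= f_lo) & (freqs <= f_hi)
    return power[in_band].sum() / power.sum()

def leak_detected(signal, fs, f_lo=20e3, f_hi=60e3, threshold=0.6):
    """Flag a leak when the assumed leak band dominates the spectrum."""
    return band_energy_fraction(signal, fs, f_lo, f_hi) > threshold

fs = 200_000                                        # 200 kHz sampling (assumed)
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(0)
background = 0.1 * rng.standard_normal(t.size)      # broadband vault background
leak = background + np.sin(2 * np.pi * 40e3 * t)    # hypothetical leak tone near 40 kHz

print(leak_detected(background, fs), leak_detected(leak, fs))  # False True
```

A deployed system would compare measured spectra against baseline recordings rather than a fixed threshold, but the band-energy comparison above is the core idea.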

  5. Advanced communication methods developed for nuclear data communication applications

    International Nuclear Information System (INIS)

    Tiwari, Akash; Tiwari, Railesha; Tiwari, S.S.; Panday, Lokesh; Suri, Nitin; Takle, Tarun Rao; Jain, Sanjeev; Gupta, Rishi; Sharma, Dipeeka; Takle, Rahul Rao; Gautam, Rajeev; Bhargava, Vishal; Arora, Himanshu; Agarwal, Ankur; Rupesh; Chawla, Mohit; Sethi, Amardeep Singh; Gupta, Mukesh; Gupta, Ankit; Verma, Neha; Sood, Nitin; Singh, Sunil; Agarwal, Chandresh

    2004-01-01

    We conducted various experiments and tested data communication methods that may be useful for various applications in the nuclear industry. We explored the following areas: 1. Scientific data communication among scientists within the laboratory and inter-laboratory data exchange. 2. Data from remote and wired sensors. 3. Data from multiple sensors within a small zone. 4. Data from single or multiple sensors at distances above 100 m and below 10 km. No single data communication method was found to be the best solution for nuclear applications, and multiple modes of communication were found to be more advantageous than any single mode. A network of computers in the control room and between laboratories, connected with optical fiber or an isolated Ethernet coaxial LAN, was found to be optimum. Collecting information from multiple analog process sensors in smaller zones, such as the reactor building and laboratories, over an I2C LAN and short-range wireless LAN was found to be advantageous. Within the laboratory, an I2C sensor data network was found to be cost effective, while a wireless LAN was comparatively expensive. Within a room, an infrared optical LAN and an FSK wireless LAN were found to be highly useful in freeing the sensors from wires. Direct sensor interfaces on FSK wireless links were found to be fast, accurate, and cost effective for data communication over large distances; such links are the only way to communicate with sea buoy and balloon hardware. The 1-wire communication network of Dallas Semiconductor USA was used for weather station data communication. Computer-to-computer communication using optical LAN links was tried; temperature, pressure, humidity, ionizing radiation, generator RPM and voltage, and various other analog signals were also transported over FSK optical and wireless links. Multiple sensors needed a dedicated data acquisition system and a wireless LAN for data telemetry. (author)

  6. Statistical methods of discrimination and classification advances in theory and applications

    CERN Document Server

    Choi, Sung C

    1986-01-01

    Statistical Methods of Discrimination and Classification: Advances in Theory and Applications is a collection of papers that tackles the multivariate problems of discriminating and classifying subjects into exclusive populations. The book presents 13 papers that cover advancements in the statistical procedures of discrimination and classification. The studies in the text primarily focus on various methods of discriminating and classifying variables, such as multiple discriminant analysis in the presence of mixed continuous and categorical data; choice of the smoothing parameter and efficiency o

  7. MAESTRO: Methods and Advanced Equipment for Simulation and Treatment in Radio-Oncology

    Science.gov (United States)

    Barthe, Jean; Hugon, Régis; Nicolai, Jean Philippe

    2007-12-01

    The integrated project MAESTRO (Methods and Advanced Equipment for Simulation and Treatment in Radio-Oncology), under contract with the European Commission in life sciences FP6 (LSHC-CT-2004-503564), concerns innovative research to develop and validate, in clinical conditions, the advanced methods and equipment needed in cancer treatment for new modalities in highly conformal external radiotherapy using electron, photon and proton beams of high energy.

  8. Advanced cluster methods for correlated-electron systems

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Andre

    2015-04-27

    In this thesis, quantum cluster methods are used to calculate electronic properties of correlated-electron systems. A special focus lies in the determination of the ground state properties of a 3/4 filled triangular lattice within the one-band Hubbard model. At this filling, the electronic density of states exhibits a so-called van Hove singularity and the Fermi surface becomes perfectly nested, causing an instability towards a variety of spin-density-wave (SDW) and superconducting states. While chiral d+id-wave superconductivity has been proposed as the ground state in the weak coupling limit, the situation towards strong interactions is unclear. Additionally, quantum cluster methods are used here to investigate the interplay of Coulomb interactions and symmetry-breaking mechanisms within the nematic phase of iron-pnictide superconductors. The transition from a tetragonal to an orthorhombic phase is accompanied by a significant change in electronic properties, while long-range magnetic order is not yet established. The driving force of this transition may not only be phonons but also magnetic or orbital fluctuations. The signatures of these scenarios are studied with quantum cluster methods to identify the most important effects. Here, cluster perturbation theory (CPT) and its variational extension, the variational cluster approach (VCA), are used to treat the respective systems on a level beyond mean-field theory. Short-range correlations are incorporated numerically exactly by exact diagonalization (ED). In the VCA, long-range interactions are included by variational optimization of a fictitious symmetry-breaking field based on a self-energy functional approach. Due to limitations of ED, cluster sizes are limited to a small number of degrees of freedom. For the 3/4 filled triangular lattice, the VCA is performed for different cluster symmetries. A strong symmetry dependence and finite-size effects make a comparison of the results from different clusters difficult.
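
The exact diagonalization underlying CPT/VCA can be illustrated on the smallest possible cluster. The sketch below diagonalizes the textbook half-filled two-site Hubbard model (not the 3/4-filled triangular lattice of the thesis) and checks the numerical ground-state energy against the known analytic singlet energy; sign conventions are chosen so that the spectrum matches the standard result.

```python
import numpy as np

def hubbard_dimer_ground_energy(t, U):
    """Exact diagonalization of the half-filled two-site Hubbard model in the
    Sz = 0 sector; basis: |up dn, 0>, |0, up dn>, |up, dn>, |dn, up>."""
    H = np.array([[U,   0.0, -t,  -t],
                  [0.0, U,   -t,  -t],
                  [-t,  -t,  0.0, 0.0],
                  [-t,  -t,  0.0, 0.0]])
    return np.linalg.eigvalsh(H).min()

t, U = 1.0, 4.0
e_num = hubbard_dimer_ground_energy(t, U)
e_exact = (U - np.sqrt(U**2 + 16 * t**2)) / 2   # textbook singlet ground energy
print(round(e_num, 6), round(e_exact, 6))       # -0.828427 -0.828427
```

Realistic cluster solvers build the same kind of sparse many-body Hamiltonian for 4-12 site clusters and use Lanczos instead of dense diagonalization.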

  9. An evolutionary method for synthesizing technological planning and architectural advance

    Science.gov (United States)

    Cole, Bjorn Forstrom

    In the development of systems with ever-increasing performance and/or decreasing drawbacks, there inevitably comes a point where more progress is available by shifting to a new set of principles of use. This shift marks a change in architecture, such as between the piston-driven propeller and the jet engine. The shift also often involves an abandonment of previous competencies that have been developed with great effort, so foreknowledge of these shifts can be advantageous. A further motivation for this work is the Micro Autonomous Systems and Technology (MAST) project, which aims to develop very small autonomous systems. The early chapters of the thesis provide context and a philosophical background to the studies and research that were conducted. In particular, the idea that technology progresses in a fundamentally gradual way is developed and supported with previous historical research. The import of this is that the future can to some degree be predicted by the past, provided that the appropriate technological antecedents are accounted for in developing the projection. The third chapter of the thesis compiles a series of observations and philosophical considerations into a series of research questions. Some research questions are then answered with further thought, observation, and reading, leading to conjectures on the problem. The remainder require some form of experimentation, and so are used to formulate hypotheses. Falsifiability conditions are then generated from those hypotheses and used to guide the development of experiments to be performed, in this case on a computer, upon various conditions of use of a genetic algorithm. The fourth chapter of the thesis walks through the formulation of a method to attack the problem of strategically choosing an architecture. This method is designed to find the optimum architecture under multiple conditions, which is required for the ability to play the "what if" games typically undertaken in strategic situations. The chapter walks through

  10. Advanced Monte Carlo methods for thermal radiation transport

    Science.gov (United States)

    Wollaber, Allan B.

    During the past 35 years, the Implicit Monte Carlo (IMC) method proposed by Fleck and Cummings has been the standard Monte Carlo approach to solving the thermal radiative transfer (TRT) equations. However, the IMC equations are known to have accuracy limitations that can produce unphysical solutions. In this thesis, we explicitly provide the IMC equations with a Monte Carlo interpretation by including particle weight as one of their arguments. We also develop and test a stability theory for the 1-D, gray IMC equations applied to a nonlinear problem. We demonstrate that the worst case occurs for 0-D problems, and we extend the results to a stability algorithm that may be used for general linearizations of the TRT equations. We derive gray Quasidiffusion equations that may be deterministically solved in conjunction with IMC to obtain an inexpensive, accurate estimate of the temperature at the end of the time step. We then define an average temperature T* to evaluate the temperature-dependent problem data in IMC, and we demonstrate that using T* is more accurate than using the (traditional) beginning-of-time-step temperature. We also propose an accuracy enhancement to the IMC equations: the use of a time-dependent "Fleck factor". This Fleck factor can be considered an automatic tuning of the traditionally defined user parameter alpha, which generally provides more accurate solutions at an increased cost relative to traditional IMC. We also introduce a global weight window that is proportional to the forward scalar intensity calculated by the Quasidiffusion method. This weight window improves the efficiency of the IMC calculation while conserving energy. All of the proposed enhancements are tested in 1-D gray and frequency-dependent problems. These enhancements do not unconditionally eliminate the unphysical behavior that can be seen in IMC calculations. However, for fixed spatial and temporal grids, they suppress it and clearly work to make the solution more
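
The standard gray IMC Fleck factor referenced in the abstract has the closed form f = 1 / (1 + alpha * beta * c * sigma * dt). A minimal sketch with made-up problem data is shown below; the time-dependent variant proposed in the thesis is not reproduced here.

```python
def fleck_factor(alpha, beta, sigma_p, c, dt):
    """Standard gray IMC Fleck factor: f = 1 / (1 + alpha*beta*c*sigma_p*dt).
    f -> 1 as dt -> 0 (explicit limit) and f -> 0 as dt grows (more implicit)."""
    return 1.0 / (1.0 + alpha * beta * c * sigma_p * dt)

# illustrative (made-up) problem data, not values from the thesis
c = 3.0e10       # speed of light [cm/s]
sigma_p = 1.0    # Planck-mean opacity [1/cm]
beta = 4.0       # 4*a*T^3 / (rho*c_v), dimensionless
for dt in (1e-12, 1e-10, 1e-8):
    print(dt, fleck_factor(1.0, beta, sigma_p, c, dt))
```

The factor weights how much emitted energy is treated as effective scattering within a time step; the thesis's contribution is to let this weighting vary in time rather than fixing alpha by hand.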

  11. Advanced methods in NDE using machine learning approaches

    Science.gov (United States)

    Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank

    2018-04-01

    Machine learning (ML) methods and algorithms have recently been applied with great success in quality control and predictive maintenance. The ML goal of building new algorithms, or leveraging existing ones, that learn from training data and give accurate predictions or find patterns, particularly in new and unseen but similar data, fits perfectly with Non-Destructive Evaluation (NDE). The advantages of ML in NDE are obvious in such tasks as pattern recognition in acoustic signals or automated processing of images from X-ray, ultrasonic or optical methods. Fraunhofer IKTS is using machine learning algorithms in acoustic signal analysis, and the approach has been applied to a variety of tasks in quality assessment. The principal approach is based on acoustic signal processing with a primary and a secondary analysis step, followed by a cognitive system to create model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis are used to simplify data structures. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained. Afterwards, the sensor signals from unknown samples can be recognized and classified automatically by the previously trained algorithms. Recently the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB). The algorithms are still trained on an ordinary PC; however, the trained algorithms run on the digital signal processor and the FPGA chip. The identical approach will be used for pattern recognition in image analysis of OCT pictures. Some key requirements have to be fulfilled, however: a sufficiently large set of training data, a high signal-to-noise ratio, and an optimized and exact fixation of components are required. The automated testing can then be done by the machine. By integrating the test data of many components along the value chain, further optimization including lifetime and durability
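
The pipeline described (spectral feature extraction, unsupervised simplification via PCA, then supervised classification) can be sketched end to end on synthetic data. Everything below is illustrative: the two "classes" are artificial tones, the band-energy features and nearest-centroid classifier stand in for IKTS's unspecified primary/secondary analysis and cognitive system.

```python
import numpy as np

rng = np.random.default_rng(1)

def spectral_features(signal):
    """Primary analysis (assumed): log band energies of the magnitude spectrum."""
    spec = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spec, 8)
    return np.log([(b ** 2).sum() + 1e-12 for b in bands])

def sample(freq):
    """Synthetic acoustic signature: a tone at `freq` plus noise."""
    t = np.linspace(0, 1, 1024, endpoint=False)
    return np.sin(2 * np.pi * freq * t) + 0.2 * rng.standard_normal(t.size)

# two hypothetical part classes: 'good' (low tone) vs 'flawed' (high tone)
X = np.array([spectral_features(sample(f)) for f in [50] * 20 + [300] * 20])
y = np.array([0] * 20 + [1] * 20)

# unsupervised step: PCA via SVD, keeping two components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# supervised step: nearest class centroid in PCA space
centroids = np.array([Z[y == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print("training accuracy:", (pred == y).mean())
```

A real deployment would of course evaluate on held-out samples and use richer features, but the structure (features, then PCA, then a trained classifier) mirrors the abstract's description.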

  12. Advanced fabrication method for the preparation of MOF thin films: Liquid-phase epitaxy approach meets spin coating method.

    KAUST Repository

    Chernikova, Valeriya; Shekhah, Osama; Eddaoudi, Mohamed

    2016-01-01

    Here we report a new and advanced method for the fabrication of highly oriented polycrystalline metal-organic framework (MOF) thin films. Building on the attractive features of the liquid-phase epitaxy (LPE) approach, a facile spin coating method

  13. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting each realm of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the fourth issue, giving an overview of scientific computational methods with an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed based on processes such as the binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)
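
Of the simulation methods listed, kinetic Monte Carlo is compact enough to sketch. Below is a minimal residence-time (BKL-style) kMC for a single vacancy hopping on a 1-D lattice with Arrhenius rates; the migration energy, attempt frequency, and temperatures are illustrative values, not data from the course.

```python
import math
import random

def kmc_vacancy_walk(n_steps, T, E_m=0.8, nu0=1e13, kB=8.617e-5):
    """Residence-time kMC: one vacancy hops left/right with Arrhenius rates.
    E_m [eV] migration barrier, nu0 [1/s] attempt frequency, kB [eV/K]."""
    rate = nu0 * math.exp(-E_m / (kB * T))    # per-direction hop rate [1/s]
    pos, t = 0, 0.0
    rng = random.Random(0)                    # fixed seed for reproducibility
    for _ in range(n_steps):
        total = 2 * rate                      # two possible events: left, right
        t += -math.log(rng.random()) / total  # exponentially distributed waiting time
        pos += 1 if rng.random() < 0.5 else -1
    return pos, t

pos, t = kmc_vacancy_walk(10_000, T=600.0)
print("final site:", pos, "elapsed time [s]:", t)
```

The key feature of kMC, captured here, is that simulated time advances by stochastic waiting times set by the physical rates, so higher temperatures cover the same number of events in much less simulated time.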

  14. Development of a HRA method based on Human Factor Issues for advanced NPP

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Seong, Poong Hyun; Ha, Jun Su; Park, Jae Hyuk; Kim, Ja Kyung

    2010-01-01

    The design of instrumentation and control (I and C) systems for various plant systems, including nuclear power plants (NPPs), is rapidly moving toward fully digital I and C, and modern computer techniques have been gradually introduced into the design of advanced main control rooms (MCRs). In an advanced MCR, computer-based Human-System Interfaces (HSIs) such as CRT-based displays, large display panels (LDPs), advanced information systems, soft controls and computerized procedure systems (CPSs) are applied. Human operators in an advanced MCR still play an important role. However, various research and experience from NPPs with an advanced MCR show that the characteristics of human operators' tasks change due to the use of unfamiliar HSIs. This has implications for the PSFs (Performance Shaping Factors) in HRA (Human Reliability Analysis). A PSF in HRA is an aspect of the human's individual characteristics, environment, organization, or task that specifically decrements or improves human performance, resulting in an increased or decreased likelihood of human error. These PSFs have been suggested in various ways depending on the HRA method used. In most HRA methods, however, there is a lack of consistency in the derivation of the PSFs and a lack of consideration of how the changes implemented in advanced MCRs affect the operators' tasks. In this study, a framework for the derivation and evaluation of the PSFs to be used in HRA for advanced NPPs is suggested

  15. Advanced Extraction Methods for Actinide/Lanthanide Separations

    International Nuclear Information System (INIS)

    Scott, M.J.

    2005-01-01

    The separation of An(III) ions from chemically similar Ln(III) ions is perhaps one of the most difficult problems encountered during the processing of nuclear waste. In the 3+ oxidation state, the metal ions have an identical charge and roughly the same ionic radius. They differ strictly in the relative energies of their f- and d-orbitals, and to separate these metal ions, ligands will need to be developed that take advantage of this small but important distinction. The extraction of uranium and plutonium from nitric acid solution can be performed quantitatively by extraction with TBP (tributyl phosphate). Commercially, this process has found wide use in the PUREX (plutonium uranium extraction) reprocessing method. The TRUEX (transuranium extraction) process is further used to coextract the trivalent lanthanide and actinide ions from the HLLW generated during PUREX extraction. This method uses CMPO [(N,N-diisobutylcarbamoylmethyl)octylphenylphosphine oxide] intermixed with TBP as a synergistic agent. However, the final separation of trivalent actinides from trivalent lanthanides still remains a challenging task. In TRUEX nitric acid solution, the Am(III) ion is coordinated by three CMPO molecules and three nitrate anions. Taking inspiration from these data and previous work with calix[4]arene systems, researchers on this project have developed a C3-symmetric tris-CMPO ligand system using a triphenoxymethane platform as a base. The triphenoxymethane ligand systems have many advantages for the preparation of complex ligand systems. The compounds are very easy to prepare, and their steric and solubility properties can be tuned through an extreme range by the inclusion of different alkoxy and alkyl groups such as methoxy, ethoxy, t-butoxy, methyl, octyl, or t-pentyl at the ortho- and para-positions of the aryl rings. The triphenoxymethane ligand system shows promise as an improved extractant for both tetravalent and trivalent actinide recoveries from

  16. Advanced Extraction Methods for Actinide/Lanthanide Separations

    Energy Technology Data Exchange (ETDEWEB)

    Scott, M.J.

    2005-12-01

    The separation of An(III) ions from chemically similar Ln(III) ions is perhaps one of the most difficult problems encountered during the processing of nuclear waste. In the 3+ oxidation state, the metal ions have an identical charge and roughly the same ionic radius. They differ strictly in the relative energies of their f- and d-orbitals, and to separate these metal ions, ligands will need to be developed that take advantage of this small but important distinction. The extraction of uranium and plutonium from nitric acid solution can be performed quantitatively by extraction with TBP (tributyl phosphate). Commercially, this process has found wide use in the PUREX (plutonium uranium extraction) reprocessing method. The TRUEX (transuranium extraction) process is further used to coextract the trivalent lanthanide and actinide ions from the HLLW generated during PUREX extraction. This method uses CMPO [(N,N-diisobutylcarbamoylmethyl)octylphenylphosphine oxide] intermixed with TBP as a synergistic agent. However, the final separation of trivalent actinides from trivalent lanthanides still remains a challenging task. In TRUEX nitric acid solution, the Am(III) ion is coordinated by three CMPO molecules and three nitrate anions. Taking inspiration from these data and previous work with calix[4]arene systems, researchers on this project have developed a C3-symmetric tris-CMPO ligand system using a triphenoxymethane platform as a base. The triphenoxymethane ligand systems have many advantages for the preparation of complex ligand systems. The compounds are very easy to prepare, and their steric and solubility properties can be tuned through an extreme range by the inclusion of different alkoxy and alkyl groups such as methoxy, ethoxy, t-butoxy, methyl, octyl, or t-pentyl at the ortho- and para-positions of the aryl rings. The triphenoxymethane ligand system shows promise as an improved extractant for both tetravalent and trivalent actinide recoveries from

  17. A modified captive bubble method for determining advancing and receding contact angles

    International Nuclear Information System (INIS)

    Xue, Jian; Shi, Pan; Zhu, Lin; Ding, Jianfu; Chen, Qingmin; Wang, Qingjun

    2014-01-01

    Graphical abstract: - Highlights: • A modified captive bubble method for determining advancing and receding contact angles is proposed. • We have added a pressure chamber with a pressure control system to the original experimental system. • The modified method overcomes the bubble deviation of the traditional captive bubble method. • The modified captive bubble method allows a smaller measurement error. - Abstract: In this work, a modification of the captive bubble method is proposed to test the advancing and receding contact angle. The modification is made by adding a pressure chamber with a pressure control system to the original experimental system, which is equipped with an optical angle meter with a high-speed CCD camera, a temperature control system and a computer. A series of samples with highly hydrophilic, hydrophilic, hydrophobic and superhydrophobic surfaces were prepared. The advancing and receding contact angles of the samples with highly hydrophilic, hydrophilic, and hydrophobic surfaces measured by the new method were comparable to the results obtained by the traditional sessile drop method. It is shown that this method overcomes the limitation of the traditional captive bubble method and allows a smaller measurement error. However, due to the nature of the captive bubble technique, this method is only suitable for testing surfaces with advancing or receding contact angles below 130°
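
The abstract does not state how angles are extracted from the camera images. One common geometric route, shown here as a sketch only, fits the bubble or drop profile to a spherical cap and uses the relation theta = 2*atan(2h/w) between apex height h and contact-line width w; this neglects gravity, so it applies only to small bubbles/drops, and the function names are illustrative.

```python
import math

def cap_contact_angle_deg(h, w):
    """Spherical-cap relation: contact angle (deg) from apex height h and
    contact-line (base) width w: theta = 2 * atan(2h / w)."""
    return math.degrees(2.0 * math.atan2(2.0 * h, w))

def captive_bubble_water_angle_deg(h, w):
    """Water-side contact angle inferred from a captive air bubble's cap geometry:
    the water angle is the supplement of the bubble-side angle."""
    return 180.0 - cap_contact_angle_deg(h, w)

# hemispherical cap: h = w/2 gives a 90 degree angle on either side
print(cap_contact_angle_deg(1.0, 2.0))            # 90.0
print(captive_bubble_water_angle_deg(0.5, 2.0))   # flatter bubble -> water-wet surface
```

In practice commercial goniometer software fits the full Young-Laplace profile rather than a sphere, but the cap formula is the usual first approximation.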

  18. A modified captive bubble method for determining advancing and receding contact angles

    Energy Technology Data Exchange (ETDEWEB)

    Xue, Jian; Shi, Pan; Zhu, Lin [Key Laboratory of High Performance Polymer Materials and Technology (Nanjing University), Ministry of Education, Nanjing 210093 (China); Ding, Jianfu [Security and Disruptive Technologies, National Research Council Canada, 1200 Montreal Road, Ottawa, K1A 0R6, Ontario (Canada); Chen, Qingmin [Key Laboratory of High Performance Polymer Materials and Technology (Nanjing University), Ministry of Education, Nanjing 210093 (China); Wang, Qingjun, E-mail: njuwqj@nju.edu.cn [Key Laboratory of High Performance Polymer Materials and Technology (Nanjing University), Ministry of Education, Nanjing 210093 (China)

    2014-03-01

    Graphical abstract: - Highlights: • A modified captive bubble method for determining advancing and receding contact angles is proposed. • We have added a pressure chamber with a pressure control system to the original experimental system. • The modified method overcomes the bubble deviation of the traditional captive bubble method. • The modified captive bubble method allows a smaller measurement error. - Abstract: In this work, a modification of the captive bubble method is proposed to test the advancing and receding contact angle. The modification is made by adding a pressure chamber with a pressure control system to the original experimental system, which is equipped with an optical angle meter with a high-speed CCD camera, a temperature control system and a computer. A series of samples with highly hydrophilic, hydrophilic, hydrophobic and superhydrophobic surfaces were prepared. The advancing and receding contact angles of the samples with highly hydrophilic, hydrophilic, and hydrophobic surfaces measured by the new method were comparable to the results obtained by the traditional sessile drop method. It is shown that this method overcomes the limitation of the traditional captive bubble method and allows a smaller measurement error. However, due to the nature of the captive bubble technique, this method is only suitable for testing surfaces with advancing or receding contact angles below 130°.

  19. ADVANCING THE STUDY OF VIOLENCE AGAINST WOMEN USING MIXED METHODS: INTEGRATING QUALITATIVE METHODS INTO A QUANTITATIVE RESEARCH PROGRAM

    Science.gov (United States)

    Testa, Maria; Livingston, Jennifer A.; VanZile-Tamsen, Carol

    2011-01-01

    A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women’s sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided. PMID:21307032

  20. Advanced airflow distribution methods for reduction of personal exposure to indoor pollutants

    DEFF Research Database (Denmark)

    Cao, Guangyu; Kosonen, Risto; Melikov, Arsen

    2016-01-01

    The main objective of this study is to identify possible airflow distribution methods to protect occupants from exposure to various indoor pollutants. The increasing exposure of occupants to various indoor pollutants shows that there is an urgent need to develop advanced airflow distribution methods to reduce indoor exposure. This article presents some of the latest developments in advanced airflow distribution methods to reduce indoor exposure in various types of buildings.

  1. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-11

    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

  2. SCHEME (Soft Control Human error Evaluation MEthod) for advanced MCR HRA

    International Nuclear Information System (INIS)

    Jang, Inseok; Jung, Wondea; Seong, Poong Hyun

    2015-01-01

    Many HRA methods have been used in relation to NPP maintenance and operation, including the Technique for Human Error Rate Prediction (THERP), Korean Human Reliability Analysis (K-HRA), Human Error Assessment and Reduction Technique (HEART), A Technique for Human Event Analysis (ATHEANA), Cognitive Reliability and Error Analysis Method (CREAM), and Simplified Plant Analysis Risk Human Reliability Assessment (SPAR-H). Most of these methods were developed for the conventional type of Main Control Room (MCR). They are still used for HRA in advanced MCRs even though the operating environment of advanced MCRs in NPPs has been considerably changed by the adoption of new human-system interfaces such as computer-based soft controls. Among the many features of advanced MCRs, soft controls are particularly important because operating actions in NPP advanced MCRs are performed through them. Consequently, the conventional methods may not sufficiently consider the features of soft control execution human errors. To this end, a new framework of an HRA method for evaluating soft control execution human error is suggested, based on a soft control task analysis and literature reviews of widely accepted human error taxonomies. In this study, the framework of an HRA method for evaluating soft control execution human error in advanced MCRs is developed. First, the factors which an HRA method for advanced MCRs should encompass are derived based on the literature review and the soft control task analysis. Based on the derived factors, an execution HRA framework for advanced MCRs is developed, mainly focusing on the features of soft control. Moreover, since most current HRA databases deal with operation in the conventional type of MCR and are not explicitly designed to deal with digital HSIs, an HRA database is developed under lab-scale simulation
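
For context on how PSF-based quantification works in the methods listed, the SPAR-H-style calculation can be sketched: a nominal human error probability (NHEP) is multiplied by PSF multipliers, with an adjustment factor keeping the result a valid probability when the composite multiplier is large. This is a simplified illustration of the published SPAR-H formula, not the new framework proposed in the abstract; the multiplier values below are made up.

```python
def spar_h_hep(nhep, psf_multipliers):
    """SPAR-H-style human error probability:
    HEP = NHEP * PSFc, adjusted to NHEP*PSFc / (NHEP*(PSFc - 1) + 1)
    when the composite PSFc exceeds 1, so the result stays below 1."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    if composite > 1.0:
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return nhep * composite

# nominal action HEP with three illustrative PSF multipliers
print(spar_h_hep(0.001, [10, 2, 1]))
```

A soft-control-specific method would replace the generic PSF set with factors tied to digital HSI features, which is exactly the gap the abstract identifies.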

  3. 78 FR 16513 - Application of Advances in Nucleic Acid and Protein Based Detection Methods to Multiplex...

    Science.gov (United States)

    2013-03-15

    ... Methods to Multiplex Detection of Transfusion- Transmissible Agents and Blood Cell Antigens in Blood... Transfusion-Transmissible Agents and Blood Cell Antigens in Blood Donations; Public Workshop AGENCY: Food and... technological advances in gene based and protein based pathogen and blood cell antigen detection methods and to...

  4. Iterative Method of Regularization with Application of Advanced Technique for Detection of Contours

    International Nuclear Information System (INIS)

    Niedziela, T.; Stankiewicz, A.

    2000-01-01

    This paper proposes a novel iterative method of regularization with application of an advanced technique for detection of contours. To eliminate noise, the properties of the convolution of functions are utilized. The method can be accomplished in a simple neural cellular network, which creates the possibility of extracting contours with automatic image recognition equipment. (author)
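
The abstract's core idea (convolution as a regularizing step before contour extraction) can be illustrated with a single smoothing-plus-gradient pass. This is a generic sketch, not the authors' iterative cellular-network method; kernel size, sigma, and threshold are assumptions.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel used as the regularizing convolution."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, k):
    """Naive same-size correlation with zero padding (identical to convolution
    for a symmetric kernel such as the Gaussian)."""
    p = k.shape[0] // 2
    padded = np.pad(img, p)
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + k.shape[0], j:j + k.shape[1]] * k).sum()
    return out

def contours(img, thresh=0.2):
    """Smooth (regularize), then threshold the central-difference gradient magnitude."""
    smooth = convolve2d(img, gaussian_kernel())
    gx = np.zeros_like(smooth)
    gy = np.zeros_like(smooth)
    gx[:, 1:-1] = smooth[:, 2:] - smooth[:, :-2]
    gy[1:-1, :] = smooth[2:, :] - smooth[:-2, :]
    return np.hypot(gx, gy) > thresh

img = np.zeros((16, 16))
img[:, 8:] = 1.0          # vertical step edge between columns 7 and 8
edge = contours(img)
```

An iterative scheme would repeat the smoothing with a decreasing regularization strength; one pass already shows why the convolution suppresses noise before the differentiation step, which is ill-posed on raw data.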

  5. System and method to control H2O2 level in advanced oxidation processes

    DEFF Research Database (Denmark)

    2016-01-01

    The present invention relates to a bio-electrochemical system (BES) and a method of in-situ production and removal of H2O2 using such a system. Further, the invention relates to a method for in-situ control of the H2O2 content in an aqueous system of advanced oxidation

  6. Advanced microscopic methods for the detection of adhesion barriers in immunology in medical imaging

    Science.gov (United States)

    Lawrence, Shane

    2017-07-01

    Advanced methods of microscopy, and the advanced analysis techniques stemming from them, have developed greatly in the past few years. The use of single discrete methods has given way to combinations of methods, which means an increase in the data available for processing in the analysis and diagnosis of ailments and diseases that can be viewed by each method. This presentation shows the combination of such methods, gives examples of the data arising from each individual method and from the combined methodology, and suggests how such data can be streamlined to enable conclusions to be drawn about the particular biological and biochemical considerations that arise. In this particular project the subject of the methodology was human lactoferrin (hLf) and the role of the adhesion properties of hLf in overcoming barriers to adhesion, mainly on the perimeter of the cellular unit, and how this affects the process of immunity in any particular case.

  7. Setting health research priorities using the CHNRI method: IV. Key conceptual advances

    Directory of Open Access Journals (Sweden)

    Igor Rudan

    2016-06-01

    The Child Health and Nutrition Research Initiative (CHNRI) started as an initiative of the Global Forum for Health Research in Geneva, Switzerland. Its aim was to develop a method that could assist priority setting in health research investments. The first version of the CHNRI method was published in 2007–2008. The aim of this paper was to summarize the history of the development of the CHNRI method and its key conceptual advances.

  8. FAA/NASA International Symposium on Advanced Structural Integrity Methods for Airframe Durability and Damage Tolerance

    Energy Technology Data Exchange (ETDEWEB)

    Harris, C.E.

    1994-09-01

    International technical experts in durability and damage tolerance of metallic airframe structures were assembled to present and discuss recent research findings and the development of advanced design and analysis methods, structural concepts, and advanced materials. The symposium focused on the dissemination of new knowledge and the peer-review of progress on the development of advanced methodologies. Papers were presented on: structural concepts for enhanced durability, damage tolerance, and maintainability; new metallic alloys and processing technology; fatigue crack initiation and small crack effects; fatigue crack growth models; fracture mechanics failure criteria for ductile materials; structural mechanics methodology for residual strength and life prediction; development of flight load spectra for design and testing; and advanced approaches to resist corrosion and environmentally assisted fatigue. Separate abstracts have been indexed for articles from this report.

  9. Some advanced parametric methods for assessing waveform distortion in a smart grid with renewable generation

    Science.gov (United States)

    Alfieri, Luisa

    2015-12-01

    Power quality (PQ) disturbances are becoming an important issue in smart grids (SGs) due to the significant economic consequences that they can generate on sensitive loads. However, SGs include several distributed energy resources (DERs) that can be interconnected to the grid with static converters, which lead to a reduction of the PQ levels. Among DERs, wind turbines and photovoltaic systems are expected to be used extensively due to the forecasted reduction in investment costs and other economic incentives. These systems can introduce significant time-varying voltage and current waveform distortions that require advanced spectral analysis methods to be used. This paper provides an application of advanced parametric methods for assessing waveform distortions in SGs with dispersed generation. In particular, the standard International Electrotechnical Commission (IEC) method, some parametric methods (such as Prony and Estimation of Signal Parameters by Rotational Invariance Technique (ESPRIT)), and some hybrid methods are critically compared on the basis of their accuracy and the computational effort required.
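Of the parametric methods named above, ESPRIT lends itself to a compact sketch: the signal subspace of a Hankel data matrix is shift-rotated to recover tone frequencies. The test signal below (tones at 50 Hz and 120 Hz with light noise) is illustrative, not taken from the paper:

```python
import numpy as np

def esprit_freqs(x, p, m=None):
    """Estimate p normalized frequencies from samples x via ESPRIT.
    m is the window length of the Hankel data matrix."""
    n = len(x)
    m = m or n // 2
    # Hankel matrix whose columns are sliding windows of the signal.
    H = np.array([x[i:i + m] for i in range(n - m + 1)]).T
    # Signal subspace: the first p left singular vectors.
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :p]
    # Rotational invariance: Us[1:] ≈ Us[:-1] @ Phi.
    Phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
    eig = np.linalg.eigvals(Phi)
    # Eigenvalue angles encode the per-sample phase increments.
    return np.sort(np.angle(eig) / (2 * np.pi))

fs = 1000.0
t = np.arange(512) / fs
rng = np.random.default_rng(0)
# Two complex tones at 50 Hz and 120 Hz plus a little noise.
x = (np.exp(2j * np.pi * 50.0 * t)
     + 0.5 * np.exp(2j * np.pi * 120.0 * t)
     + 0.01 * rng.standard_normal(512))
f = esprit_freqs(x, p=2) * fs   # estimated frequencies in Hz
```

Unlike the FFT, the resolution here is not limited to fs/N bins, which is why such methods suit short, time-varying records.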

  10. Human-system safety methods for development of advanced air traffic management systems

    International Nuclear Information System (INIS)

    Nelson, William R.

    1999-01-01

    The Idaho National Engineering and Environmental Laboratory (INEEL) is supporting the National Aeronautics and Space Administration in the development of advanced air traffic management (ATM) systems as part of the Advanced Air Transportation Technologies program. As part of this program INEEL conducted a survey of human-system safety methods that have been applied to complex technical systems, to identify lessons learned from these applications and provide recommendations for the development of advanced ATM systems. The domains that were surveyed included offshore oil and gas, commercial nuclear power, commercial aviation, and military. The survey showed that widely different approaches are used in these industries, and that the methods used range from very high-level, qualitative approaches to very detailed quantitative methods such as human reliability analysis (HRA) and probabilistic safety assessment (PSA). In addition, the industries varied widely in how effectively they incorporate human-system safety assessment in the design, development, and testing of complex technical systems. In spite of the lack of uniformity in the approaches and methods used, it was found that methods are available that can be combined and adapted to support the development of advanced air traffic management systems (author) (ml)

  11. Thermodynamic and economic evaluations of a geothermal district heating system using advanced exergy-based methods

    International Nuclear Information System (INIS)

    Tan, Mehmet; Keçebaş, Ali

    2014-01-01

    Highlights:
    • Evaluation of a GDHS using advanced exergy-based methods.
    • Comparison of the results of the conventional and advanced exergy-based methods.
    • The modified exergetic efficiency and exergoeconomic factor are found to be 45% and 13%.
    • Improvement and total cost-savings potentials are found to be 3% and 14%.
    • The pumps have the highest improvement potential and total cost-savings potential.
    Abstract: In this paper, a geothermal district heating system (GDHS) is comparatively evaluated in terms of thermodynamic and economic aspects using advanced exergy-based methods to identify the potential for improvement, the interactions among system components, and the direction and potential for energy savings. The actual operational data are taken from the Sarayköy GDHS, Turkey. In the advanced exergetic and exergoeconomic analyses, the exergy destruction and the total operating cost within each component of the system are split into endogenous/exogenous and unavoidable/avoidable parts. The advantages of these analyses over conventional ones are demonstrated. The results indicate that the advanced exergy-based method is a more meaningful and effective tool than the conventional one for system performance evaluation. The exergetic efficiency and the exergoeconomic factor of the overall system for the Sarayköy GDHS were determined to be 43.72% and 5.25% according to the conventional tools and 45.06% and 12.98% according to the advanced tools. The improvement potential and the total cost-savings potential of the overall system were also determined to be 2.98% and 14.05%, respectively. All of the pumps have the highest improvement potential and total cost-savings potential because the pumps were selected to have high power during installation at the Sarayköy GDHS.
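The endogenous/exogenous and unavoidable/avoidable splitting described above reduces to simple exergy bookkeeping per component. The numbers below are hypothetical, not the Sarayköy plant data:

```python
# Split of one component's exergy destruction (hypothetical values, kW):
# E_D = E_D_EN + E_D_EX (endogenous/exogenous)
# E_D = E_D_UN + E_D_AV (unavoidable/avoidable)
E_F = 500.0                 # fuel exergy supplied to the component
E_P = 350.0                 # product exergy delivered
E_D = E_F - E_P             # total exergy destruction -> 150 kW
eff = E_P / E_F             # conventional exergetic efficiency -> 0.70

E_D_EN = 110.0              # endogenous part (component operating alone)
E_D_EX = E_D - E_D_EN       # exogenous part caused by the other components
E_D_UN = 60.0               # unavoidable part (best achievable technology)
E_D_AV = E_D - E_D_UN       # avoidable part: the real improvement potential

# The modified efficiency judges the component against avoidable losses only,
# which is why the advanced figures in the abstract exceed the conventional ones.
eff_mod = E_P / (E_P + E_D_AV)
```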

  12. Functional efficiency comparison between split- and parallel-hybrid using advanced energy flow analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Guttenberg, Philipp; Lin, Mengyan [Romax Technology, Nottingham (United Kingdom)

    2009-07-01

    The following paper presents a comparative efficiency analysis of the Toyota Prius versus the Honda Insight using advanced Energy Flow Analysis methods. The sample study shows that even very different hybrid concepts like a split- and a parallel-hybrid can be compared in a high level of detail and demonstrates the benefit showing exemplary results. (orig.)

  13. Advanced Semi-Implicit Method (ASIM) for hyperbolic two-fluid model

    International Nuclear Information System (INIS)

    Lee, Sung Jae; Chung, Moon Sun

    2003-01-01

    Introducing the interfacial pressure jump terms, based on the surface tension, into the momentum equations of the two-phase two-fluid model turns the system of governing equations mathematically into a hyperbolic system. The eigenvalues of the equation system are then always real, representing the void wave and the pressure wave propagation speeds, as shown in the previous manuscript. To solve the interfacial pressure jump terms with void fraction gradients implicitly, the conventional semi-implicit method is modified with an intermediate iteration for the void fraction at a fractional time step. This Advanced Semi-Implicit Method (ASIM) then becomes stable without the conventional additive terms. As a consequence, when the interfacial pressure jump terms are included via the advanced semi-implicit method, the numerical solutions of typical two-phase problems are more stable and sound than those obtained by relying on other terms such as virtual mass or artificial viscosity.

  14. Launch Vehicle Design and Optimization Methods and Priority for the Advanced Engineering Environment

    Science.gov (United States)

    Rowell, Lawrence F.; Korte, John J.

    2003-01-01

    NASA's Advanced Engineering Environment (AEE) is a research and development program that will improve collaboration among design engineers for launch vehicle conceptual design and provide the infrastructure (methods and framework) necessary to enable that environment. In this paper, three major technical challenges facing the AEE program are identified, and three specific design problems are selected to demonstrate how advanced methods can improve current design activities. References are made to studies that demonstrate these design problems and methods, and these studies will provide the detailed information and check cases to support incorporation of these methods into the AEE. This paper provides background and terminology for discussing the launch vehicle conceptual design problem so that the diverse AEE user community can participate in prioritizing the AEE development effort.

  15. Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis

    Science.gov (United States)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.

    1998-01-01

    Stress that is induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.
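The simultaneous use of equilibrium and compatibility with forces as the primary unknowns can be shown on the smallest indeterminate structure: an axial bar fixed at both ends with a load at an interior node. The geometry and material values below are illustrative, not from the paper:

```python
import numpy as np

# Integrated-force-method style solve: the unknowns are the two member
# forces F1, F2 (tension positive), not the node displacement.
EA = 210e9 * 1e-4             # axial rigidity, N (steel, 1 cm^2, assumed)
L1, L2, P = 1.0, 2.0, 1000.0  # member lengths (m) and applied load (N)
k1, k2 = EA / L1, EA / L2     # member stiffnesses

# Row 1: node equilibrium   F1 - F2 = P
# Row 2: compatibility      F1/k1 + F2/k2 = 0 (total elongation vanishes
#                           because both ends of the bar are fixed)
S = np.array([[1.0, -1.0],
              [1.0 / k1, 1.0 / k2]])
F = np.linalg.solve(S, np.array([P, 0.0]))
# Closed form: F1 = P*k1/(k1+k2) (tension), F2 = -P*k2/(k1+k2) (compression).
```

The same two-equation pattern (equilibrium rows stacked on compatibility rows) scales to the full finite element setting described in the record.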

  16. Advanced methods for a probabilistic safety analysis of fires. Development of advanced methods for performing as far as possible realistic plant specific fire risk analysis (fire PSA)

    International Nuclear Information System (INIS)

    Hofer, E.; Roewekamp, M.; Tuerschmann, M.

    2003-07-01

    In the frame of the research project RS 1112 'Development of Methods for a Recent Probabilistic Safety Analysis, Particularly Level 2', funded by the German Federal Ministry of Economics and Technology (BMWi), advanced methods were to be developed, in particular for performing plant specific fire risk analyses (fire PSA) that are as realistic as possible. The present Technical Report gives an overview of the methodologies developed in this context for assessing the fire hazard. A probabilistic dynamics analysis with a fire simulation code, including an uncertainty and sensitivity study, was performed for an exemplary scenario of a cable fire induced by an electric cabinet inside the containment of a modern Konvoi type German nuclear power plant, taking into consideration the effects of fire detection and fire extinguishing means. With the present study, it was possible for the first time to determine the probabilities of specified fire effects from a class of fire events by means of probabilistic dynamics supplemented by uncertainty and sensitivity analyses. The analysis applies a deterministic dynamics model, consisting of a dynamic fire simulation code and a model of countermeasures, considering effects of stochastics (so-called aleatory uncertainties) as well as uncertainties in the state of knowledge (so-called epistemic uncertainties). By this means, probability assessments including uncertainties are provided for use within the PSA. (orig.)
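The separation of aleatory and epistemic uncertainties described above is commonly implemented as a two-loop Monte Carlo. The toy fire model and all numbers below are illustrative assumptions, not the GRS analysis:

```python
import random
import statistics

random.seed(3)

# Inner loop: aleatory (stochastic) variables, sampled for one fixed
# epistemic parameter value.
def damage_prob(hrr, n_aleatory=2000):
    failures = 0
    for _ in range(n_aleatory):
        growth = random.lognormvariate(0.0, 0.3)   # stochastic fire growth factor
        detect = random.expovariate(1.0 / 60.0)    # detection time, s (mean 60)
        # Assumed damage criterion: energy released before detection
        # exceeds a (hypothetical) threshold.
        if hrr * growth * detect > 30000.0:
            failures += 1
    return failures / n_aleatory

# Outer loop: epistemic (state-of-knowledge) uncertainty on the cabinet's
# heat release rate, here a uniform band in kW.
probs = [damage_prob(random.uniform(200.0, 600.0)) for _ in range(50)]
lo, hi = min(probs), max(probs)    # spread of the damage probability
mean_p = statistics.mean(probs)
```

The output is not a single damage probability but a distribution of probabilities, which is exactly the "probability assessments including uncertainties" the record refers to.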

  17. Elementary and advanced Lie algebraic methods with applications to accelerator design, electron microscopes, and light optics

    International Nuclear Information System (INIS)

    Dragt, A.J.

    1987-01-01

    A review is given of elementary Lie algebraic methods for treating Hamiltonian systems. This review is followed by a brief exposition of advanced Lie algebraic methods including resonance bases and conjugacy theorems. Finally, applications are made to the design of third-order achromats for use in accelerators, to the design of subangstroem resolution electron microscopes, and to the classification and study of high order aberrations in light optics. (orig.)

  18. An advanced analysis method of initial orbit determination with too short arc data

    Science.gov (United States)

    Li, Binzhe; Fang, Li

    2018-02-01

    This paper studies initial orbit determination (IOD) based on space-based angle measurements. Commonly, these space-based observations have short durations, so classical initial orbit determination algorithms, such as the Laplace and Gauss methods, give poor results. In this paper, an advanced analysis method of initial orbit determination is developed for space-based observations. The admissible region and triangulation are introduced in the method. A genetic algorithm is also used to impose constraints on the parameters. Simulation results show that the algorithm can successfully complete the initial orbit determination.
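The record does not detail the genetic algorithm; a generic GA that searches an admissible (bounded) parameter region for a minimum-residual orbit can be sketched as follows, with a hypothetical residual function standing in for the angles-only fit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical residual standing in for an angles-only orbit fit: pretend
# the best-fitting orbit has semi-major axis 7000 km and eccentricity 0.01.
def residual(p):
    a, e = p
    return ((a - 7000.0) / 1000.0) ** 2 + ((e - 0.01) * 10.0) ** 2

# Admissible region: bounds on semi-major axis (km) and eccentricity.
bounds = np.array([[6500.0, 45000.0],
                   [0.0, 0.9]])

def ga_minimize(f, bounds, pop_size=60, gens=200):
    dim = len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, dim))
    best = min(pop, key=f).copy()
    for _ in range(gens):
        fit = np.array([f(p) for p in pop])
        # Tournament selection: the better of two random individuals survives.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
        # Blend crossover between consecutive parents.
        alpha = rng.random((pop_size, dim))
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation scaled to 1% of each bound's span, then clamp
        # offspring back into the admissible region (the constraint step).
        children += 0.01 * rng.standard_normal((pop_size, dim)) * (bounds[:, 1] - bounds[:, 0])
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])
        gen_best = min(pop, key=f)
        if f(gen_best) < f(best):
            best = gen_best.copy()
        pop[0] = best               # elitism: keep the best-so-far alive
    return best

best = ga_minimize(residual, bounds)
```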

  19. Technology Alignment and Portfolio Prioritization (TAPP): Advanced Methods in Strategic Analysis, Technology Forecasting and Long Term Planning for Human Exploration and Operations, Advanced Exploration Systems and Advanced Concepts

    Science.gov (United States)

    Funaro, Gregory V.; Alexander, Reginald A.

    2015-01-01

    The Advanced Concepts Office (ACO) at NASA Marshall Space Flight Center is expanding its current technology assessment methodologies. ACO is developing a framework called TAPP that uses a variety of methods, such as association mining and rule learning from data mining, structure development using a Technological Innovation System (TIS), and social network modeling to measure structural relationships. The role of ACO is to 1) produce a broad spectrum of ideas and alternatives for a variety of NASA's missions, 2) determine mission architecture feasibility and appropriateness to NASA's strategic plans, and 3) define a project in enough detail to establish an initial baseline capable of meeting mission objectives. ACO's role supports the decision-making process associated with the maturation of concepts for traveling through, living in, and understanding space. ACO performs concept studies and technology assessments to determine the degree of alignment between mission objectives and new technologies. The first step in technology assessment is to identify the current technology maturity in terms of a technology readiness level (TRL). The second step is to determine the difficulty associated with advancing a technology from one state to the next. NASA has used TRLs since 1970 and ACO formalized them in 1995. The DoD, ESA, Oil & Gas, and DoE have adopted TRLs as a means to assess technology maturity. However, "with the emergence of more complex systems and system of systems, it has been increasingly recognized that TRL assessments have limitations, especially when considering [the] integration of complex systems." When performing the second step in a technology assessment, NASA requires that an Advancement Degree of Difficulty (AD2) method be utilized. NASA has developed or used a variety of methods to perform this step: Expert Opinion or Delphi Approach, Value Engineering or Value Stream, Analytical Hierarchy Process (AHP), Technique for the Order of

  20. Advanced aircraft service life monitoring method via flight-by-flight load spectra

    Science.gov (United States)

    Lee, Hongchul

    This research is an effort to understand the current method and to propose an advanced method of Damage Tolerance Analysis (DTA) for the purpose of monitoring aircraft service life. As one of the tasks in the DTA, the current indirect Individual Aircraft Tracking (IAT) method for the F-16C/D Block 32 does not properly represent changes in flight usage severity affecting structural fatigue life. Therefore, an advanced aircraft service life monitoring method based on flight-by-flight load spectra is proposed and recommended for the IAT program to track consumed fatigue life, as an alternative to the current method based on the crack severity index (CSI) value. Damage tolerance is one of the aircraft design philosophies that ensure aging aircraft satisfy structural reliability in terms of fatigue failures throughout their service periods. The IAT program, one of the most important tasks of the DTA, is able to track potential structural crack growth at critical areas in the major airframe structural components of an individual aircraft. The F-16C/D aircraft is equipped with a flight data recorder to monitor flight usage and provide the data to support structural load analysis. However, the limited memory of the flight data recorder allows the user to monitor individual aircraft fatigue usage only in terms of the vertical inertia (NzW) data used to calculate the Crack Severity Index (CSI) value, which defines the relative maneuver severity. The current IAT method for the F-16C/D Block 32, based on the CSI value calculated from NzW, is shown to be not accurate enough to monitor individual aircraft fatigue usage due to several problems. The proposed advanced aircraft service life monitoring method based on flight-by-flight load spectra is recommended as an improved method for the F-16C/D Block 32 aircraft. Flight-by-flight load spectra were generated from downloaded Crash Survival Flight Data Recorder (CSFDR) data by calculating loads for each time hack in selected flight data utilizing loads equations. From
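Tracking consumed fatigue life from a flight-by-flight load spectrum typically involves integrating a crack-growth law over the per-flight stress cycles. The sketch below uses the standard Paris law with illustrative constants and spectrum, not F-16 data:

```python
import math

# Paris law: da/dN = C * (dK)^m, with dK = Y * dS * sqrt(pi * a).
# C, m, Y and the stress spectrum are illustrative assumptions.
C, m, Y = 1.0e-12, 3.0, 1.12        # C in (m/cycle)/(MPa*sqrt(m))^m

def grow_crack(a0, flights, spectrum):
    """Integrate crack length a (m) over repeated flight-by-flight spectra."""
    a = a0
    for _ in range(flights):
        for dS in spectrum:          # one stress range (MPa) per load cycle
            dK = Y * dS * math.sqrt(math.pi * a)
            a += C * dK ** m         # crack increment for this cycle
    return a

# A hypothetical flight: a few severe maneuvers plus many mild cycles (MPa).
flight_spectrum = [180.0, 150.0] + [80.0] * 20
a_final = grow_crack(a0=0.001, flights=5000, spectrum=flight_spectrum)
```

Because growth is integrated cycle by cycle, a fleet aircraft flown more severely accumulates crack length faster than one tracked only through an averaged severity index, which is the record's core argument.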

  1. The role of advanced MR methods in the diagnosis of cerebral amyloidoma.

    Science.gov (United States)

    Nossek, Erez; Bashat, Dafna Ben; Artzi, Moran; Rosenberg, Keren; Lichter, Irith; Shtern, Orit; Ami, Haim Ben; Aizenstein, Orna; Vlodavsky, Euvgeni; Constantinescu, Marius; Ram, Zvi

    2009-01-01

    Amyloidoma is a term referring to a tumor-like deposition of extracellular insoluble fibrillar protein. Tumor-like amyloid formation in the brain has been described in isolated cases; however, no advanced radiological studies to characterize these lesions have been reported. In this report, we describe a 59-year-old woman who presented several months prior to diagnosis with memory decline, dizziness, walking instability, and speech difficulties. MRI revealed a left basal ganglia lesion with an intraventricular component. The patient underwent a stereotactic biopsy, which confirmed the diagnosis of amyloidoma. An extensive radiographic characterization of the amyloidoma using advanced MR techniques was performed, including magnetic resonance spectroscopy, dynamic susceptibility contrast, susceptibility-weighted imaging (SWI), and magnetization transfer ratio (MTR). All advanced MR techniques were able to characterize the amyloidoma as a non-neoplastic process. This is an example of how such methods can be used for the differential diagnosis of atypical brain lesions.

  2. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Science.gov (United States)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries that have a hierarchical tree-like topology with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain, in convex polygon shape, can be extracted at each level in an advancing scheme. In this paper, several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation and the implementation of the method.

  3. GREY STATISTICS METHOD OF TECHNOLOGY SELECTION FOR ADVANCED PUBLIC TRANSPORTATION SYSTEMS

    Directory of Open Access Journals (Sweden)

    Chien Hung WEI

    2003-01-01

    Taiwan is involved in intelligent transportation systems planning and is now selecting its priority focus areas for investment and development. The high social and economic impact of the choice of intelligent transportation systems technologies explains the efforts of various electronics and transportation corporations to develop such technologies and expand their business opportunities. However, no detailed research has been conducted with regard to selecting technologies for advanced public transportation systems in Taiwan. Thus, the present paper demonstrates a grey statistics method integrated with a scenario method for solving the problem of selecting advanced public transportation systems technology for Taiwan. A comprehensive questionnaire survey was conducted to demonstrate the effectiveness of the grey statistics method. The proposed approach indicated that contactless smart card technology is the appropriate technology for Taiwan to develop in the near future. The significance of our research results implies that the grey statistics method is an effective method for selecting advanced public transportation systems technologies. We feel our information will be beneficial to the private sector for developing an appropriate intelligent transportation systems technology strategy.

  4. Balancing of linkages and robot manipulators advanced methods with illustrative examples

    CERN Document Server

    Arakelian, Vigen

    2015-01-01

    In this book advanced balancing methods for planar and spatial linkages, hand operated and automatic robot manipulators are presented. It is organized into three main parts and eight chapters. The main parts are the introduction to balancing, the balancing of linkages and the balancing of robot manipulators. The review of state-of-the-art literature including more than 500 references discloses particularities of shaking force/moment balancing and gravity compensation methods. Then new methods for balancing of linkages are considered. Methods provided in the second part of the book deal with the partial and complete shaking force/moment balancing of various linkages. A new field for balancing methods applications is the design of mechanical systems for fast manipulation. Special attention is given to the shaking force/moment balancing of robot manipulators. Gravity balancing methods are also discussed. The suggested balancing methods are illustrated by numerous examples.

  5. Method Verification Requirements for an Advanced Imaging System for Microbial Plate Count Enumeration.

    Science.gov (United States)

    Jones, David; Cundell, Tony

    2018-01-01

    The Growth Direct™ System that automates the incubation and reading of membrane filtration microbial counts on soybean-casein digest, Sabouraud dextrose, and R2A agar differs from the traditional method only in that micro-colonies on the membrane are counted using an advanced imaging system up to 50% earlier in the incubation. Based on the recommendations in USP Validation of New Microbiological Testing Methods, the system may be implemented in a microbiology laboratory after simple method verification and not a full method validation. LAY ABSTRACT: The Growth Direct™ System that automates the incubation and reading of microbial counts on membranes on solid agar differs from the traditional method only in that micro-colonies on the membrane are counted using an advanced imaging system up to 50% earlier in the incubation time. Based on the recommendations in USP Validation of New Microbiological Testing Methods, the system may be implemented in a microbiology laboratory after simple method verification and not a full method validation. © PDA, Inc. 2018.

  6. Study of thermodynamic and structural properties of a flexible homopolymer chain using advanced Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Hammou Amine Bouziane

    2013-03-01

    We study the thermodynamic and structural properties of a flexible homopolymer chain using both the multicanonical Monte Carlo method and the Wang-Landau method. In this work, we focus on the coil-globule transition. Starting from a completely random chain, we have obtained a globule for different sizes of the chain. The implementation of these advanced Monte Carlo methods allowed us to obtain a flat histogram in energy space and to calculate various thermodynamic quantities such as the density of states, the free energy and the specific heat. Structural quantities such as the radius of gyration were also calculated.
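The Wang-Landau method mentioned above can be demonstrated on a system whose density of states is known exactly, such as the sum of two dice (a stand-in for the polymer energy landscape). For brevity this sketch uses a fixed sweep count per stage; a production implementation would also check histogram flatness before each reduction of the modification factor:

```python
import math
import random

random.seed(0)

# Wang-Landau estimate of the density of states g(E) for two dice,
# E = d1 + d2. Exactly, g(E) = 1,2,3,4,5,6,5,4,3,2,1 for E = 2..12,
# so g(7)/g(2) = 6.
log_g = {E: 0.0 for E in range(2, 13)}
state = [1, 1]
log_f = 1.0                                 # modification factor ln(f)

while log_f > 1e-4:
    for _ in range(20000):
        E_old = sum(state)
        i = random.randrange(2)
        saved = state[i]
        state[i] = random.randint(1, 6)     # propose re-rolling one die
        E_new = sum(state)
        # Accept with probability min(1, g(E_old) / g(E_new)).
        diff = log_g[E_old] - log_g[E_new]
        if diff < 0 and random.random() >= math.exp(diff):
            state[i] = saved                # reject: restore the old state
            E_new = E_old
        log_g[E_new] += log_f               # refine the visited energy level
    log_f /= 2.0                            # reduce the modification factor

ratio = math.exp(log_g[7] - log_g[2])       # should approach 6
```

Because the acceptance rule penalizes frequently visited energies, the random walk produces the flat energy histogram the abstract refers to, and g(E) yields the free energy and specific heat by reweighting.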

  7. Curing Characterisation of Spruce Tannin-based Foams using the Advanced Isoconversional Method

    Directory of Open Access Journals (Sweden)

    Matjaž Čop

    2014-06-01

    The curing kinetics of foam prepared from the tannin of spruce tree bark was investigated using differential scanning calorimetry (DSC) and the advanced isoconversional method. An analysis of the formulations with differing amounts of components (furfuryl alcohol, glycerol, tannin, and a catalyst) showed that curing was delayed with increasing proportions of glycerol or tannin. An optimum amount of the catalyst constituent was also found during the study. The curing of the foam system was accelerated with increasing temperature. Finally, the advanced isoconversional method, based on the model-free kinetic algorithm developed by Vyazovkin, appeared to be an appropriate model for the characterisation of the curing kinetics of tannin-based foams.
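Isoconversional (model-free) analysis extracts the activation energy from curves recorded at several heating rates without assuming a reaction model. The sketch below uses the differential (Friedman) variant on synthetic first-order data with assumed kinetic constants, rather than Vyazovkin's integral algorithm itself:

```python
import math

R = 8.314
A, E = 1.0e10, 80000.0          # assumed pre-exponential (1/s) and E (J/mol)

def rate_at_alpha(beta, alpha_target, dt=0.01):
    """Integrate da/dt = A*exp(-E/RT)*(1-a) under heating T = 300 + beta*t
    and return (T, da/dt) when the conversion first reaches alpha_target."""
    alpha, t = 0.0, 0.0
    while alpha < alpha_target:
        T = 300.0 + beta * t
        r = A * math.exp(-E / (R * T)) * (1.0 - alpha)
        alpha += r * dt          # explicit Euler step
        t += dt
    return T, r

# Friedman analysis at fixed conversion alpha = 0.5 for three heating rates.
points = [rate_at_alpha(beta, 0.5) for beta in (0.05, 0.1, 0.2)]  # K/s
xs = [1.0 / T for T, _ in points]
ys = [math.log(r) for _, r in points]

# Least-squares slope of ln(da/dt) vs 1/T gives -E/R at this conversion.
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (n * sum(x * x for x in xs) - sum(xs) ** 2)
E_est = -slope * R               # recovered activation energy, J/mol
```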

  8. Calculation of Hydrodynamic Characteristics of Weis-Fogh Type Water Turbine Using the Advanced Vortex Method

    International Nuclear Information System (INIS)

    Ro, Ki Deok

    2014-01-01

    In this study, the hydrodynamic characteristics of Weis-Fogh type water turbine were calculated by the advanced vortex method. The wing (NACA0010 airfoil) and both channel walls were approximated by source and vortex panels, and free vortices are introduced away from the body surfaces. The distance from the trailing edge of the wing to the wing axis, the width of the water channel and the maximum opening angle were selected as the calculation parameters, the important design factors. The maximum efficiency and the power coefficient for one wing of this water turbine were 26% and 0.4 at velocity ratio U/V = 2.0 respectively. The flow field of this water turbine is very complex because the wing moves unsteadily in the channel. However, using the advanced vortex method, it could be calculated accurately

  9. Calculation of Hydrodynamic Characteristics of Weis-Fogh Type Water Turbine Using the Advanced Vortex Method

    Energy Technology Data Exchange (ETDEWEB)

    Ro, Ki Deok [Gyeongsang Nat' l Univ., Jinju (Korea, Republic of)

    2014-03-15

    In this study, the hydrodynamic characteristics of Weis-Fogh type water turbine were calculated by the advanced vortex method. The wing (NACA0010 airfoil) and both channel walls were approximated by source and vortex panels, and free vortices are introduced away from the body surfaces. The distance from the trailing edge of the wing to the wing axis, the width of the water channel and the maximum opening angle were selected as the calculation parameters, the important design factors. The maximum efficiency and the power coefficient for one wing of this water turbine were 26% and 0.4 at velocity ratio U/V = 2.0 respectively. The flow field of this water turbine is very complex because the wing moves unsteadily in the channel. However, using the advanced vortex method, it could be calculated accurately.

  10. Application of advanced data reduction methods to gas turbine dynamic analysis

    International Nuclear Information System (INIS)

    Juhl, P.B.

    1978-01-01

    This paper discusses the application of advanced data reduction methods to the evaluation of dynamic data from gas turbines and turbine components. The use of the Fast Fourier Transform and of real-time spectrum analyzers is discussed. The use of power spectral density and probability density functions for analyzing random data is discussed. Examples of the application of these modern techniques to gas turbine testing are presented. The use of the computer to automate the data reduction procedures is discussed. (orig.)
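The FFT and power spectral density workflow described above can be sketched for a synthetic vibration record (the 120 Hz tone and the noise level are illustrative assumptions):

```python
import numpy as np

# Periodogram-style PSD of a synthetic vibration signal: one 120 Hz tone
# (a stand-in for a rotor-related component) buried in broadband noise.
fs = 2048.0
t = np.arange(4096) / fs
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 120.0 * t) + 0.5 * rng.standard_normal(t.size)

window = np.hanning(x.size)          # taper to reduce spectral leakage
X = np.fft.rfft(x * window)
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)

# One-sided PSD normalized by the window power (units: amplitude^2 / Hz).
psd = (np.abs(X) ** 2) / (fs * np.sum(window ** 2))
psd[1:-1] *= 2.0                     # fold negative-frequency power

peak = freqs[np.argmax(psd)]         # dominant vibration frequency, Hz
```

Averaging such periodograms over successive record segments (Welch's method) is the usual next step for the random-data analysis the record mentions.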

  11. Turbulence-cascade interaction noise using an advanced digital filter method

    OpenAIRE

    Gea Aguilera, Fernando; Gill, James; Zhang, Xin; Nodé-Langlois, Thomas

    2016-01-01

    Fan wakes interacting with outlet guide vanes is a major source of noise in modern turbofan engines. In order to study this source of noise, the current work presents two-dimensional simulations of turbulence-cascade interaction noise using a computational aeroacoustic methodology. An advanced digital filter method is used for the generation of isotropic synthetic turbulence in a linearised Euler equation solver. A parameter study is presented to assess the influence of airfoil thickness, mea...
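The digital filter approach generates synthetic turbulence by filtering white noise so that it acquires a prescribed correlation length. A 1-D sketch using the common Gaussian filter coefficients (an assumption; the paper's advanced filter differs) is:

```python
import numpy as np

rng = np.random.default_rng(4)

# Target integral length scale, in grid cells, of the synthetic turbulence.
n_len = 10
N = 2 * n_len                          # filter half-width
k = np.arange(-N, N + 1)

# Gaussian filter coefficients; normalizing sum(b^2) = 1 preserves variance.
b = np.exp(-np.pi * k ** 2 / (2.0 * n_len ** 2))
b /= np.sqrt(np.sum(b ** 2))

# Filter a white-noise sequence to obtain a correlated velocity fluctuation.
white = rng.standard_normal(4000 + 2 * N)
u = np.convolve(white, b, mode="valid")

var = u.var()                          # should remain close to 1
```

The filtered series keeps unit variance but is now correlated over roughly n_len samples, which is what lets a linearised Euler solver ingest it as incoming isotropic turbulence.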

  12. MSFC Advanced Concepts Office and the Iterative Launch Vehicle Concept Method

    Science.gov (United States)

    Creech, Dennis

    2011-01-01

    This slide presentation reviews the work of the Advanced Concepts Office (ACO) at Marshall Space Flight Center (MSFC), with particular emphasis on the method used to model launch vehicles using INTegrated ROcket Sizing (INTROS), a modeling system that assists in establishing the launch concept design and stage sizing, and that facilitates the integration of exterior analytic efforts, vehicle architecture studies, and technology and system trades and parameter sensitivities.
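INTROS itself is not described in detail here, but the core of any launch vehicle sizing loop is the rocket equation applied stage by stage. The sketch below sizes a hypothetical two-stage vehicle from the top down; all Isp, structural-fraction and delta-v values are assumptions, not INTROS data:

```python
import math

g0 = 9.80665

# Per stage: vacuum Isp (s), structural fraction (dry mass / propellant
# mass), and the delta-v (m/s) allocated to that stage.
stages = [(300.0, 0.08, 4500.0),     # stage 1 (booster)
          (450.0, 0.10, 4800.0)]     # stage 2 (upper)
payload = 10000.0                    # kg

def size_vehicle(payload, stages):
    """Size stages top-down: each stage must push everything above it."""
    mass_above = payload
    for isp, sf, dv in reversed(stages):
        ve = isp * g0
        r = math.exp(dv / ve)        # required mass ratio m0/mf
        # m0 = mass_above + mp*(1+sf), mf = mass_above + sf*mp.
        # Solving r = m0/mf for the propellant mass mp:
        mp = mass_above * (r - 1.0) / (1.0 - sf * (r - 1.0))
        mass_above += mp * (1.0 + sf)
    return mass_above                # gross lift-off mass, kg

glom = size_vehicle(payload, stages)
```

Wrapping this closure in an outer loop over trajectory losses, engine trades and structural fractions is, in spirit, what an integrated sizing tool iterates on.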

  13. Linking advanced biofuels policies with stakeholder interests: A method building on Quality Function Deployment

    International Nuclear Information System (INIS)

    Schillo, R. Sandra; Isabelle, Diane A.; Shakiba, Abtin

    2017-01-01

The field of renewable energy policy is inherently complex due to the long-term impacts of its policies, the broad range of potential stakeholders, the intricacy of scientific, engineering and technological developments, and the interplay of complex policy mixes that may result in unintended consequences. Quality Function Deployment (QFD) provides a systematic consideration of all relevant stakeholders, a rigorous analysis of stakeholders' needs, and a prioritization of design features based on those needs. We build on QFD combined with the Analytical Hierarchy Process (AHP) to develop a novel method applied to the area of advanced biofuel policies. This Multi-Stakeholder Policy QFD (MSP QFD) provides a systematic approach to capture the voice of the stakeholders and align it with the broad range of potential advanced biofuels policies. To account for the policy environment, the MSP QFD utilizes a novel approach to stakeholder importance weights. The MSP QFD adds to the literature as it permits the analysis of the broad range of relevant national policies with regard to the development of advanced biofuels, as compared to more narrowly focused typical QFD applications. It also allows policy developers to gain additional insights into the perceived impacts of policies, as well as international comparisons. - Highlights: • Advanced biofuels are mostly still in research and early commercialization stages. • Government policies are expected to support biofuels stakeholders in market entry. • A Multi-Stakeholder Policy QFD (MSP QFD) links biofuels policies with stakeholders. • MSP QFD employs a novel stakeholder weights method. • The case of advanced biofuels in Canada shows the comparative importance of policies.
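The AHP ingredient of such a method can be sketched as follows; the three criteria and the pairwise judgments in the matrix are hypothetical, not taken from the study:

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix over three stakeholder
# criteria (say: market readiness, emissions benefit, feedstock cost);
# A[i, j] encodes how strongly criterion i dominates criterion j (Saaty scale)
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])

# AHP priority vector = principal eigenvector of A, normalised to sum to 1
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()

# Saaty consistency index: CI = (lambda_max - n) / (n - 1); small is good
n = A.shape[0]
ci = (np.max(np.real(vals)) - n) / (n - 1)
```

In an MSP-QFD-style analysis, weight vectors like `w` (one per stakeholder group) would then scale the stakeholder-need rows of the QFD relationship matrix.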

  14. Development and application of a probabilistic evaluation method for advanced process technologies

    Energy Technology Data Exchange (ETDEWEB)

    Frey, H.C.; Rubin, E.S.

    1991-04-01

The objective of this work is to develop and apply a method for research planning for advanced process technologies. To satisfy requirements for research planning, it is necessary to: (1) identify robust solutions to process design questions in the face of uncertainty to eliminate inferior design options; (2) identify key problem areas in a technology that should be the focus of further research to reduce the risk of technology failure; (3) compare competing technologies on a consistent basis to determine the risks associated with adopting a new technology; and (4) evaluate the effects that additional research might have on comparisons with conventional technology. An important class of process technologies is electric power plants. In particular, advanced clean coal technologies are expected to play a key role in the energy and environmental future of the US, as well as in other countries. Research planning for advanced clean coal technology development is an important part of energy and environmental policy. Thus, the research planning method developed here is applied to case studies focusing on a specific clean coal technology. The purpose of the case studies is both to demonstrate the research planning method and to obtain technology-specific conclusions regarding research strategies.

  15. Development and application of a probabilistic evaluation method for advanced process technologies. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Frey, H.C.; Rubin, E.S.

    1991-04-01

The objective of this work is to develop and apply a method for research planning for advanced process technologies. To satisfy requirements for research planning, it is necessary to: (1) identify robust solutions to process design questions in the face of uncertainty to eliminate inferior design options; (2) identify key problem areas in a technology that should be the focus of further research to reduce the risk of technology failure; (3) compare competing technologies on a consistent basis to determine the risks associated with adopting a new technology; and (4) evaluate the effects that additional research might have on comparisons with conventional technology. An important class of process technologies is electric power plants. In particular, advanced clean coal technologies are expected to play a key role in the energy and environmental future of the US, as well as in other countries. Research planning for advanced clean coal technology development is an important part of energy and environmental policy. Thus, the research planning method developed here is applied to case studies focusing on a specific clean coal technology. The purpose of the case studies is both to demonstrate the research planning method and to obtain technology-specific conclusions regarding research strategies.

  16. Advanced computational tools and methods for nuclear analyses of fusion technology systems

    International Nuclear Information System (INIS)

    Fischer, U.; Chen, Y.; Pereslavtsev, P.; Simakov, S.P.; Tsige-Tamirat, H.; Loughlin, M.; Perel, R.L.; Petrizzi, L.; Tautges, T.J.; Wilson, P.P.H.

    2005-01-01

    An overview is presented of advanced computational tools and methods developed recently for nuclear analyses of Fusion Technology systems such as the experimental device ITER ('International Thermonuclear Experimental Reactor') and the intense neutron source IFMIF ('International Fusion Material Irradiation Facility'). These include Monte Carlo based computational schemes for the calculation of three-dimensional shut-down dose rate distributions, methods, codes and interfaces for the use of CAD geometry models in Monte Carlo transport calculations, algorithms for Monte Carlo based sensitivity/uncertainty calculations, as well as computational techniques and data for IFMIF neutronics and activation calculations. (author)

  17. Features of an advanced human reliability analysis method, AGAPE-ET

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Jung, Won Dea; Park, Jin Kyun

    2005-01-01

This paper presents the main features of an advanced human reliability analysis (HRA) method, AGAPE-ET. It has the capabilities to deal with the diagnosis failures and the errors of commission (EOC), which have not been normally treated in the conventional HRAs. For the analysis of the potential for diagnosis failures, an analysis framework, which is called the misdiagnosis tree analysis (MDTA), and a taxonomy of the misdiagnosis causes with appropriate quantification schemes are provided. For the identification of the EOC events from the misdiagnosis, some procedural guidance is given. An example of the application of the method is also provided.

  18. An Analysis and Quantification Method of Human Errors of Soft Controls in Advanced MCRs

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jae Whan; Jang, Seung Cheol

    2011-01-01

In this work, a method was proposed for quantifying human errors that may occur during operation execution using soft controls. Soft controls of advanced main control rooms (MCRs) have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to define the human error modes and to quantify the error probabilities in order to evaluate the reliability of the system and prevent errors. This work suggests a modified K-HRA method for quantifying error probability.

  19. Features of an advanced human reliability analysis method, AGAPE-ET

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Jung, Won Dea; Park, Jin Kyun [Korea Atomic Energy Research Institute, Taejeon (Korea, Republic of)

    2005-11-15

    This paper presents the main features of an advanced human reliability analysis (HRA) method, AGAPE-ET. It has the capabilities to deal with the diagnosis failures and the errors of commission (EOC), which have not been normally treated in the conventional HRAs. For the analysis of the potential for diagnosis failures, an analysis framework, which is called the misdiagnosis tree analysis (MDTA), and a taxonomy of the misdiagnosis causes with appropriate quantification schemes are provided. For the identification of the EOC events from the misdiagnosis, some procedural guidance is given. An example of the application of the method is also provided.

  20. Selenium contaminated waters: An overview of analytical methods, treatment options and recent advances in sorption methods.

    Science.gov (United States)

    Santos, Sílvia; Ungureanu, Gabriela; Boaventura, Rui; Botelho, Cidália

    2015-07-15

Selenium is an essential trace element for many organisms, including humans, but it is bioaccumulative and toxic at higher than homeostatic levels. Both selenium deficiency and toxicity are problems around the world. Mines, coal-fired power plants, oil refineries and agriculture are important examples of anthropogenic sources, generating contaminated waters and wastewaters. For reasons of human health and ecotoxicity, selenium concentration has to be controlled in drinking-water and in wastewater, as it is a potential pollutant of water bodies. This review article provides firstly a general overview about selenium distribution, sources, chemistry, toxicity and environmental impact. Analytical techniques used for Se determination and speciation and water and wastewater treatment options are reviewed. In particular, published works on adsorption as a treatment method for Se removal from aqueous solutions are critically analyzed. Recent published literature has given particular attention to the development and search for effective adsorbents, including low-cost alternative materials. Published works mostly consist of exploratory findings and laboratory-scale experiments. Binary metal oxides and LDHs (layered double hydroxides) have presented excellent adsorption capacities for selenium species. Unconventional sorbents (algae, agricultural wastes and other biomaterials), in raw or modified forms, have also led to very interesting results with the advantage of their availability and low cost. Some directions to be considered in future works are also suggested.

  1. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
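Of the four methods, Sobol's is the easiest to sketch compactly; a pick-freeze Monte Carlo estimator of first-order indices for a toy additive model (not SAC-SMA; the model and sample sizes are invented) might look like:

```python
import numpy as np

def sobol_first_order(model, d, n, rng):
    """Pick-freeze Monte Carlo estimator of first-order Sobol indices
    for a model with d independent U(0,1) inputs."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA = model(A)
    f0 = yA.mean()
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]              # "freeze" input i from sample A
        yC = model(C)
        S[i] = (np.mean(yA * yC) - f0**2) / var
    return S

# toy additive model: analytic first-order indices are 16/21, 4/21, 1/21
model = lambda X: 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2]
S = sobol_first_order(model, d=3, n=100_000, rng=np.random.default_rng(2))
```

For a lumped watershed model the `model` callable would wrap a full simulation plus an objective function, which is why Sobol's method is robust but computationally demanding.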

  2. Impact of different treatment methods on survival in advanced pancreatic cancer

    International Nuclear Information System (INIS)

    Brasiuniene, B.; Juozaityte, E.; Barauskas, G.

    2005-01-01

The aim of the study was to evaluate the impact of different treatment methods on the survival of patients treated for advanced pancreatic cancer at Kaunas University of Medicine Hospital from 1987 to 2003. Data on 262 patients with advanced pancreatic cancer treated from 1987 to 2003 were analyzed retrospectively. Four groups of patients were analyzed. One hundred eighty patients underwent palliative bypass, endoscopic bile duct stenting, or observation alone. Forty-three patients, in addition to surgery, were treated by radiotherapy. Twenty-five patients received gemcitabine in standard doses and schedules. Fourteen patients received concomitant chemoradiotherapy (with gemcitabine or 5-fluorouracil). All patients were grouped by treatment method and median survival was analyzed. Median survival of patients treated by palliative surgery only or observation alone was 1.9 months, and for patients treated by palliative surgery and radiotherapy it was 6.1 months (p=0.00007). Median survival of patients treated with gemcitabine was 9.5 months (p<0.001), and median survival of patients treated with concomitant chemoradiotherapy was 8.5 months (p=0.00003). Patients diagnosed with advanced pancreatic cancer should, in addition to surgical treatment, be treated by chemotherapy, concomitant chemoradiotherapy or radiotherapy. (author)

  3. Advanced image based methods for structural integrity monitoring: Review and prospects

    Science.gov (United States)

    Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.

    2018-02-01

There is a growing trend in engineering to develop methods for structural integrity monitoring and characterization of the in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics brought about a paradigm change in phenomena sensing. Hence, several widely applicable optical approaches are playing a significant role in support of experimentation. The current review describes advanced image based methods for structural integrity monitoring, and focuses on methods such as Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI) and Speckle Pattern Shearing Interferometry (Shearography). These non-contact full-field techniques rely on intensive image processing to measure mechanical behaviour, and evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.

  4. Advances in mixed-integer programming methods for chemical production scheduling.

    Science.gov (United States)

    Velez, Sara; Maravelias, Christos T

    2014-01-01

    The goal of this paper is to critically review advances in the area of chemical production scheduling over the past three decades and then present two recently proposed solution methods that have led to dramatic computational enhancements. First, we present a general framework and problem classification and discuss modeling and solution methods with an emphasis on mixed-integer programming (MIP) techniques. Second, we present two solution methods: (a) a constraint propagation algorithm that allows us to compute parameters that are then used to tighten MIP scheduling models and (b) a reformulation that introduces new variables, thus leading to effective branching. We also present computational results and an example illustrating how these methods are implemented, as well as the resulting enhancements. We close with a discussion of open research challenges and future research directions.
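As a toy illustration of the constraint-propagation idea in (a), demand can be propagated backwards through a recipe to obtain minimum batch counts, which are valid cuts on the binary start variables of a scheduling MIP; the two-stage recipe, capacities and demand figures below are invented:

```python
import math

# Hypothetical two-stage recipe: raw -> INT (reaction) -> PROD (finishing).
# beta_max: largest allowed batch size of each task, in tonnes.
beta_max = {"reaction": 60.0, "finishing": 40.0}
int_per_prod = 1.0          # tonnes of INT consumed per tonne of PROD
demand_prod = 190.0         # customer demand on the final product

# Propagate demand backwards to a valid lower bound on batch counts;
# in the MIP these become cuts of the form sum_t X[task, t] >= n_min.
n_min_finishing = math.ceil(demand_prod / beta_max["finishing"])
demand_int = demand_prod * int_per_prod
n_min_reaction = math.ceil(demand_int / beta_max["reaction"])
```

Cuts like these do not change the feasible schedules, but they tighten the linear relaxation, which is the mechanism behind the computational enhancements the review describes.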

  5. Advances in dynamic and mean field games theory, applications, and numerical methods

    CERN Document Server

    Viscolani, Bruno

    2017-01-01

This contributed volume considers recent advances in dynamic games and their applications, based on presentations given at the 17th Symposium of the International Society of Dynamic Games, held July 12-15, 2016, in Urbino, Italy. Written by experts in their respective disciplines, these papers cover various aspects of dynamic game theory including mean-field games, stochastic and pursuit-evasion games, and computational methods for dynamic games. Topics covered include: pedestrian flow in crowded environments; models for climate change negotiations; Nash equilibria for dynamic games involving Volterra integral equations; differential games in healthcare markets; linear-quadratic Gaussian dynamic games; and aircraft control in wind shear conditions. Advances in Dynamic and Mean-Field Games presents state-of-the-art research in a wide spectrum of areas. As such, it serves as a testament to the continued vitality and growth of the field of dynamic games and their applications. It will be of interest to an interdisciplinar...

  6. Methods for measuring the spectral reflectivity of advanced materials at high temperature

    International Nuclear Information System (INIS)

    Salikhov, T.P.; Kan, V.V.

    1993-01-01

For investigations in the domain of advanced materials, as well as for new technologies, there is an urgent need for knowledge of the spectral reflectivity of the materials, especially at high temperatures. However, the methods available are mostly intended for measuring model materials with specular or diffuse reflection surfaces. This is not quite correct, since advanced materials have mixed specular-diffuse reflection surfaces. New methods for reflectivity measurements of materials in the visible, near and middle infrared range at high temperature, regardless of surface texture, have been developed. The advantages of the methods proposed are as follows: (a) the facility of performing reflectivity measurements for materials with mixed specular-diffuse reflectance; (b) a wide spectral range, 0.38-8 micro m; (c) a wide temperature range, 300-3000 K; (d) high accuracy and rapid measurements. The methods are based on the following principles: (i) diffuse irradiation of the sample surface and the use of the Helmholtz reciprocity principle to determine the directional hemispherical reflectivity; (ii) pulse polychromatic probing of the sample by an additional light source. The first principle excludes the influence of the angular reflection distribution of the sample surface on the data obtained. The second principle gives the possibility of simultaneous reflectivity measurements over a wide spectral range. On the basis of these principles, high-temperature reflectometers have been developed and are discussed here. (author)

  7. The promise of mixed-methods for advancing Latino health research.

    Science.gov (United States)

    Apesoa-Varano, Ester Carolina; Hinton, Ladson

    2013-09-01

Mixed-methods research in the social sciences has been conducted for quite some time. More recently, mixed methods have become popular in health research, with the National Institutes of Health leading the impetus to fund studies that implement such an approach. The public health issues facing us today are great, and they range from policy and other macro-level issues, to systems-level problems, to individuals' health behaviors. For Latinos, who are projected to become the largest minority group bearing a great deal of the burden of social inequality in the U.S., it is important to understand the deeply rooted nature of these health disparities in order to close the gap in health outcomes. Mixed methodology thus holds promise for advancing research on Latino health by tackling health disparities from a variety of standpoints and approaches. The aim of this manuscript is to provide two examples of mixed-methods research, each of which addresses a health topic of considerable importance to older Latinos and their families. These two examples will illustrate a) the complementary use of qualitative and quantitative methods to advance the health of older Latinos in an area that is important from a public health perspective, and b) the "translation" of findings from observational studies (informed by social science and medicine) to the development and testing of interventions.

  8. Locally advanced cancer of the tongue base: new method of surgical treatment

    Directory of Open Access Journals (Sweden)

    I. A. Zaderenko

    2018-01-01

Introduction. Patients present with locally advanced tumors in 70-80 % of cases, so the possibility of cure and of surgical treatment is limited. Total glossectomy and tongue base resection are associated with severe and permanent disability. Such surgical procedures lead to severe dysphagia, alalia and social maladjustment. These issues motivated us to develop a new method of surgical treatment of locally advanced base of tongue cancer. Objective: to introduce new opportunities for surgical treatment of locally advanced cancer of the tongue base. Materials and methods. Glossectomy was accomplished in 5 patients suffering from tongue cancer and admitted to the N.N. Blokhin National Medical Research Center of Oncology. Swallowing and speech were preserved in all 5 cases. Results. The main advantage of the proposed method is that the cut-out muscle flap has a different innervation from different cranial nerves involved in the act of swallowing, so there is not just a mechanical movement of the epiglottis, but also control of swallowing by the central nervous system. The reduction of injury and operation time in the proposed method is due to the fact that tissues directly contacting the defect are used to preserve swallowing and speech. The proposed muscle flap has various sources of blood supply, which improves its nutrition and reduces the risk of complications, and healing occurs in a shorter time in comparison with the prototype. All of the above reduces the duration of hospitalization by an average of 7-9 days. Conclusion. The developed surgical technique allows early rehabilitation to be achieved; patients are able to breathe effortlessly, swallow and speak. There is no need for a permanent tracheostoma or a percutaneous endoscopic gastrostomy tube. All patients remain socially active.

  9. The Water-Energy-Food Nexus: Advancing Innovative, Policy-Relevant Methods

    Science.gov (United States)

    Crootof, A.; Albrecht, T.; Scott, C. A.

    2017-12-01

    The water-energy-food (WEF) nexus is rapidly expanding in scholarly literature and policy settings as a novel way to address complex Anthropocene challenges. The nexus approach aims to identify tradeoffs and synergies of water, energy, and food systems, internalize social and environmental impacts, and guide development of cross-sectoral policies. However, a primary limitation of the nexus approach is the absence - or gaps and inconsistent use - of adequate methods to advance an innovative and policy-relevant nexus approach. This paper presents an analytical framework to identify robust nexus methods that align with nexus thinking and highlights innovative nexus methods at the frontier. The current state of nexus methods was assessed with a systematic review of 245 journal articles and book chapters. This review revealed (a) use of specific and reproducible methods for nexus assessment is uncommon - less than one-third of the reviewed studies present explicit methods; (b) nexus methods frequently fall short of capturing interactions among water, energy, and food - the very concept they purport to address; (c) assessments strongly favor quantitative approaches - 70% use primarily quantitative tools; (d) use of social science methods is limited (26%); and (e) many nexus methods are confined to disciplinary silos - only about one-quarter combine methods from diverse disciplines and less than one-fifth utilize both quantitative and qualitative approaches. Despite some pitfalls of current nexus methods, there are a host of studies that offer innovative approaches to help quantify nexus linkages and interactions among sectors, conceptualize dynamic feedbacks, and support mixed method approaches to better understand WEF systems. Applying our analytical framework to all 245 studies, we identify, and analyze herein, seventeen studies that implement innovative multi-method and cross-scalar tools to demonstrate promising advances toward improved nexus assessment. 

  10. An Advanced Actuator Line Method for Wind Energy Applications and Beyond: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Churchfield, Matthew; Schreck, Scott; Martinez-Tossas, Luis A.; Meneveau, Charles; Spalart, Philippe R.

    2017-03-24

    The actuator line method to represent rotor aerodynamics within computational fluid dynamics has been in use for over a decade. This method applies a body force to the flow field along rotating lines corresponding to the individual rotor blades and employs tabular airfoil data to compute the force distribution. The actuator line method is attractive because compared to blade-resolved simulations, the required mesh is much simpler and the computational cost is lower. This work proposes a higher fidelity variant of the actuator line method meant to fill the space between current actuator line and blade-resolved simulations. It contains modifications in two key areas. The first is that of freestream velocity vector estimation along the line, which is necessary to compute the lift and drag along the line using tabular airfoil data. Most current methods rely on point sampling in which the location of sampling is ambiguous. Here we test a velocity sampling method that uses a properly weighted integral over space, removing this ambiguity. The second area of improvement is the function used to project the one-dimensional actuator line force onto the three-dimensional fluid mesh as a body force. We propose and test a projection function that spreads the force over a region that looks something like a real blade with the hope that it will produce the blade local and near wake flow features with more accuracy and higher fidelity. Our goal is that between these two improvements, not only will the flow field predictions be enhanced, but also the spanwise loading will be made more accurate. We refer to this combination of improvements as the advanced actuator line method. We apply these improvements to two different wind turbine cases. Although there is a strong wind energy motivation in our work, there is no reason these advanced actuator line ideas cannot be used in other applications, such as helicopter rotors.
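The projection step can be illustrated in one dimension with the standard Gaussian regularisation kernel (this is the common baseline kernel, not the blade-shaped projection function proposed in the paper; all numbers are illustrative):

```python
import numpy as np

def project_line_force(x, x_act, F, eps):
    """Spread a point force F located at x_act onto 1-D grid points x
    using the Gaussian kernel common in actuator line methods:
    eta(r) = exp(-(r/eps)^2) / (eps * sqrt(pi)), which integrates to 1."""
    eta = np.exp(-((x - x_act) / eps) ** 2) / (eps * np.sqrt(np.pi))
    return F * eta                 # body-force density along x

dx = 0.05
x = np.arange(-5.0, 5.0 + dx, dx)
f = project_line_force(x, x_act=0.3, F=2.0, eps=0.4)
total = np.sum(f) * dx             # discrete integral should recover F
```

The width `eps` controls how diffuse the blade appears to the flow solver; the paper's contribution is, in effect, to replace the isotropic `eta` above with a function shaped more like a real blade.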

  11. An Advanced Actuator Line Method for Wind Energy Applications and Beyond

    Energy Technology Data Exchange (ETDEWEB)

    Churchfield, Matthew J.; Schreck, Scott; Martinez-Tossas, Luis A.; Meneveau, Charles; Spalart, Philippe R.

    2017-01-09

    The actuator line method to represent rotor aerodynamics within computational fluid dynamics has been in use for over a decade. This method applies a body force to the flow field along rotating lines corresponding to the individual rotor blades and employs tabular airfoil data to compute the force distribution. The actuator line method is attractive because compared to blade-resolved simulations, the required mesh is much simpler and the computational cost is lower. This work proposes a higher fidelity variant of the actuator line method meant to fill the space between current actuator line and blade-resolved simulations. It contains modifications in two key areas. The first is that of freestream velocity vector estimation along the line, which is necessary to compute the lift and drag along the line using tabular airfoil data. Most current methods rely on point sampling in which the location of sampling is ambiguous. Here we test a velocity sampling method that uses a properly weighted integral over space, removing this ambiguity. The second area of improvement is the function used to project the one-dimensional actuator line force onto the three-dimensional fluid mesh as a body force. We propose and test a projection function that spreads the force over a region that looks something like a real blade with the hope that it will produce the blade local and near wake flow features with more accuracy and higher fidelity. Our goal is that between these two improvements, not only will the flow field predictions be enhanced, but also the spanwise loading will be made more accurate. We refer to this combination of improvements as the advanced actuator line method. We apply these improvements to two different wind turbine cases. Although there is a strong wind energy motivation in our work, there is no reason these advanced actuator line ideas cannot be used in other applications, such as helicopter rotors.

  12. Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves

    Directory of Open Access Journals (Sweden)

    Shukui Liu

    2011-03-01

Typical results obtained by a newly developed, nonlinear time-domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in the way of combining a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of application of the method to the seakeeping performance of a standard containership, namely the ITTC S175, are herein presented. Comparisons have been made between the results from the present method, the frequency-domain 3D panel method (NEWDRIFT) of NTUA-SDL and available experimental data, and good agreement has been observed for all studied cases.
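The "simple double integration with respect to time" can be illustrated on a 1-DOF heave oscillator driven by a harmonic wave force; the coefficients are invented, and the hybrid method's hydrodynamic force models are replaced here by closed-form restoring, damping and excitation terms:

```python
import numpy as np

# 1-DOF heave model m*z'' + c*z' + k*z = F0*sin(omega*t); illustrative values
m, c, k = 1.0e5, 2.0e4, 4.0e5      # mass, damping, restoring stiffness
omega, F0 = 1.5, 5.0e4             # wave frequency (rad/s), force amplitude
dt, T = 0.01, 200.0

z, w = 0.0, 0.0                    # heave displacement and velocity
zs = []
for step in range(int(T / dt)):
    t = step * dt
    a = (F0 * np.sin(omega * t) - c * w - k * z) / m   # acceleration
    w += a * dt                    # first integration in time: velocity
    z += w * dt                    # second integration in time: position
    zs.append(z)

# steady-state amplitude, read off the last quarter of the simulation
amp = max(abs(v) for v in zs[int(0.75 * len(zs)):])
```

For this linear test case the simulated amplitude can be checked against the analytic value F0 / sqrt((k - m*omega^2)^2 + (c*omega)^2), which is a useful sanity check before coupling in nonlinear incident-wave and hydrostatic terms.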

  13. A complex method of equipment replacement planning. An advanced plan for the replacement of medical equipment.

    Science.gov (United States)

    Dondelinger, Robert M

    2004-01-01

This complex method of equipment replacement planning is a methodology; it is a means to an end, a process that focuses on equipment most in need of replacement, rather than the end itself. It uses data available from the maintenance management database, and attempts to quantify those subjective items important in making equipment replacement decisions. Like the simple method of the last issue, it is a starting point--albeit an advanced starting point--which the user can modify to fit their particular organization, but the complex method leaves room for expansion. It is based on sound logic and documented facts, is fully defensible during the decision-making process, and will provide a structure for your equipment replacement planning decisions that will serve your organization well.
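A minimal sketch of the kind of weighted scoring such a methodology might compute from maintenance-database fields (the factor names, weights and fleet data below are invented, not the article's actual scheme):

```python
# Illustrative fleet records pulled from a maintenance management database:
# age, lifetime maintenance cost as a fraction of acquisition cost, downtime
FLEET = [
    {"id": "infusion-pump-07", "age_yr": 9, "cost_ratio": 0.85, "downtime_hr": 40},
    {"id": "ventilator-03",    "age_yr": 4, "cost_ratio": 0.20, "downtime_hr": 6},
    {"id": "monitor-12",       "age_yr": 7, "cost_ratio": 0.55, "downtime_hr": 18},
]
WEIGHTS = {"age_yr": 0.3, "cost_ratio": 0.5, "downtime_hr": 0.2}   # sum to 1
SCALE = {"age_yr": 10.0, "cost_ratio": 1.0, "downtime_hr": 50.0}   # normalisers

def priority(item):
    """Weighted replacement-priority score in [0, 1]; higher means the
    item is a stronger replacement candidate."""
    return sum(WEIGHTS[k] * min(item[k] / SCALE[k], 1.0) for k in WEIGHTS)

ranked = sorted(FLEET, key=priority, reverse=True)
```

The value of such a scheme lies less in the particular weights than in making the subjective factors explicit and auditable, which is what makes the resulting rankings defensible.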

  14. Printing, folding and assembly methods for forming 3D mesostructures in advanced materials

    Science.gov (United States)

    Zhang, Yihui; Zhang, Fan; Yan, Zheng; Ma, Qiang; Li, Xiuling; Huang, Yonggang; Rogers, John A.

    2017-03-01

    A rapidly expanding area of research in materials science involves the development of routes to complex 3D structures with feature sizes in the mesoscopic range (that is, between tens of nanometres and hundreds of micrometres). A goal is to establish methods for controlling the properties of materials systems and the function of devices constructed with them, not only through chemistry and morphology, but also through 3D architectures. The resulting systems, sometimes referred to as metamaterials, offer engineered behaviours with optical, thermal, acoustic, mechanical and electronic properties that do not occur in the natural world. Impressive advances in 3D printing techniques represent some of the most broadly recognized developments in this field, but recent successes with strategies based on concepts in origami, kirigami and deterministic assembly provide additional, unique options in 3D design and high-performance materials. In this Review, we highlight the latest progress and trends in methods for fabricating 3D mesostructures, beginning with the development of advanced material inks for nozzle-based approaches to 3D printing and new schemes for 3D optical patterning. In subsequent sections, we summarize more recent methods based on folding, rolling and mechanical assembly, including their application with materials such as designer hydrogels, monocrystalline inorganic semiconductors and graphene.

  15. CHF predictor derived from a 3D thermal-hydraulic code and an advanced statistical method

    International Nuclear Information System (INIS)

    Banner, D.; Aubry, S.

    2004-01-01

    A rod bundle CHF predictor has been determined by using a 3D code (THYC) to compute local thermal-hydraulic conditions at the boiling crisis location. These local parameters have been correlated to the critical heat flux by using an advanced statistical method based on spline functions. The main characteristics of the predictor are presented in conjunction with a detailed analysis of predictions (P/M ratio) in order to prove that the usual safety methodology can be applied with such a predictor. A thermal-hydraulic design criterion is obtained (1.13) and the predictor is compared with the WRB-1 correlation. (author)

  16. Systems and methods for advanced ultra-high-performance InP solar cells

    Science.gov (United States)

    Wanlass, Mark

    2017-03-07

    Systems and Methods for Advanced Ultra-High-Performance InP Solar Cells are provided. In one embodiment, an InP photovoltaic device comprises: a p-n junction absorber layer comprising at least one InP layer; a front surface confinement layer; and a back surface confinement layer; wherein either the front surface confinement layer or the back surface confinement layer forms part of a High-Low (HL) doping architecture; and wherein either the front surface confinement layer or the back surface confinement layer forms part of a heterointerface system architecture.

  17. Advances in research methods for information systems research data mining, data envelopment analysis, value focused thinking

    CERN Document Server

    Osei-Bryson, Kweku-Muata

    2013-01-01

    Advances in social science research methodologies and data analytic methods are changing the way research in information systems is conducted. New developments in statistical software technologies for data mining (DM) such as regression splines or decision tree induction can be used to assist researchers in systematic post-positivist theory testing and development. Established management science techniques like data envelopment analysis (DEA), and value focused thinking (VFT) can be used in combination with traditional statistical analysis and data mining techniques to more effectively explore
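    The abstract names decision tree induction among the data mining techniques for post-positivist theory testing. As a purely illustrative sketch (no specific tool or dataset from the record is implied), a one-level tree, i.e. a decision stump, can be induced on a numeric feature by choosing the threshold that maximizes information gain:

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def best_stump(xs, ys):
    """Induce a one-level decision tree (stump) on a single numeric
    feature by maximizing information gain over candidate thresholds.
    Returns (gain, threshold); splits are x <= threshold vs x > threshold."""
    base = entropy(ys)
    best = (0.0, None)  # (gain, threshold)
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # degenerate split, skip
        w = len(left) / len(ys)
        gain = base - (w * entropy(left) + (1 - w) * entropy(right))
        if gain > best[0]:
            best = (gain, t)
    return best

# toy data: a clean class boundary between 3 and 10
xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
gain, threshold = best_stump(xs, ys)  # perfect split at threshold 3, gain 1 bit
```

    Full tree induction recurses on each side of the split; libraries used in practice add pruning and multi-feature search on top of this same gain criterion.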

  18. Advanced fabrication method for the preparation of MOF thin films: Liquid-phase epitaxy approach meets spin coating method.

    KAUST Repository

    Chernikova, Valeriya

    2016-07-14

    Here we report a new and advanced method for the fabrication of highly oriented/polycrystalline metal-organic framework (MOF) thin films. Building on the attractive features of the liquid-phase epitaxy (LPE) approach, a facile spin coating method was implemented to generate MOF thin films in a high-throughput fashion. Advantageously, this approach offers a great prospect for cost-effectively constructing thin films with a significantly shortened preparation time and reduced consumption of chemicals and solvents, as compared to the conventional LPE process. This new spin-coating approach has been implemented successfully to construct various MOF thin films, ranging in thickness from a few micrometers down to the nanometer scale, spanning 2-D and 3-D benchmark MOF materials including Cu2(bdc)2•xH2O, Zn2(bdc)2•xH2O, HKUST-1 and ZIF-8. This method was appraised and proved effective on a variety of substrates comprising functionalized gold, silicon, glass, porous stainless steel and aluminum oxide. The facile, high-throughput and cost-effective nature of this approach, coupled with the successful thin film growth and substrate versatility, represents the next generation of methods for MOF thin film fabrication, thereby paving the way for these unique MOF materials to address a wide range of challenges in the areas of sensing devices and membrane technology.

  19. Sensitivity analysis of infectious disease models: methods, advances and their application

    Science.gov (United States)

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, but infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
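    Of the global SA methods the abstract surveys, the Morris method is the simplest to sketch: it screens parameters by averaging the magnitudes of "elementary effects", i.e. finite-difference responses to one-at-a-time perturbations from random base points. A minimal stdlib-only illustration (the toy model below is hypothetical, not the cholera or schistosomiasis model of the paper):

```python
import random

def morris_screening(model, bounds, r=20, delta=0.1, seed=0):
    """Estimate the mean absolute elementary effect (mu*) per parameter.

    model:  f(list[float]) -> float, taking parameters in physical units
    bounds: list of (lo, hi) ranges, one per parameter
    r:      number of random base points
    delta:  perturbation step in the unit hypercube
    """
    rng = random.Random(seed)
    k = len(bounds)
    scale = lambda u: [lo + ui * (hi - lo) for ui, (lo, hi) in zip(u, bounds)]
    effects = [[] for _ in range(k)]
    for _ in range(r):
        # random base point, kept clear of the upper edge so u + delta stays in range
        u = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        fu = model(scale(u))
        for i in range(k):
            up = list(u)
            up[i] += delta  # one-at-a-time perturbation of parameter i
            effects[i].append(abs(model(scale(up)) - fu) / delta)
    return [sum(e) / len(e) for e in effects]

# hypothetical linear model: strongly driven by p[0], weakly by p[1], not at all by p[2]
toy = lambda p: 10.0 * p[0] + 1.0 * p[1] + 0.0 * p[2]
mu_star = morris_screening(toy, [(0, 1), (0, 1), (0, 1)], r=50)
# for this linear model mu* recovers the coefficient magnitudes: [10, 1, 0]
```

    Ranking parameters by mu* identifies which ones merit the more expensive variance-based (Sobol'-type) analysis.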

  20. Advanced display object selection methods for enhancing user-computer productivity

    Science.gov (United States)

    Osga, Glenn A.

    1993-01-01

    The User-Interface Technology Branch at NCCOSC RDT&E Division has been conducting a series of studies to address the suitability of commercial off-the-shelf (COTS) graphic user-interface (GUI) methods for efficiency and performance in critical naval combat systems. This paper presents an advanced selection algorithm and method developed to increase user performance when making selections on tactical displays. The method has also been applied with considerable success to a variety of cursor and pointing tasks. Typical GUIs allow user selection by: (1) moving a cursor with a pointing device such as a mouse, trackball, joystick, or touchscreen; and (2) placing the cursor on the object. Examples of GUI objects are the buttons, icons, folders, scroll bars, etc. used in many personal computer and workstation applications. This paper presents an improved method of selection and the theoretical basis for the significant performance gains achieved with various input devices tested. The method is applicable to all GUI styles and display sizes, and is particularly useful for selections on small screens such as notebook computers. Considering the amount of work-hours spent pointing and clicking across all styles of available graphic user-interfaces, the cost/benefit in applying this method to graphic user-interfaces is substantial, with the potential for increasing productivity across thousands of users and applications.
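    The record does not specify the selection algorithm itself. One common family of techniques for reducing pointing time is "snapping": selecting the nearest target within a capture radius rather than requiring the cursor to land exactly on the object. The sketch below is a hypothetical illustration of that general idea, not the paper's algorithm:

```python
import math

def snap_select(cursor, targets, capture_radius=40.0):
    """Return the index of the target nearest the cursor, provided it lies
    within capture_radius (pixels); return None if nothing is close enough.
    Hypothetical sketch of a snap-to-target selection aid."""
    best_i, best_d = None, capture_radius
    for i, (tx, ty) in enumerate(targets):
        d = math.hypot(tx - cursor[0], ty - cursor[1])
        if d <= best_d:
            best_i, best_d = i, d
    return best_i

# cursor near the first target; the third is just outside the capture radius
idx = snap_select((100, 100), [(105, 98), (300, 300), (90, 140)])
```

    Enlarging the effective target area this way reduces the precision demanded of the pointing movement, which is the broad mechanism behind the performance gains such aids report.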

  1. Advanced methods comparisons of reaction rates in the Purdue Fast Breeder Blanket Facility

    International Nuclear Information System (INIS)

    Hill, R.N.; Ott, K.O.

    1988-01-01

    A review of worldwide results revealed that reaction rates in the blanket region are generally underpredicted with the discrepancy increasing with penetration; however, these results vary widely. Experiments in the large uniform Purdue Fast Breeder Blanket Facility (FBBF) blanket yield an accurate quantification of this discrepancy. Using standard production code methods (diffusion theory with 50 group cross sections), a consistent Calculated/Experimental (C/E) drop-off was observed for various reaction rates. A 50% increase in the calculated results at the outer edge of the blanket is necessary for agreement with experiments. The usefulness of refined group constant generation utilizing specialized weighting spectra and transport theory methods in correcting this discrepancy was analyzed. Refined group constants reduce the discrepancy to half that observed using the standard method. The surprising result was that transport methods had no effect on the blanket deviations; thus, transport theory considerations do not constitute or even contribute to an explanation of the blanket discrepancies. The residual blanket C/E drop-off (about half the standard drop-off) using advanced methods must be caused by some approximations which are applied in all current methods. 27 refs., 3 figs., 1 tab

  2. Advanced methods for the study of PWR cores; Les methodes d'etudes avancees pour les coeurs de REP

    Energy Technology Data Exchange (ETDEWEB)

    Lambert, M.; Salvatores, St.; Ferrier, A. [Electricite de France (EDF), Service Etudes et Projets Thermiques et Nucleaires, 92 - Courbevoie (France); Pelet, J.; Nicaise, N.; Pouliquen, J.Y.; Foret, F. [FRAMATOME ANP, 92 - Paris La Defence (France); Chauliac, C. [CEA Saclay, Dir. de l' Energie Nucleaire (DEN), 91 - Gif sur Yvette (France); Johner, J. [CEA Cadarache, Dept. de Recherches sur la Fusion Controlee (DRFC), 13 - Saint Paul lez Durance (France); Cohen, Ch

    2003-07-01

    This document gathers the transparencies presented at the 6. technical session of the French nuclear energy society (SFEN) in October 2003. The transparencies of the annual meeting are presented in the introductive part: 1 - status of the French nuclear park: nuclear energy results, management of an exceptional climatic situation: the heat wave of summer 2003 and the power generation (J.C. Barral); 2 - status of the research on controlled thermonuclear fusion (J. Johner). Then follows the technical session about the advanced methods for the study of PWR reactor cores: 1 - the evolution approach of study methodologies (M. Lambert, J. Pelet); 2 - the point of view of the nuclear safety authority (D. Brenot); 3 - the improved decoupled methodology for the steam pipe rupture (S. Salvatores, J.Y. Pouliquen); 4 - the MIR method for the pellet-clad interaction (renovated IPG methodology) (E. Baud, C. Royere); 5 - the improved fuel management (IFM) studies for Koeberg (C. Cohen); 6 - principle of the methods of accident study implemented for the European pressurized reactor (EPR) (F. Foret, A. Ferrier); 7 - accident studies with the EPR, steam pipe rupture (N. Nicaise, S. Salvatores); 8 - the co-development platform, a new generation of software tools for the new methodologies (C. Chauliac). (J.S.)

  3. Alternative oil extraction methods from Echium plantagineum L. seeds using advanced techniques and green solvents.

    Science.gov (United States)

    Castejón, Natalia; Luna, Pilar; Señoráns, Francisco J

    2018-04-01

    The edible oil processing industry involves large losses of organic solvent into the atmosphere and long extraction times. In this work, fast and environmentally friendly alternatives for the production of echium oil using green solvents are proposed. Advanced extraction techniques such as Pressurized Liquid Extraction (PLE), Microwave Assisted Extraction (MAE) and Ultrasound Assisted Extraction (UAE) were evaluated to efficiently extract omega-3 rich oil from Echium plantagineum seeds. Extractions were performed with ethyl acetate, ethanol, water and ethanol:water to develop a hexane-free processing method. Optimal PLE conditions with ethanol at 150 °C for 10 min produced a very similar oil yield (31.2%) to Soxhlet using hexane for 8 h (31.3%). The optimized UAE method with ethanol at mild conditions (55 °C) produced a high oil yield (29.1%). Consequently, advanced extraction techniques showed good lipid yields and, furthermore, the produced echium oil had the same omega-3 fatty acid composition as traditionally extracted oil. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A study on the economics enhancement of OPR1000 applied to advanced construction methods

    International Nuclear Information System (INIS)

    Park, Ki Jo; Yoon, Eun Sang

    2007-01-01

    OPR1000 (Optimized Power Reactor 1000MW) is a thoroughly improved design model of Korean nuclear power plants and the latest 1,000MW nuclear power plant in the Republic of Korea. Shin Kori 1 and 2 and Shin Wolsong 1 and 2 are under construction, and these are OPR1000 types. Although OPR1000 is an up-to-date 1,000MW nuclear power plant, it is not yet markedly superior to other nuclear power plants. Under the WTO and FTA circumstances of domestic and stiff overseas competition for nuclear power plants, it is necessary to enhance the economics of OPR1000. Accordingly, alternatives for enhancing its economics are reviewed and advanced construction methods are considered. Based on research and a comprehensive review of nuclear power plant construction experience, an alternative application of advanced construction methods is developed and compared with the existing OPR1000 for schedule and economics. In this paper, economic analyses of construction cost and levelized electricity generation cost are performed.

  5. ARN Training Course on Advance Methods for Internal Dose Assessment: Application of Ideas Guidelines

    International Nuclear Information System (INIS)

    Rojo, A.M.; Gomez Parada, I.; Puerta Yepes, N.; Gossio, S.

    2010-01-01

    Dose assessment in case of internal exposure involves the estimation of committed effective dose based on the interpretation of bioassay measurements, and on assumptions about the characteristics of the radioactive material and the time pattern and pathway of intake. The IDEAS Guidelines provide a method to harmonize dose evaluations using criteria and flow chart procedures to be followed step by step. The EURADOS Working Group 7 'Internal Dosimetry', in collaboration with IAEA and the Czech Technical University (CTU) in Prague, promoted the 'EURADOS/IAEA Regional Training Course on Advanced Methods for Internal Dose Assessment: Application of IDEAS Guidelines', which took place in Prague (Czech Republic) from 2-6 February 2009, to broaden and encourage the use of the IDEAS Guidelines. The ARN recognized the relevance of this training and requested a place in this activity. Subsequently, the first training course in Argentina took place from 24-28 August to train local internal dosimetry experts. (authors)

  6. Recent advances in computational methods and clinical applications for spine imaging

    CERN Document Server

    Glocker, Ben; Klinder, Tobias; Li, Shuo

    2015-01-01

    This book contains the full papers presented at the MICCAI 2014 workshop on Computational Methods and Clinical Applications for Spine Imaging. The workshop brought together scientists and clinicians in the field of computational spine imaging. The chapters included in this book present and discuss new advances and challenges in these fields, using several methods and techniques in order to address more efficiently different and timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image-based modeling, simulation and surgical planning, image-guided robot-assisted surgery, and image-based diagnosis. The book also includes papers and reports from the first challenge on vertebra segmentation held at the workshop.

  7. Research advances in theories and methods of community assembly and succession

    Directory of Open Access Journals (Sweden)

    WenJun Zhang

    2014-09-01

    Community succession refers to the regular and predictable process of species replacement in an environment from which all species have been eliminated or which has been disturbed. Community assembly is the process by which species grow and interact to establish a community. Community assembly stresses the change of a community over a single phase. So far, many theories and methods have been proposed for community assembly and succession. In the present article I introduce research advances in the theories and methods of community assembly and succession. Finally, continuing my past propositions, I further propose a unified theory and methodology for community assembly and succession. I suggest that community assembly and succession is a process of self-organization and follows the major principles and mechanisms of self-organization. Agent-based modeling is suggested for describing the dynamics of community assembly and succession.

  8. Application of advanced statistical methods in assessment of the late phase of a nuclear accident

    International Nuclear Information System (INIS)

    Hofman, R.

    2008-01-01

    The paper presents a new methodology for improving estimates of the radiological situation on terrain in the late phase of a nuclear accident. Methods of Bayesian filtering are applied to the problem. The estimates are based on a combination of modeled and measured data provided by the responsible authorities. Exploiting information on the uncertainty of both data sources, we are able to produce an improved estimate of the true situation on terrain. We also attempt to account for model error, which is unknown and plays a crucial role in the accuracy of the estimates. The main contribution of this paper is the application of an approach based on advanced statistical methods, which allows the model error covariance structure to be estimated from measurements. Model error is estimated on the basis of measured-minus-modeled residuals. The methodology is demonstrated on a sample scenario with simulated measurements. (authors)
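    The core idea of the abstract, inferring model error from measured-minus-modeled residuals and then blending the two data sources by their uncertainties, can be sketched in a scalar form. This is a generic minimum-variance (Kalman-style) illustration under the assumption of independent model and measurement errors, not the paper's full covariance methodology:

```python
import statistics

def estimate_model_error_var(modeled, measured, meas_var):
    """Estimate model error variance from measured-minus-modeled residuals.

    Assuming independent model and measurement errors,
    Var(residual) = model_var + meas_var, so
    model_var = Var(residual) - meas_var (floored at zero)."""
    residuals = [y - m for m, y in zip(modeled, measured)]
    total_var = statistics.pvariance(residuals)
    return max(total_var - meas_var, 0.0)

def blend(model_value, measurement, model_var, meas_var):
    """Minimum-variance combination of a model prediction and a measurement
    at the same location; the gain weights the more certain source more."""
    gain = model_var / (model_var + meas_var)
    return model_value + gain * (measurement - model_value)

# toy residual series: model predicts 0 everywhere, measurements scatter around it
model_var = estimate_model_error_var([0, 0, 0, 0], [1, -1, 1, -1], meas_var=0.25)
improved = blend(0.0, 1.0, model_var, 0.25)
```

    In the full problem the scalar variances become spatial covariance matrices, but the weighting logic is the same.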

  10. Recent advances in the identification and authentication methods of edible bird's nest.

    Science.gov (United States)

    Lee, Ting Hun; Wani, Waseem A; Koay, Yin Shin; Kavita, Supparmaniam; Tan, Eddie Ti Tjih; Shreaz, Sheikh

    2017-10-01

    Edible bird's nest (EBN) is an expensive animal bioproduct due to its reputation as a food and delicacy with diverse medicinal properties. One kilogram of EBN costs ~$6000 in China. EBN and its products are consumed mostly in Asian countries such as China, Hong Kong, Taiwan, Singapore, Malaysia, Indonesia, Vietnam and Thailand, which together make up almost one third of the world's population. The rapid growth in EBN consumption has led to a large rise in the scale of its global trade. Presently, various fake materials such as tremella fungus, pork skin, karaya gum, fish swimming bladder, jelly, agar, monosodium glutamate and egg white are used to adulterate EBN for extra profit. Adulterated or fake EBN may be hazardous to consumers. Thus, it is necessary to identify the adulterants. Several sophisticated techniques based on genetics, immunochemistry, spectroscopy, chromatography and gel electrophoresis have been used for the detection of various types of adulterants in EBN. This article describes the recent advances in authentication methods for EBN. Different genetic, immunochemical, spectroscopic and analytical methods are discussed, including genetic (DNA) based techniques, enzyme-linked immunosorbent assays, Fourier transform infrared and Raman spectroscopic techniques, and chromatographic and gel electrophoretic methods. Besides, the significance of the reported methods for applications in the EBN industry is described. Finally, efforts have been made to discuss the challenges and future perspectives of authentication methods for EBN. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Aquatic ecosystem protection and restoration: Advances in methods for assessment and evaluation

    Science.gov (United States)

    Bain, M.B.; Harig, A.L.; Loucks, D.P.; Goforth, R.R.; Mills, K.E.

    2000-01-01

    Many methods and criteria are available to assess aquatic ecosystems, and this review focuses on a set that demonstrates advancements from community analyses to methods spanning large spatial and temporal scales. Basic methods have been extended by incorporating taxa sensitivity to different forms of stress, adding measures linked to system function, synthesizing multiple faunal groups, integrating biological and physical attributes, spanning large spatial scales, and enabling simulations through time. These tools can be customized to meet the needs of a particular assessment and ecosystem. Two case studies are presented to show how new methods were applied at the ecosystem scale for achieving practical management goals. One case used an assessment of biotic structure to demonstrate how enhanced river flows can improve habitat conditions and restore a diverse fish fauna reflective of a healthy riverine ecosystem. In the second case, multitaxonomic integrity indicators were successful in distinguishing lake ecosystems that were disturbed, healthy, and in the process of restoration. Most methods strive to address the concept of biological integrity, and assessment effectiveness can often be impeded by the lack of more specific ecosystem management objectives. Scientific and policy explorations are needed to define new ways of designating a healthy system so as to allow specification of precise quality criteria that will promote further development of ecosystem analysis tools.
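    The multimetric integrity indicators mentioned in the second case study typically score each metric against a reference-condition range and aggregate the scores into a single index. The following is a generic IBI-style sketch with hypothetical metric names and reference ranges, not the specific indices used in the cited case studies:

```python
def multimetric_index(metrics, reference):
    """Score each metric as the fraction of its reference range achieved
    (clamped to [0, 1]), then average into a 0-100 integrity index.
    Metric names and reference ranges here are illustrative only."""
    scores = []
    for name, value in metrics.items():
        lo, hi = reference[name]
        frac = (value - lo) / (hi - lo)
        scores.append(100.0 * min(max(frac, 0.0), 1.0))
    return sum(scores) / len(scores)

# hypothetical site data vs. reference-condition ranges
index = multimetric_index(
    {"native_species": 12, "intolerant_taxa": 3, "pct_top_carnivores": 10.0},
    {"native_species": (0, 20), "intolerant_taxa": (0, 5), "pct_top_carnivores": (0, 20.0)},
)
```

    Real indices additionally adjust expectations for stream size and region, but the normalize-and-aggregate structure is the common core.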

  12. NATO Advanced Research Workshop, 19-22 May 1997: Rapid Method for Monitoring the Environment for Biological Hazards

    National Research Council Canada - National Science Library

    1997-01-01

    The NATO Advanced Research Workshop met for the purpose of bringing to light rapid methods for monitoring the environment for biological hazards such as biological warfare agents, naturally occurring...

  13. Setting health research priorities using the CHNRI method: IV. Key conceptual advances.

    Science.gov (United States)

    Rudan, Igor

    2016-06-01

    Child Health and Nutrition Research Initiative (CHNRI) started as an initiative of the Global Forum for Health Research in Geneva, Switzerland. Its aim was to develop a method that could assist priority setting in health research investments. The first version of the CHNRI method was published in 2007-2008. The aim of this paper was to summarize the history of the development of the CHNRI method and its key conceptual advances. The guiding principle of the CHNRI method is to expose the potential of many competing health research ideas to reduce disease burden and inequities that exist in the population in a feasible and cost-effective way. The CHNRI method introduced three key conceptual advances that led to its increased popularity in comparison to other priority-setting methods and processes. First, it proposed a systematic approach to listing a large number of possible research ideas, using the "4D" framework (description, delivery, development and discovery research) and a well-defined "depth" of proposed research ideas (research instruments, avenues, options and questions). Second, it proposed a systematic approach for discriminating between many proposed research ideas based on a well-defined context and criteria. The five "standard" components of the context are the population of interest, the disease burden of interest, geographic limits, time scale and the preferred style of investing with respect to risk. The five "standard" criteria proposed for prioritization between research ideas are answerability, effectiveness, deliverability, maximum potential for disease burden reduction and the effect on equity. However, both the context and the criteria can be flexibly changed to meet the specific needs of each priority-setting exercise. Third, it facilitated consensus development through measuring collective optimism on each component of each research idea among a larger group of experts using a simple scoring system. This enabled the use of the knowledge of
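    The scoring system the abstract describes, experts answering yes/no/undecided on each criterion for each research idea, reduces to a simple aggregation. A minimal sketch, with hypothetical expert answers (the criterion names follow the five "standard" criteria listed above):

```python
def chnri_scores(answers):
    """Collective-optimism scores for one research idea.

    answers: {criterion: list of expert answers, each 1 (yes),
              0 (no) or 0.5 (undecided/informed guess)}
    Returns per-criterion scores (0-100) and the overall research
    priority score, here taken as their unweighted mean."""
    crit = {c: 100.0 * sum(v) / len(v) for c, v in answers.items()}
    rps = sum(crit.values()) / len(crit)
    return crit, rps

# hypothetical answers from four experts on the five standard criteria
answers = {
    "answerability": [1, 1, 0.5, 1],
    "effectiveness": [1, 0.5, 0.5, 0],
    "deliverability": [1, 1, 1, 1],
    "burden_reduction": [0.5, 0.5, 0, 0],
    "equity": [1, 1, 0.5, 0.5],
}
crit, rps = chnri_scores(answers)
```

    Ranking many candidate ideas by this score is what exposes their relative potential; exercises that weight the criteria differently replace the unweighted mean with a weighted one.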

  14. Numerical evaluation of fluid mixing phenomena in boiling water reactor using advanced interface tracking method

    International Nuclear Information System (INIS)

    Yoshida, Hiroyuki; Takase, Kazuyuki

    2008-01-01

    Thermal-hydraulic design of the current boiling water reactor (BWR) is performed with subchannel analysis codes, which incorporate correlations based on empirical results, including actual-size tests. Then, for the Innovative Water Reactor for Flexible Fuel Cycle (FLWR) core, an actual-size test of an embodiment of its design would be required to confirm or modify such correlations. In this situation, development of a method that enables the thermal-hydraulic design of nuclear reactors without these actual-size tests is desired, because such tests take a long time and entail great cost. For this reason, we developed an advanced thermal-hydraulic design method for FLWRs using innovative two-phase flow simulation technology. In this study, a detailed Two-Phase Flow simulation code using an advanced Interface Tracking method, TPFIT, was developed to calculate detailed information on the two-phase flow. In this paper, we first attempt to verify the TPFIT code by comparing it with existing 2-channel air-water mixing experimental results. Secondly, the TPFIT code was applied to simulation of steam-water two-phase flow in a model of two subchannels of current BWR and FLWR rod bundles. Fluid mixing was observed at the gap between the subchannels. The existing two-phase flow correlation for fluid mixing was evaluated using the detailed numerical simulation data. The data indicate that the pressure difference between fluid channels is responsible for the fluid mixing, and thus the effects of the time-averaged pressure difference and its fluctuations must be incorporated in the two-phase flow correlation for fluid mixing. When the inlet quality ratio of the subchannels is relatively large, the evaluation precision of the existing two-phase flow correlations for fluid mixing is relatively low. (author)

  15. Advances in methods for detection of anaerobic ammonium oxidizing (anammox) bacteria.

    Science.gov (United States)

    Li, Meng; Gu, Ji-Dong

    2011-05-01

    Anaerobic ammonium oxidation (anammox), the biochemical process oxidizing ammonium into dinitrogen gas using nitrite as an electron acceptor, was only recently recognized for its significant role in the global nitrogen cycle, and its ubiquitous distribution in a wide range of environments has changed our knowledge about the contributors to that cycle. Currently, several groups of methods are used to detect anammox bacteria based on their physiological and biochemical characteristics, cellular chemical composition, and both 16S rRNA genes and selective functional genes as biomarkers, including the hydrazine oxidoreductase and nitrite reductase encoding genes hzo and nirS, respectively. Results from these methods, coupled with advances in quantitative PCR, reverse transcription of mRNA and stable isotope labeling, have improved our understanding of the distribution, diversity, and activity of anammox bacteria in different environments, both natural and engineered. In this review, we summarize the methods used to detect anammox bacteria from various environments, highlight the strengths and weaknesses of these methods, and discuss the development potential of existing and emerging techniques.

  16. The Advanced Aluminum Nitride Synthesis Methods and Its Applications: Patent Review.

    Science.gov (United States)

    Shishkin, Roman A; Elagin, Andrey A; Mayorova, Ekaterina S; Beketov, Askold R

    2016-01-01

    High purity nanosized aluminum nitride synthesis is a current issue for both industry and science. However, there is no up-to-date review considering the major issues and the technical solutions for different methods. This review aims to investigate the advanced methods of aluminum nitride synthesis and their development tendencies. Aluminum nitride application patents and prospects for development of the branch have also been considered. A patent search on "aluminum nitride synthesis" was carried out and the research activity analyzed. Special attention has been paid to the geography of patenting and the leading researchers in aluminum nitride synthesis. Aluminum nitride synthesis methods have been divided into 6 main groups; the most studied approaches are carbothermal reduction (88 patents) and direct nitridation (107 patents). The current issues for each group have been analyzed; the main trends are purification of the final product and nanopowder synthesis. The leading researchers in aluminum nitride synthesis represent 5 countries, namely Japan, China, Russia, South Korea and the USA. The main aluminum nitride application spheres are electronics (59.1 percent of applications) and new materials manufacturing (30.9 percent). The review deals with state-of-the-art data on nanosized aluminum nitride synthesis, the major issues and the technical solutions for the different synthesis methods. It gives a full understanding of the development tendencies and of the current leaders in the sphere.

  17. A study on dynamic evaluation methods for human-machine interfaces in advanced control rooms

    International Nuclear Information System (INIS)

    Park, Jin Kyun

    1998-02-01

    Extensive efforts have been made to reveal factors that largely affect the safety of nuclear power plants (NPPs). Among them, human factors are known to be a dominant cause of severe accidents, such as the Three Mile Island and Chernobyl accidents. Thus a great deal of effort has been spent to resolve human-factors-related problems, and one of these efforts is advanced control room (ACR) design to enhance human performance and the safety of NPPs. There are two important trends in the design of ACRs. The first is an increasing level of automation, and the second is the development of computer-based compact workstations for control room operations, including intelligent operator aid systems. However, several problems have been reported when other factors are not properly incorporated into the design of ACRs. Among them, one of the most important factors that significantly affects operator performance is the design of human-machine interfaces (HMIs). Thus, HMI evaluation should be emphasized to ensure the appropriateness of HMI designs and the safety of NPPs. In general, two kinds of evaluations are frequently used to assess the appropriateness of a proposed HMI design: static evaluation and dynamic evaluation. Static evaluation is based on guidelines extracted from various researches on HMI design, while dynamic evaluation generally attempts to evaluate and predict human performance through a model that can describe the cognitive behaviors of humans or the interactions between HMIs and humans. However, static evaluation seems inappropriate because it cannot properly capture the context of the task environment, which strongly affects human performance. In addition, in the case of dynamic evaluation, the development of a model that can sufficiently describe the interactions or cognitive behaviors of human operators is very arduous and laborious. To overcome these problems, dynamic evaluation methods that can

  18. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    Science.gov (United States)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs, by performing computations using Navier-Stokes solution algorithms and by permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers also needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations, and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  19. Advances and Perspectives in Chemical Imaging in Cellular Environments Using Electrochemical Methods

    Directory of Open Access Journals (Sweden)

    Robert A. Lazenby

    2018-05-01

    This review discusses a broad range of recent advances (2013–2017) in chemical imaging using electrochemical methods, with a particular focus on techniques that have been applied to study cellular processes, or that show promise for use in this field in the future. Non-scanning techniques such as microelectrode arrays (MEAs) offer high-time-resolution (<10 ms) imaging, albeit at reduced spatial resolution. In contrast, scanning electrochemical probe microscopies (SEPMs) offer higher spatial resolution (as low as a few nm per pixel), with images typically collected over many minutes. Recent significant research efforts to improve the spatial resolution of SEPMs using nanoscale probes, and to improve the temporal resolution using fast scanning, have resulted in movie (multiple-frame) imaging with frame rates as low as a few seconds per image. Many SEPM techniques lack chemical specificity or have poor selectivity (defined by the choice of applied potential) for redox-active species. This can be improved using multifunctional probes, ion-selective electrodes and tip-integrated biosensors, although additional effort may be required to preserve sensor performance after miniaturization of these probes. We discuss advances in the field of electrochemical imaging, and technological developments that are anticipated to extend the range of processes that can be studied. This includes imaging cellular processes with increased sensor selectivity and at much improved spatiotemporal resolution compared with what has previously been customary.

  20. Production of advanced materials by methods of self-propagating high-temperature synthesis

    CERN Document Server

    Tavadze, Giorgi F

    2013-01-01

    This translation from the original Russian book outlines the production of a variety of materials by methods of self-propagating high-temperature synthesis (SHS). The types of materials discussed include hard, refractory, corrosion- and wear-resistant materials, as well as other advanced and speciality materials. The authors address the issue of optimal parameters for SHS reactions occurring during processes involving a preliminary metallothermic reduction stage, and they calculate these using thermodynamic approaches. In order to confirm the effectiveness of this approach, the authors describe experiments focusing on the synthesis of elemental crystalline boron, boron carbides and nitrides. Other parts of this brief include theoretical and experimental results on single-stage production of hard alloys on the basis of titanium and zirconium borides, as well as the macrokinetics of degassing and compaction of SHS products. This brief is suitable for academics, as well as those working in industrial manufacturing com...

  1. Advanced Energy Storage Devices: Basic Principles, Analytical Methods, and Rational Materials Design

    Science.gov (United States)

    Liu, Jilei; Wang, Jin; Xu, Chaohe; Li, Chunzhong; Lin, Jianyi

    2017-01-01

    Abstract Tremendous efforts have been dedicated to the development of high‐performance energy storage devices with nanoscale design and hybrid approaches. The boundary between electrochemical capacitors and batteries is becoming less distinctive: the same material may display capacitive or battery‐like behavior depending on the electrode design and the charge-storage guest ions. Therefore, the underlying mechanisms and the electrochemical processes occurring upon charge storage may be confusing for researchers who are new to the field, as well as for some of the chemists and materials scientists already in it. This review provides the fundamentals of the similarities and differences between electrochemical capacitors and batteries from a kinetic and material point of view. Basic techniques and analysis methods to distinguish capacitive and battery‐like behavior are discussed. Furthermore, guidelines for material selection, the state‐of‐the‐art materials, and electrode design rules for advanced electrodes are proposed. PMID:29375964
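    One widely used analysis of the kind this review covers is b-value analysis, which fits the power law i = a·v^b between peak current and sweep rate: b near 1 indicates capacitive storage, b near 0.5 indicates diffusion-limited (battery-like) storage. A minimal sketch with synthetic data (the sweep rates, currents, and exponent below are illustrative assumptions, not values from the review):

```python
import numpy as np

# Hypothetical sweep rates (mV/s) and measured peak currents (mA).
# The b-value analysis assumes the power law i = a * v**b, where
# b ~ 1.0 indicates capacitive behavior and b ~ 0.5 indicates
# diffusion-limited (battery-like) charge storage.
sweep_rates = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
peak_currents = 0.3 * sweep_rates**0.85  # synthetic data for illustration

# Fit log(i) = log(a) + b*log(v) by least squares; polyfit returns
# [slope, intercept], so the slope is the b-value.
b, log_a = np.polyfit(np.log(sweep_rates), np.log(peak_currents), 1)

print(f"b-value: {b:.2f}")  # close to 1 -> capacitive; close to 0.5 -> battery-like
```

    In practice the fit is performed per potential on cyclic voltammetry data measured at several sweep rates, and intermediate b-values signal mixed kinetics.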

  2. Hydrophilic and amphiphilic water pollutants: using advanced analytical methods for classic and emerging contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Giger, Walter [GRC, Giger Research Consulting, Zurich (Switzerland); Eawag, Swiss Federal Institute of Aquatic Science and Technology, Duebendorf (Switzerland)

    2009-01-15

    Organic pollutants are a highly relevant topic in environmental science and technology. This article briefly reviews historic developments, and then focuses on the current state of the art and future perspectives on the qualitative and quantitative trace determination of polar organic contaminants, which are of particular concern in municipal and industrial wastewater effluents, ambient surface waters, run-off waters, atmospheric waters, groundwaters and drinking waters. The pivotal role of advanced analytical methods is emphasized and an overview of some contaminant classes is presented. Some examples of polar water pollutants, which are discussed in a bit more detail here, are chosen from projects tackled by the research group led by the author of this article. (orig.)

  3. Further development of the Dynamic Control Assemblies Worth Measurement Method for Advanced Reactivity Computers

    International Nuclear Information System (INIS)

    Petenyi, V.; Strmensky, C.; Jagrik, J.; Minarcin, M.; Sarvaic, I.

    2005-01-01

    The dynamic control assemblies worth measurement technique is a quick method for validation of predicted control assemblies worth. It utilizes space-time corrections, calculated by the DYN 3D computer code, for the readings of the out-of-core ionization chambers. The space-time correction arising from the prompt neutron density redistribution in the measured ionization chamber reading can be applied directly in the advanced reactivity computer. The second correction, concerning the difference in the spatial distribution of delayed neutrons, can be calculated by simulating the measurement procedure with the dynamic version of the DYN 3D code. In the paper, some results of dynamic control assemblies worth measurements applied at NPP Mochovce are presented (Authors)
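    Reactivity computers of this kind are typically built on inverse point kinetics: given a (space-time-corrected) detector signal n(t), the reactivity is recovered from the point kinetics equations. A minimal sketch, under assumed six-group delayed-neutron data and generation time (all numeric values are illustrative, and the spatial corrections described in the record are not included):

```python
import numpy as np

# Six-group delayed-neutron data (illustrative values for thermal fission).
beta_i = np.array([0.000247, 0.0013845, 0.001222, 0.0026455, 0.000832, 0.000169])
lam_i = np.array([0.0127, 0.0317, 0.115, 0.311, 1.40, 3.87])  # decay constants, 1/s
beta = beta_i.sum()
LAMBDA = 2.0e-5  # prompt neutron generation time (s), assumed

def inverse_kinetics(n, dt):
    """Estimate reactivity rho(t) from a detector signal n(t) via
    inverse point kinetics. n is a 1-D array sampled every dt seconds."""
    # Start precursors at equilibrium with the initial power level.
    C = beta_i * n[0] / (LAMBDA * lam_i)
    rho = np.zeros(len(n))
    for k in range(1, len(n)):
        dndt = (n[k] - n[k - 1]) / dt
        # Exponential update of each precursor group over one step,
        # treating n as constant within the step.
        decay = np.exp(-lam_i * dt)
        C = C * decay + beta_i * n[k] / (LAMBDA * lam_i) * (1.0 - decay)
        # rho = beta + (Lambda/n) dn/dt - (Lambda/n) sum(lambda_i C_i)
        rho[k] = beta + LAMBDA * dndt / n[k] - LAMBDA * np.dot(lam_i, C) / n[k]
    return rho
```

    At steady power the precursor terms cancel the beta term exactly, so the estimated reactivity is zero, as expected for a critical core.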

  4. On the Use of Accelerated Test Methods for Characterization of Advanced Composite Materials

    Science.gov (United States)

    Gates, Thomas S.

    2003-01-01

    A rational approach to the problem of accelerated testing for material characterization of advanced polymer matrix composites is discussed. The experimental and analytical methods provided should be viewed as a set of tools useful in the screening of material systems for long-term engineering properties in aerospace applications. Consideration is given to long-term exposure in extreme environments that include elevated temperature, reduced temperature, moisture, oxygen, and mechanical load. Analytical formulations useful for predictive models that are based on the principles of time-based superposition are presented. The need for reproducible mechanisms, indicator properties, and real-time data is outlined, as well as the methodologies for determining specific aging mechanisms.
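    The core idea of time-based (time-temperature) superposition is that data measured at elevated temperature can be shifted along the log-time axis to estimate long-term behavior at the service temperature. A sketch under an assumed Arrhenius shift factor (the activation energy and temperatures are illustrative assumptions, not values from the paper):

```python
import numpy as np

R = 8.314       # gas constant, J/(mol K)
E_A = 120e3     # apparent activation energy (J/mol), assumed for illustration
T_REF = 296.15  # reference (room) temperature, K

def shift_factor(T):
    """Arrhenius shift factor a_T: ratio of relaxation times at T vs T_REF.
    a_T < 1 for T > T_REF, i.e. aging processes run faster when hot."""
    return np.exp((E_A / R) * (1.0 / T - 1.0 / T_REF))

# A 1000-hour creep test at 120 C (393.15 K) probes the same material
# response as a much longer exposure at room temperature:
t_accelerated = 1000.0                            # hours at elevated temperature
t_equivalent = t_accelerated / shift_factor(393.15)
print(f"equivalent room-temperature time: {t_equivalent:.3g} h")
```

    For polymers near the glass transition, a WLF-type shift is often used instead of the Arrhenius form; the shifting procedure is the same.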

  5. Technologic advances in aural rehabilitation: applications and innovative methods of service delivery.

    Science.gov (United States)

    Sweetow, Robert W; Sabes, Jennifer Henderson

    2007-06-01

    The level of interest in aural rehabilitation has increased recently, both in clinical use and in research presentations and publications. Advances in aural rehabilitation have seen previous techniques such as speech tracking and analytic auditory training reappear in computerized forms. These new delivery methods allow for a consistent, cost-effective, and convenient training program. Several computerized aural rehabilitation programs for hearing aid wearers and cochlear implant recipients have recently been developed and were reported on at the 2006 State of the Science Conference of the Rehabilitation Engineering Research Center on Hearing Enhancement at Gallaudet University. This article reviews these programs and outlines the similarities and differences in their design. Another promising area of aural rehabilitation research is the use of pharmaceuticals in the rehabilitation process. The results from a study of the effect of d-amphetamine in conjunction with intensive aural rehabilitation with cochlear implant patients are also described.

  6. Identification of advanced human factors engineering analysis, design and evaluation methods

    International Nuclear Information System (INIS)

    Plott, C.; Ronan, A. M.; Laux, L.; Bzostek, J.; Milanski, J.; Scheff, S.

    2006-01-01

    NUREG-0711 Rev. 2, 'Human Factors Engineering Program Review Model,' provides comprehensive guidance to the Nuclear Regulatory Commission (NRC) in assessing the human factors practices employed by license applicants for nuclear power plant control room designs. As software-based human-system interface (HSI) technologies supplant traditional hardware-based technologies, the NRC may encounter new HSI technologies or seemingly unconventional approaches to human factors design, analysis, and evaluation that NUREG-0711 does not anticipate. A comprehensive survey was performed to identify advanced human factors engineering analysis, design and evaluation methods, tools, and technologies that the NRC may encounter in near-term future licensee applications. A review was conducted to identify human factors methods, tools, and technologies relevant to each review element of NUREG-0711. Additionally, emerging trends in technology with the potential to impact review elements, such as augmented cognition and various wireless tools and technologies, were identified. The purpose of this paper is to provide an overview of the survey results and to highlight issues that could be revised or adapted to meet emerging trends. (authors)

  7. Advances in complexity of beam halo-chaos and its control methods for beam transport networks

    International Nuclear Information System (INIS)

    Fang Jinqing

    2004-11-01

    The complexity theory of beam halo-chaos in beam transport networks, and control methods for this new high-tech subject, are discussed. It is pointed out that in recent years there has been growing interest in high-power proton linear accelerators, due to their attractive features for possible breakthrough applications in national defense and industry. In particular, high-current accelerator-driven clean nuclear power systems for various energy applications have been one of the most prominent issues in current research, because they promise a safer, cleaner and cheaper nuclear energy resource. However, halo-chaos in high-current beam transport networks has become a key issue of concern, because it can generate excessive radioactivity and therefore significantly limits such applications. It is very important to study the complexity properties of beam halo-chaos, to understand the basic physical mechanisms of halo-chaos formation, and to develop effective control methods for its suppression. These are very challenging subjects for current research. The main research advances on these subjects, including experimental investigation and theoretical research, and especially some very efficient control methods developed through many years of effort by the authors, are reviewed and summarized. Finally, some research outlooks are given. (author)

  8. An Advanced Method to Apply Multiple Rainfall Thresholds for Urban Flood Warnings

    Directory of Open Access Journals (Sweden)

    Jiun-Huei Jang

    2015-11-01

    Issuing warning information to the public when rainfall exceeds given thresholds is a simple and widely used method to minimize flood risk; however, this method lacks sophistication when compared with hydrodynamic simulation. In this study, an advanced methodology is proposed to improve the warning effectiveness of the rainfall threshold method for urban areas through deterministic-stochastic modeling, without sacrificing simplicity and efficiency. With regard to flooding mechanisms, rainfall thresholds of different durations are divided into two groups, accounting for flooding caused by drainage overload and by disastrous runoff, which helps in grading the warning level in terms of emergency and severity when the two are observed together. A flood warning is then classified into four levels, distinguished by green, yellow, orange, and red lights in ascending order of priority, which indicate the required measures: standby, flood defense, evacuation, and rescue, respectively. The proposed methodology is tested against 22 historical events of the last 10 years for 252 urbanized townships in Taiwan. The results show satisfactory accuracy in predicting the occurrence and timing of flooding, with a logical warning time series for taking progressive measures. For systems with multiple rainfall thresholds already in place, the methodology can be used to ensure better application of rainfall thresholds in urban flood warnings.
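    The four-level grading described above can be sketched as a simple decision rule over the two threshold groups. The threshold values, the chosen durations, and the priority assigned to each single exceedance are illustrative assumptions, not the paper's calibrated values:

```python
# A minimal sketch of grading a flood warning from two rainfall-threshold
# groups: short-duration rainfall (drainage overload) and long-duration
# rainfall (disastrous runoff). All threshold values are hypothetical.
SHORT_THRESHOLD_MM = 40.0   # e.g. 1-h accumulated rainfall, assumed
LONG_THRESHOLD_MM = 200.0   # e.g. 24-h accumulated rainfall, assumed

def warning_level(rain_short_mm, rain_long_mm):
    """Return 'green', 'yellow', 'orange' or 'red' in ascending priority."""
    short_exceeded = rain_short_mm >= SHORT_THRESHOLD_MM
    long_exceeded = rain_long_mm >= LONG_THRESHOLD_MM
    if short_exceeded and long_exceeded:
        return "red"      # both mechanisms active: rescue
    if long_exceeded:
        return "orange"   # disastrous runoff expected: evacuation
    if short_exceeded:
        return "yellow"   # drainage overload expected: flood defense
    return "green"        # standby

print(warning_level(55.0, 120.0))  # -> yellow
```

    In an operational system the rule would be evaluated on each rainfall update, producing the progressive warning time series the study reports.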

  9. Weathering Patterns of Ignitable Liquids with the Advanced Distillation Curve Method.

    Science.gov (United States)

    Bruno, Thomas J; Allen, Samuel

    2013-01-01

    One can take advantage of the striking similarity of ignitable liquid vaporization (or weathering) patterns and the separation observed during distillation to predict the composition of residual compounds in fire debris. This is done with the advanced distillation curve (ADC) metrology, which separates a complex fluid by distillation into fractions that are sampled, and for which thermodynamically consistent temperatures are measured at atmospheric pressure. The collected sample fractions can be analyzed by any method that is appropriate. Analytical methods we have applied include gas chromatography (with flame ionization, mass spectrometric and sulfur chemiluminescence detection), thin layer chromatography, FTIR, Karl Fischer coulombic titrimetry, refractometry, corrosivity analysis, neutron activation analysis and cold neutron prompt gamma activation analysis. We have applied this method on product streams such as finished fuels (gasoline, diesel fuels, aviation fuels, rocket propellants), crude oils (including a crude oil made from swine manure) and waste oils streams (used automotive and transformer oils). In this paper, we present results on a variety of ignitable liquids that are not commodity fuels, chosen from the Ignitable Liquids Reference Collection (ILRC). These measurements are assembled into a preliminary database. From this selection, we discuss the significance and forensic application of the temperature data grid and the composition explicit data channel of the ADC.

  10. Introduction to the Special Issue on Advancing Methods for Analyzing Dialect Variation.

    Science.gov (United States)

    Clopper, Cynthia G

    2017-07-01

    Documenting and analyzing dialect variation is traditionally the domain of dialectology and sociolinguistics. However, modern approaches to acoustic analysis of dialect variation have their roots in Peterson and Barney's [(1952). J. Acoust. Soc. Am. 24, 175-184] foundational work on the acoustic analysis of vowels that was published in the Journal of the Acoustical Society of America (JASA) over 6 decades ago. Although Peterson and Barney (1952) were not primarily concerned with dialect variation, their methods laid the groundwork for the acoustic methods that are still used by scholars today to analyze vowel variation within and across languages. In more recent decades, a number of methodological advances in the study of vowel variation have been published in JASA, including work on acoustic vowel overlap and vowel normalization. The goal of this special issue was to honor that tradition by bringing together a set of papers describing the application of emerging acoustic, articulatory, and computational methods to the analysis of dialect variation in vowels and beyond.
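    A standard vowel-normalization method of the kind this special issue discusses is Lobanov (z-score) normalization, which removes talker-specific formant ranges before cross-dialect comparison. A minimal sketch with illustrative formant values (the data are synthetic, not from any cited study):

```python
import numpy as np

# Lobanov (z-score) vowel normalization: each formant of one talker is
# standardized to zero mean and unit variance, so vowel spaces of talkers
# with different vocal tract sizes become directly comparable.
f1 = np.array([310., 420., 640., 730., 500., 360.])   # F1 (Hz), one talker
f2 = np.array([2200., 1900., 1700., 1100., 900., 850.])  # F2 (Hz)

def lobanov(formant):
    """Normalize one formant track to zero mean, unit variance per talker."""
    return (formant - formant.mean()) / formant.std()

f1_norm, f2_norm = lobanov(f1), lobanov(f2)
```

    After normalization, measures such as vowel-space overlap can be computed across talkers and dialects on a common scale.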

  11. Determination of methylmercury in marine biota samples with advanced mercury analyzer: method validation.

    Science.gov (United States)

    Azemard, Sabine; Vassileva, Emilia

    2015-06-01

    In this paper, we present a simple, fast and cost-effective method for the determination of methylmercury (MeHg) in marine samples. All important parameters influencing the sample preparation process were investigated and optimized. Full validation of the method was performed in accordance with ISO 17025 (ISO/IEC, 2005) and the Eurachem guidelines. Blanks, selectivity, working range (0.09-3.0 ng), recovery (92-108%), intermediate precision (1.7-4.5%), traceability, limit of detection (0.009 ng), limit of quantification (0.045 ng) and expanded uncertainty (15%, k=2) were assessed. Estimation of the uncertainty contribution of each parameter and demonstration of the traceability of measurement results were provided as well. Furthermore, the selectivity of the method was studied by analyzing the same sample extracts by advanced mercury analyzer (AMA) and gas chromatography-atomic fluorescence spectrometry (GC-AFS). Additional validation of the proposed procedure was carried out by participation in the IAEA-461 worldwide inter-laboratory comparison exercise. Copyright © 2014 Elsevier Ltd. All rights reserved.
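    Several of the figures of merit listed in this record (recovery, LOD, LOQ) follow from simple replicate statistics. A sketch using one common convention (LOD = 3·s and LOQ = 10·s on blank replicates); the data below are synthetic and the convention varies between validation guidelines:

```python
import numpy as np

# Synthetic replicate measurements, in ng of MeHg.
blank_replicates = np.array([0.0021, 0.0035, 0.0028, 0.0031, 0.0024,
                             0.0030, 0.0026, 0.0033, 0.0029, 0.0027])
spiked_found = np.array([0.96, 1.02, 0.99, 1.05, 0.97])  # found in 1.00 ng spikes

# Recovery: amount found / amount added, in percent.
recovery_pct = 100.0 * spiked_found.mean() / 1.00

# One common convention: LOD = 3 * sd(blank), LOQ = 10 * sd(blank).
sd_blank = blank_replicates.std(ddof=1)
lod = 3.0 * sd_blank
loq = 10.0 * sd_blank

print(f"recovery {recovery_pct:.1f}%, LOD {lod:.4f} ng, LOQ {loq:.4f} ng")
```

    The paper's reported recovery window (92-108%) corresponds to the acceptance range such replicate checks are judged against.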

  12. Advanced methods on the evaluation of design earthquake motions for important power constructions

    International Nuclear Information System (INIS)

    Higashi, Sadanori; Shiba, Yoshiaki; Sato, Hiroaki; Sato, Yusuke; Nakajima, Masato; Sakai, Michiya; Sato, Kiyotaka

    2009-01-01

    In this report, we compile advanced methods for the evaluation of design earthquake motions for important power constructions such as nuclear, thermal, and hydroelectric power facilities. For the nuclear and hydroelectric power facilities, we developed an inversion method for the broad-band (0.1-5 Hz) source process and obtained valid results by applying the method to the 2007 Niigata-ken Chuetsu-oki earthquake (M6.8). We have also improved our modeling techniques for thick sedimentary layered structures, such as S-wave velocity modeling using microtremor array measurements and a frequency-dependent damping factor with a lower limit. For the seismic isolation design of nuclear power facilities, we proposed a design pseudo-velocity response spectrum. For the thermal power facilities, we performed a three-dimensional numerical simulation of the Kanto Basin to derive a prediction relation for long-period ground motion. We also proposed introducing a probabilistic approach into the deterministic evaluation flow of design earthquake motions, and evaluated the effect of a great earthquake with a short return period on the seismic hazard in Miyagi Prefecture, Japan. (author)

  13. An Advanced Bayesian Method for Short-Term Probabilistic Forecasting of the Generation of Wind Power

    Directory of Open Access Journals (Sweden)

    Antonio Bracale

    2015-09-01

    Currently, among renewable distributed generation systems, wind generators are receiving a great deal of interest due to the great economic, technological, and environmental incentives they involve. However, the uncertainties due to the intermittent nature of wind energy make it difficult to operate electrical power systems optimally and to make decisions that satisfy the needs of all the stakeholders of the electricity market. Thus, there is increasing interest in determining how to forecast wind power production accurately. Most of the methods published in the relevant literature provide deterministic forecasts, even though great interest has recently been focused on probabilistic forecast methods. In this paper, an advanced probabilistic method is proposed for short-term forecasting of wind power production. A mixture of two Weibull distributions is used as the probability function to model the uncertainties associated with wind speed. Then, a Bayesian inference approach with a particularly effective autoregressive integrated moving-average model is used to determine the parameters of the mixture Weibull distribution. Numerical applications are also presented to provide evidence of the forecasting performance of the Bayesian-based approach.
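    The wind-speed uncertainty model named in the abstract, a two-component Weibull mixture, can be sketched as follows; the mixture parameters and the simplified power curve are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-component Weibull mixture for wind speed (illustrative parameters).
W = 0.6              # mixture weight of component 1
K1, C1 = 2.0, 6.0    # shape / scale (m/s) of component 1
K2, C2 = 3.5, 11.0   # shape / scale (m/s) of component 2

def sample_wind_speed(n):
    """Draw n wind speeds from the two-component Weibull mixture."""
    pick = rng.random(n) < W
    v1 = C1 * rng.weibull(K1, n)   # numpy's weibull has unit scale
    v2 = C2 * rng.weibull(K2, n)
    return np.where(pick, v1, v2)

def power_kw(v, cut_in=3.0, rated=12.0, cut_out=25.0, p_rated=2000.0):
    """Simplified turbine power curve: cubic ramp between cut-in and rated."""
    p = p_rated * ((v - cut_in) / (rated - cut_in)) ** 3
    p = np.clip(p, 0.0, p_rated)
    p[(v < cut_in) | (v > cut_out)] = 0.0
    return p

# Monte Carlo propagation of the speed distribution through the power curve
# gives a probabilistic (rather than deterministic) production forecast.
speeds = sample_wind_speed(10000)
print(f"mean forecast power: {power_kw(speeds).mean():.0f} kW")
```

    In the paper the mixture parameters are inferred online by Bayesian updating driven by an ARIMA model; the sketch only shows the forward, sampling side of such a forecast.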

  14. A Method for Consensus Reaching in Product Kansei Evaluation Using Advanced Particle Swarm Optimization.

    Science.gov (United States)

    Yang, Yan-Pu

    2017-01-01

    Consumers' opinions toward product design alternatives are often subjective and perceptual; they reflect consumers' perception of a product and can be described using Kansei adjectives. Therefore, Kansei evaluation is often employed to determine consumers' preferences. How to identify and improve the reliability of consumers' Kansei evaluation opinions toward design alternatives plays an important role in adding assurance and reducing uncertainty on the way to a successful product design. To solve this problem, this study employs a consensus model to measure consistency among consumers' opinions, and an advanced particle swarm optimization (PSO) algorithm combined with the Linearly Decreasing Inertia Weight (LDW) method is proposed for consensus reaching by minimizing the adjustment of consumers' opinions. Furthermore, the process of the proposed method is presented, and the details are illustrated using an example of electronic scooter design evaluation. The case study reveals that the proposed method is promising for reaching a consensus by searching for optimal solutions with PSO, improving the reliability of consumers' evaluation opinions toward design alternatives according to Kansei indexes.
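    A minimal PSO with a linearly decreasing inertia weight, the ingredient named in the record, can be sketched on a toy consensus objective that penalizes both disagreement and adjustment away from initial opinions. The objective, opinion values, and hyperparameters below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy consensus problem: five consumers' initial Kansei ratings on a 0-6 scale.
initial_opinions = np.array([3.0, 4.5, 2.0, 5.0, 3.5])

def objective(x):
    consensus_gap = np.sum((x - x.mean()) ** 2)       # disagreement among opinions
    adjustment = np.sum((x - initial_opinions) ** 2)  # change from original opinions
    return consensus_gap + 0.5 * adjustment

def pso_ldw(f, dim, n_particles=30, iters=200, w_max=0.9, w_min=0.4,
            c1=2.0, c2=2.0, lo=0.0, hi=6.0):
    """Standard PSO with a linearly decreasing inertia weight (LDW)."""
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)  # linearly decreasing
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, f(g)

best, best_val = pso_ldw(objective, dim=len(initial_opinions))
```

    The large initial inertia weight favors exploration; shrinking it over the iterations shifts the swarm toward local refinement around the best-found adjusted opinion vector.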

  16. Treatment of winery wastewater by electrochemical methods and advanced oxidation processes.

    Science.gov (United States)

    Orescanin, Visnja; Kollar, Robert; Nad, Karlo; Mikelic, Ivanka Lovrencic; Gustek, Stefica Findri

    2013-01-01

    The aim of this research was the development of a new system for the treatment of highly polluted wastewater (COD = 10240 mg/L; SS = 2860 mg/L) originating from the wine-making industry. The system consisted of a main treatment that included electrochemical methods (electro-oxidation and electrocoagulation using stainless steel, iron and aluminum electrode sets) with simultaneous sonication and recirculation in a strong electromagnetic field. Ozonation combined with UV irradiation in the presence of added hydrogen peroxide was applied for the post-treatment of the effluent. Following the combined treatment, the final removal efficiencies were over 99% for color, turbidity, suspended solids and phosphates, and approximately 98% for Fe, Cu and ammonia, while the removal of COD and sulfates was 77% and 62%, respectively. The new approach, combining electrochemical methods with ultrasound in a strong electromagnetic field, resulted in significantly better removal efficiencies for the majority of the measured parameters compared with biological methods, advanced oxidation processes or electrocoagulation alone. Reduction of the treatment time represents another advantage of this new approach.

  17. Advanced neutron imaging methods with a potential to benefit from pulsed sources

    International Nuclear Information System (INIS)

    Strobl, M.; Kardjilov, N.; Hilger, A.; Penumadu, D.; Manke, I.

    2011-01-01

    During the last decade, neutron imaging has seen significant improvements in instrumentation, detection and spatial resolution. Additionally, a variety of new applications and methods have been explored. As a consequence of this outstanding development, various techniques of neutron imaging nowadays go far beyond a two- and three-dimensional mapping of attenuation coefficients for a broad range of samples. Neutron imaging has become sensitive to neutron scattering in the small-angle scattering range as well as with respect to Bragg scattering. The corresponding methods potentially provide spatially resolved and volumetric data revealing microstructural inhomogeneities, texture variations, crystalline phase distributions and even strains in bulk samples. Other techniques allow for the detection of refractive-index distributions through phase-sensitive measurements, and the utilization of polarized neutrons enables radiographic and tomographic investigations of magnetic fields and properties, as well as of electrical currents within massive samples. All these advanced methods utilize or depend on wavelength-dependent signals, and are hence suited to profit significantly from pulsed neutron sources, as will be discussed.

  18. Advances in Spectral Nodal Methods applied to SN Nuclear Reactor Global calculations in Cartesian Geometry

    International Nuclear Information System (INIS)

    Barros, R.C.; Filho, H.A.; Oliveira, F.B.S.; Silva, F.C. da

    2004-01-01

    Presented here are advances in spectral nodal methods for discrete ordinates (SN) eigenvalue problems in Cartesian geometry. These coarse-mesh methods are based on three ingredients: (i) the use of the standard discretized spatial balance SN equations; (ii) the use of the non-standard spectral diamond (SD) auxiliary equations in the multiplying regions of the domain, e.g., fuel assemblies; and (iii) the use of the non-standard spectral Green's function (SGF) auxiliary equations in the non-multiplying regions of the domain, e.g., the reflector. In slab geometry, the hybrid SD-SGF method generates numerical results that are completely free of spatial truncation errors. In X,Y-geometry, we obtain a system of two 'slab-geometry' SN equations for the node-edge average angular fluxes by transverse-integrating the X,Y-geometry SN equations, first in the y- and then in the x-direction, within an arbitrary node of the spatial grid set up on the domain. In this paper, we approximate the transverse leakage terms by constants. These are the only approximations considered in the SD-SGF-constant nodal method, as the source terms, which include scattering and possibly fission events, are treated exactly. Moreover, we describe the progress of the approximate SN albedo boundary conditions for replacing the non-multiplying regions around the nuclear reactor core. We show numerical results for typical model problems to illustrate the accuracy of spectral nodal methods for coarse-mesh SN criticality calculations. (Author)
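    For readers unfamiliar with the baseline these nodal methods improve upon, the standard discretized SN equations in slab geometry can be solved with a diamond-difference sweep and source iteration. The sketch below is the textbook fixed-source method, not the spectral nodal (SD/SGF) scheme itself, and all problem data are illustrative:

```python
import numpy as np

# 1-D slab-geometry, one-group, fixed-source S4 transport with isotropic
# scattering and vacuum boundaries: diamond-difference sweeps inside a
# source iteration on the scalar flux.
L_SLAB, N_CELLS = 10.0, 200
SIGMA_T, SIGMA_S, Q = 1.0, 0.5, 1.0   # total/scatter cross sections (1/cm), source
dx = L_SLAB / N_CELLS

# S4 Gauss-Legendre quadrature on [-1, 1]; weights sum to 2.
mu, w = np.polynomial.legendre.leggauss(4)

phi = np.zeros(N_CELLS)
for _ in range(500):
    phi_new = np.zeros(N_CELLS)
    s = 0.5 * (SIGMA_S * phi + Q)  # isotropic source per unit direction cosine
    for m in range(len(mu)):
        psi_edge = 0.0             # vacuum boundary: no incoming flux
        cells = range(N_CELLS) if mu[m] > 0 else range(N_CELLS - 1, -1, -1)
        for i in cells:
            # Balance + diamond relation psi_cell = (psi_in + psi_out) / 2.
            psi_cell = (s[i] * dx + 2.0 * abs(mu[m]) * psi_edge) \
                       / (2.0 * abs(mu[m]) + SIGMA_T * dx)
            psi_edge = 2.0 * psi_cell - psi_edge  # outgoing edge flux
            phi_new[i] += w[m] * psi_cell
    if np.max(np.abs(phi_new - phi)) < 1e-8:
        phi = phi_new
        break
    phi = phi_new
```

    For this slab the flux at the center approaches the infinite-medium value Q/(SIGMA_T - SIGMA_S) = 2.0; the spectral nodal methods of the record achieve such accuracy on far coarser meshes by replacing the diamond relation with spectral auxiliary equations.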

  19. Advanced Topics in Computational Partial Differential Equations: Numerical Methods and Diffpack Programming

    International Nuclear Information System (INIS)

    Katsaounis, T D

    2005-01-01

    The scope of this book is to present well-known simple and advanced numerical methods for solving partial differential equations (PDEs), and to show how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required of the potential reader, as is a basic knowledge of the finite element method and its implementation in one and two space dimensions. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should at least be familiar with an object-oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models, and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well-known numerical methods for solving the basic types of PDEs. Further, programming techniques for the serial as well as the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. In summary, the book focuses on the computational and implementational issues involved in solving partial differential equations. The potential reader should have a basic knowledge of PDEs and the finite difference and finite element methods. The examples presented are solved within the programming framework of Diffpack, and the reader should have prior experience with the particular software in order to take full advantage of the book.
Overall

  20. PREFACE: Advanced many-body and statistical methods in mesoscopic systems

    Science.gov (United States)

    Anghel, Dragos Victor; Sabin Delion, Doru; Sorin Paraoanu, Gheorghe

    2012-02-01

    It has increasingly been realized in recent times that the borders separating various subfields of physics are largely artificial. This is the case for nanoscale physics, physics of lower-dimensional systems and nuclear physics, where the advanced techniques of many-body theory developed in recent times could provide a unifying framework for these disciplines under the general name of mesoscopic physics. Other fields, such as quantum optics and quantum information, are increasingly using related methods. The 6-day conference 'Advanced many-body and statistical methods in mesoscopic systems' that took place in Constanta, Romania, between 27 June and 2 July 2011 was, we believe, a successful attempt at bridging an impressive list of topical research areas: foundations of quantum physics, equilibrium and non-equilibrium quantum statistics/fractional statistics, quantum transport, phases and phase transitions in mesoscopic systems/superfluidity and superconductivity, quantum electromechanical systems, quantum dissipation, dephasing, noise and decoherence, quantum information, spin systems and their dynamics, fundamental symmetries in mesoscopic systems, phase transitions, exactly solvable methods for mesoscopic systems, various extensions of the random phase approximation, open quantum systems, clustering, decay and fission modes and systematic versus random behaviour of nuclear spectra. This event brought together participants from seventeen countries and five continents. Each of the participants brought considerable expertise in his/her field of research and, at the same time, was exposed to the newest results and methods coming from other, seemingly remote, disciplines. The talks touched on subjects that are at the forefront of topical research areas and we hope that the resulting cross-fertilization of ideas will lead to new, interesting results from which everybody will benefit. 
We are grateful for the financial and organizational support from IFIN-HH, Ovidius

  1. Advances in exergy analysis: a novel assessment of the Extended Exergy Accounting method

    International Nuclear Information System (INIS)

    Rocco, M.V.; Colombo, E.; Sciubba, E.

    2014-01-01

    additional insight in and more relevant information for every comparative analysis of energy conversion systems, both at a global and a local level. In the paper, traditional and advanced exergy analysis methods are briefly discussed, and EEA theoretical foundations and details for its application are described in detail. Methods: The method converts not only material and energy flows, but externalities as well (labour, capital and environmental costs) into flows of equivalent primary exergy, so that all exchanges between the system and the environment can be completely accounted for on a rigorous thermodynamic basis. The current emphasis that decision makers and public opinion alike seem to be placing on sustainability generates the need for continued research in the field of systems analysis, and a preliminary review confirms that exergy may constitute a coherent and rational basis for developing global and local analysis methods. Moreover, extended exergy accounting possesses some specific and peculiar characteristics that make it more suitable for life-cycle and cradle-to-grave (or well-to-wheel) applications. Results: A taxonomy for the classification of exergy-based methods is proposed. A novel assessment of the EEA method is provided, its advantages and drawbacks are discussed, and areas in need of further theoretical investigation are identified. Conclusions: Since EEA is a life-cycle method, it is argued that it represents an improvement with respect to other current methods, in that it provides additional insight into the phenomenological aspects of any “energy conversion chain”. The paper demonstrates that the Extended Exergy cost function can be used within the traditional and very well formalized Thermoeconomic framework, replacing the economic cost function in order to evaluate and optimize the consumption of resources of a system in a more complete and rational way. 
Practical implications: This paper contains some specific proposals as to the further development
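
The accounting step described under Methods can be sketched numerically. This is a minimal illustration of the EEA idea only: the conversion factors for labour and capital (`ee_labour`, `ee_capital`) and all input values are invented placeholders, not figures from the paper.

```python
# Extended Exergy Accounting sketch: material/energy flows AND externalities
# (labour, capital, environmental remediation) are all expressed as
# equivalent primary exergy (MJ) and summed on a common basis.

def extended_exergy(material_mj, energy_mj, labour_hours, capital_eur,
                    env_remediation_mj, ee_labour=50.0, ee_capital=10.0):
    """Total equivalent primary exergy embodied in a product (MJ).

    ee_labour  : MJ of primary exergy per work-hour (assumed value)
    ee_capital : MJ of primary exergy per monetary unit (assumed value)
    """
    externalities = labour_hours * ee_labour + capital_eur * ee_capital
    return material_mj + energy_mj + externalities + env_remediation_mj

# hypothetical product: 800 MJ of materials, 1200 MJ of process energy,
# 2 work-hours, 15 EUR of capital, 90 MJ of environmental remediation
total = extended_exergy(material_mj=800.0, energy_mj=1200.0,
                        labour_hours=2.0, capital_eur=15.0,
                        env_remediation_mj=90.0)
```

Because every term is in the same unit (MJ of primary exergy), the total can replace a monetary cost function inside a thermoeconomic optimization, which is the substitution the paper argues for.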

  2. Development of Advanced Life Cycle Costing Methods for Technology Benefit/Cost/Risk Assessment

    Science.gov (United States)

    Yackovetsky, Robert (Technical Monitor)

    2002-01-01

    The overall objective of this three-year grant is to provide NASA Langley's System Analysis Branch with improved affordability tools and methods based on probabilistic cost assessment techniques. In order to accomplish this objective, the Aerospace Systems Design Laboratory (ASDL) needs to pursue more detailed affordability, technology impact, and risk prediction methods and to demonstrate them on a variety of advanced commercial transports. The affordability assessment, which is a cornerstone of ASDL methods, relies on the Aircraft Life Cycle Cost Analysis (ALCCA) program originally developed by NASA Ames Research Center and enhanced by ASDL. This grant proposed to improve ALCCA in support of the project objective by updating the research, design, test, and evaluation cost module, as well as the engine development cost module. Investigations into enhancements to ALCCA include improved engine development cost, process-based costing, supportability cost, and system reliability with airline loss of revenue for system downtime. A probabilistic, stand-alone version of ALCCA/FLOPS will also be developed under this grant in order to capture the uncertainty involved in technology assessments. FLOPS (FLight Optimization System program) is an aircraft synthesis and sizing code developed by NASA Langley Research Center. This probabilistic version of the coupled program will be used within a Technology Impact Forecasting (TIF) method to determine what types of technologies would have to be infused in a system in order to meet customer requirements. A probabilistic analysis of the CERs (cost estimating relationships) within ALCCA will also be carried out under this contract in order to gain some insight into the most influential costs and the impact that code fidelity could have on future RDS (Robust Design Simulation) studies.
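
The probabilistic treatment of a CER described above can be sketched as a Monte Carlo loop: sample the uncertain inputs, evaluate the relationship, and report percentiles instead of a point estimate. The power-law CER form, its coefficients, and the input distributions below are invented for illustration and are not taken from ALCCA.

```python
import random

# Monte Carlo propagation of input uncertainty through a toy cost
# estimating relationship (CER), yielding a cost distribution.

random.seed(42)

def cer_airframe_cost(empty_weight_lb, complexity):
    """Hypothetical power-law CER: development cost in $M."""
    return 0.003 * empty_weight_lb ** 0.92 * complexity

samples = []
for _ in range(20000):
    w = random.triangular(80e3, 120e3, 95e3)   # uncertain empty weight (lb)
    c = random.lognormvariate(0.0, 0.15)       # uncertain complexity factor
    samples.append(cer_airframe_cost(w, c))

samples.sort()
p50 = samples[len(samples) // 2]               # median cost
p90 = samples[int(0.9 * len(samples))]         # 90th-percentile cost
```

A decision maker then sees a cost range (for example the 50th and 90th percentiles) rather than a single deterministic number, which is the point of the probabilistic ALCCA/FLOPS version.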

  3. Training toward Advanced 3D Seismic Methods for CO2 Monitoring, Verification, and Accounting

    Energy Technology Data Exchange (ETDEWEB)

    Christopher Liner

    2012-05-31

    The objective of our work is graduate and undergraduate student training related to improved 3D seismic technology that addresses key challenges related to monitoring movement and containment of CO{sub 2}, specifically better quantification and sensitivity for mapping of caprock integrity, fractures, and other potential leakage pathways. We utilize data and results developed through a previous DOE-funded CO{sub 2} characterization project (DE-FG26-06NT42734) at the Dickman Field of Ness County, KS. Dickman is a type locality for the geology that will be encountered for CO{sub 2} sequestration projects from northern Oklahoma across the U.S. midcontinent to Indiana and Illinois. Since its discovery in 1962, the Dickman Field has produced about 1.7 million barrels of oil from porous Mississippian carbonates with a small structural closure at about 4400 ft drilling depth. Project data include 3.3 square miles of 3D seismic data and 142 wells, with log, some core, and oil/water production data available. Only two wells penetrate the deep saline aquifer. In a previous DOE-funded project, geological and seismic data were integrated to create a geological property model and a flow simulation grid. We believe that sequestration of CO{sub 2} will largely occur in areas of relatively flat geology and simple near surface, similar to Dickman. The challenge is not complex geology, but development of improved, lower-cost methods for detecting natural fractures and subtle faults. Our project used numerical simulation to test methods of gathering multicomponent, full-azimuth data ideal for this purpose. Our specific objectives were to apply advanced seismic methods to aid in quantifying reservoir properties and lateral continuity of CO{sub 2} sequestration targets. The purpose of the current project is graduate and undergraduate student training related to improved 3D seismic technology that addresses key challenges related to monitoring movement and containment of CO{sub 2

  4. Recent advances in the modeling of plasmas with the Particle-In-Cell methods

    Science.gov (United States)

    Vay, Jean-Luc; Lehe, Remi; Vincenti, Henri; Godfrey, Brendan; Lee, Patrick; Haber, Irv

    2015-11-01

    The Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations of plasmas from first principles. The fundamentals of the PIC method were established decades ago, but improvements or variations are continuously being proposed. We report on several recent advances in PIC-related algorithms, including: (a) detailed analysis of the numerical Cherenkov instability and its remediation, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal mode decomposition) geometries, (c) arbitrary-order finite-difference and generalized pseudo-spectral Maxwell solvers, (d) novel analysis of Maxwell solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of Perfectly Matched Layers in high-order and pseudo-spectral solvers. Work supported by US-DOE Contracts DE-AC02-05CH11231 and the US-DOE SciDAC program ComPASS. Used resources of NERSC, supported by US-DOE Contract DE-AC02-05CH11231.
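
The generic PIC cycle that these advances build on (deposit charge, solve the fields, gather, push particles) can be sketched in a few lines. This is a minimal 1D electrostatic toy in normalized units with a plain FFT field solve; it is not one of the pseudo-spectral or high-order solvers the authors develop.

```python
import numpy as np

# Minimal 1D electrostatic PIC loop: cloud-in-cell deposition, FFT Poisson
# solve, linear field gather, and a simple particle push.

np.random.seed(0)
ng, npart, L, dt = 64, 10000, 2 * np.pi, 0.1
dx = L / ng
x = np.random.uniform(0, L, npart)            # particle positions
v = np.random.normal(0.0, 0.1, npart)         # particle velocities
q_over_m = -1.0                               # electrons, normalized units
weight = L / npart                            # charge per macro-particle

for step in range(10):
    # 1) cloud-in-cell charge deposition onto the grid
    g = x / dx
    i = g.astype(int) % ng                    # left grid index
    f = g - np.floor(g)                       # fractional offset
    rho = np.zeros(ng)
    np.add.at(rho, i, (1.0 - f) * weight)
    np.add.at(rho, (i + 1) % ng, f * weight)
    rho = rho / dx - 1.0                      # neutralizing ion background
    # 2) Poisson solve d2(phi)/dx2 = -rho via FFT, then E = -d(phi)/dx
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    k[0] = 1.0                                # avoid divide-by-zero at k=0
    phi_hat = np.fft.fft(rho) / k ** 2
    phi_hat[0] = 0.0                          # drop the mean (gauge choice)
    E = np.fft.ifft(-1j * k * phi_hat).real
    # 3) gather field at particle positions and push (Euler step for brevity)
    Ep = (1.0 - f) * E[i] + f * E[(i + 1) % ng]
    v += q_over_m * Ep * dt
    x = (x + v * dt) % L                      # periodic boundary
```

The numerical Cherenkov and stencil issues the abstract lists arise in the electromagnetic, relativistic version of exactly these deposit/solve/gather/push stages.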

  5. Application of advanced methods for the prognosis of production energy consumption

    International Nuclear Information System (INIS)

    Stetter, R; Witczak, P; Spindler, C; Hertel, J; Staiger, B

    2014-01-01

    This paper, based on a current research project, describes the application of advanced methods that are frequently used in fault-tolerant control and addresses the issue of the prognosis of energy efficiency. Today, the energy a product requires during its operation is the subject of many activities in research and development. However, the energy necessary for the production of goods is very often not analysed in comparable depth. In the field of electronics, studies come to the conclusion that about 80% of the total energy used by a product comes from its production [1]. The energy consumption in production is determined very early in the product development process by designers and engineers, for example through the selection of raw materials, explicit and implicit requirements concerning the manufacturing and assembly processes, or through decisions concerning the product architecture. Today, developers and engineers have at their disposal manifold design and simulation tools which can help to predict the energy consumption during operation relatively accurately. In contrast, tools aiming to predict the energy consumption in production and disposal are not available. This paper presents an explorative study of the use of methods such as Fuzzy Logic to predict the production energy consumption early in the product development process.
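
A fuzzy-logic prognosis of the kind explored here can be sketched as follows. The membership functions, the two rules, and the output energy values are invented placeholders for illustration; the project's actual rule base is not given in the abstract.

```python
# Toy fuzzy-logic estimate of production energy from early design data
# (part mass, number of parts), evaluated long before detailed process
# planning exists.

def ramp(x, lo, hi):
    """Membership degree rising linearly from 0 at lo to 1 at hi."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def predict_energy_kwh(mass_kg, n_parts):
    heavy = ramp(mass_kg, 0.0, 10.0)      # degree of "heavy part"
    complex_ = ramp(n_parts, 0, 50)       # degree of "complex assembly"
    fire_high = max(heavy, complex_)      # rule: heavy OR complex -> high energy
    fire_low = 1.0 - fire_high            # rule: otherwise -> low energy
    # defuzzify by a weighted average of the rule outputs (kWh, assumed)
    return (fire_low * 5.0 + fire_high * 50.0) / (fire_low + fire_high)

estimate = predict_energy_kwh(5.0, 25)    # mid-weight, mid-complexity design
```

The appeal of the fuzzy formulation is exactly what the paper emphasizes: it accepts the vague, qualitative information available early in development and still returns a quantitative prognosis.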

  6. Hydrogen production methods efficiency coupled to an advanced high temperature accelerator driven system

    International Nuclear Information System (INIS)

    Rodríguez, Daniel González; Lira, Carlos Alberto Brayner de Oliveira

    2017-01-01

    The hydrogen economy is one of the most promising concepts for the energy future. In this scenario, oil is replaced by hydrogen as an energy carrier. This hydrogen, rather than oil, must be produced in volumes that currently employed methods cannot supply. In this work, two high-temperature hydrogen production methods coupled to an advanced nuclear system are presented. A new design of a pebble-bed accelerator-driven nuclear system called TADSEA is chosen because of the advantages it offers in terms of transmutation and safety. For the conceptual design of the high-temperature electrolysis process, a detailed computational fluid dynamics model was developed to analyze the solid oxide electrolytic cell, which has a huge influence on the process efficiency. A detailed flowsheet of the high-temperature electrolysis process coupled to TADSEA through a Brayton gas cycle was developed using chemical process simulation software: Aspen HYSYS®. The model with optimized operating conditions produces 0.1627 kg/s of hydrogen, resulting in an overall process efficiency of 34.51%, a value in the range of results reported by other authors. A conceptual design of the iodine-sulfur thermochemical water-splitting cycle was also developed. The overall efficiency of the process was calculated by performing an energy balance, resulting in 22.56%. The efficiency, hydrogen production rate and energy consumption of the proposed models are within the values considered acceptable in the hydrogen economy concept, and are also compatible with the TADSEA design parameters. (author)
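
The overall efficiency quoted above follows from a simple energy balance: hydrogen output rate times its heating value, divided by the thermal input. The hydrogen lower heating value (~120 MJ/kg) and the ~56.6 MW thermal input are assumptions used here to close the balance; the abstract itself reports only the 0.1627 kg/s rate and the 34.51% result.

```python
# Back-of-the-envelope check of the reported electrolysis efficiency.

H2_LHV_MJ_PER_KG = 120.0                      # lower heating value (approx.)

def overall_efficiency(h2_rate_kg_s, thermal_input_mw):
    """Fraction of thermal input recovered as hydrogen chemical energy."""
    return h2_rate_kg_s * H2_LHV_MJ_PER_KG / thermal_input_mw

eta = overall_efficiency(0.1627, 56.6)        # ~0.345, consistent with 34.51%
```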

  7. Hydrogen production methods efficiency coupled to an advanced high temperature accelerator driven system

    Energy Technology Data Exchange (ETDEWEB)

    Rodríguez, Daniel González; Lira, Carlos Alberto Brayner de Oliveira [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Fernández, Carlos García, E-mail: danielgonro@gmail.com, E-mail: mmhamada@ipen.br [Instituto Superior de Tecnologías y Ciencias aplicadas (InSTEC), La Habana (Cuba)

    2017-07-01

    The hydrogen economy is one of the most promising concepts for the energy future. In this scenario, oil is replaced by hydrogen as an energy carrier. This hydrogen, rather than oil, must be produced in volumes that currently employed methods cannot supply. In this work, two high-temperature hydrogen production methods coupled to an advanced nuclear system are presented. A new design of a pebble-bed accelerator-driven nuclear system called TADSEA is chosen because of the advantages it offers in terms of transmutation and safety. For the conceptual design of the high-temperature electrolysis process, a detailed computational fluid dynamics model was developed to analyze the solid oxide electrolytic cell, which has a huge influence on the process efficiency. A detailed flowsheet of the high-temperature electrolysis process coupled to TADSEA through a Brayton gas cycle was developed using chemical process simulation software: Aspen HYSYS®. The model with optimized operating conditions produces 0.1627 kg/s of hydrogen, resulting in an overall process efficiency of 34.51%, a value in the range of results reported by other authors. A conceptual design of the iodine-sulfur thermochemical water-splitting cycle was also developed. The overall efficiency of the process was calculated by performing an energy balance, resulting in 22.56%. The efficiency, hydrogen production rate and energy consumption of the proposed models are within the values considered acceptable in the hydrogen economy concept, and are also compatible with the TADSEA design parameters. (author)

  8. NATO Advanced Research Workshop on Methods and Mechanisms for Producing Ions from Large Molecules

    CERN Document Server

    Ens, Werner

    1991-01-01

    A NATO Advanced Research Workshop on Methods and Mechanisms for Producing Ions from Large Molecules was held at Minaki Lodge, Minaki, Ontario, Canada, from 24 to 28 June 1990. The workshop was hosted by the time-of-flight group of the Department of Physics at the University of Manitoba, and was attended by 64 invited participants from around the world. Twenty-nine invited talks were given and 19 papers were presented as posters. Of the 48 contributions, 38 are included in these proceedings. The conference was organized to study the rapidly changing field of mass spectrometry of biomolecules. Particle-induced desorption (especially with MeV particles) has been the most effective method of producing molecular ions from biomolecules. An important part of the workshop was devoted to recent developments in this field, particularly to progress in understanding the fundamentals of the desorption process. In this respect, the meeting was similar to previous conferences in Marburg, FRG (1978); Paris, F (1980); Uppsala...

  9. ARN Training on Advanced Methods for Internal Dose Assessment: Application of IDEAS Guidelines

    International Nuclear Information System (INIS)

    Rojo, A.M.; Gomez Parada, I.; Puerta Yepes, N.; Gossio, S.

    2010-01-01

    Dose assessment in the case of internal exposure involves the estimation of committed effective dose based on the interpretation of bioassay measurements and on assumptions about the characteristics of the radioactive material, the time pattern and the pathway of intake. The IDEAS Guidelines provide a method to harmonize dose evaluations using criteria and flow-chart procedures to be followed step by step. The EURADOS Working Group 7 'Internal Dosimetry', in collaboration with the IAEA and the Czech Technical University (CTU) in Prague, promoted the 'EURADOS/IAEA Regional Training Course on Advanced Methods for Internal Dose Assessment: Application of IDEAS Guidelines' to broaden and encourage the use of the IDEAS Guidelines; it took place in Prague (Czech Republic) from 2-6 February 2009. The ARN recognized the relevance of this training and requested a place in this activity. After that, the first training course in Argentina took place from 24-28 August to train local internal dosimetry experts. This paper summarizes the main characteristics of this activity. (authors) [es
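
The core estimate that the IDEAS step-by-step procedure harmonizes can be sketched as: divide the bioassay result M(t) by the predicted retained or excreted fraction m(t) to obtain the intake, then multiply by a dose coefficient e(50). The retention table and dose coefficient below are invented placeholders, not ICRP values.

```python
# Intake-and-dose estimate from a single bioassay measurement, the basic
# calculation underlying harmonized internal dose assessment.

def committed_dose(measured_bq, days_after_intake, m_table, e50_sv_per_bq):
    """Committed effective dose (Sv) from one bioassay result.

    m_table maps time since intake (days) to the predicted fraction m(t)
    of the intake present in the measured compartment at that time.
    """
    m_t = m_table[days_after_intake]
    intake_bq = measured_bq / m_t             # back-calculated intake
    return intake_bq * e50_sv_per_bq          # committed effective dose

m_table = {1: 0.5, 7: 0.1, 30: 0.02}          # hypothetical retention fractions
dose = committed_dose(200.0, 7, m_table, 1.0e-8)
# intake = 200 / 0.1 = 2000 Bq; dose = 2000 * 1e-8 = 2e-5 Sv
```

The Guidelines' flow charts then govern the harder parts this sketch hides: choosing m(t) (material type, particle size, intake pattern) and deciding when a single-measurement estimate is sufficient.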

  10. Malignant gliomas: current perspectives in diagnosis, treatment, and early response assessment using advanced quantitative imaging methods

    Directory of Open Access Journals (Sweden)

    Ahmed R

    2014-03-01

    Full Text Available Rafay Ahmed,1 Matthew J Oborski,2 Misun Hwang,1 Frank S Lieberman,3 James M Mountz1 1Department of Radiology, 2Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; 3Department of Neurology and Department of Medicine, Division of Hematology/Oncology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA. Abstract: Malignant gliomas consist of glioblastomas, anaplastic astrocytomas, anaplastic oligodendrogliomas and anaplastic oligoastrocytomas, and some less common tumors such as anaplastic ependymomas and anaplastic gangliogliomas. Malignant gliomas have high morbidity and mortality. Even with optimal treatment, median survival is only 12–15 months for glioblastomas and 2–5 years for anaplastic gliomas. However, recent advances in imaging and quantitative analysis of image data have led to earlier diagnosis of tumors and of tumor response to therapy, providing oncologists with a greater time window for therapy management. In addition, improved understanding of tumor biology, genetics, and resistance mechanisms has enhanced surgical techniques, chemotherapy methods, and radiotherapy administration. After proper diagnosis and institution of appropriate therapy, there is now a vital need for quantitative methods that can sensitively detect malignant glioma response to therapy at early follow-up times, when changes in the management of nonresponders can have their greatest effect. Currently, response is largely evaluated by measuring magnetic resonance contrast and size change, but this approach does not take into account the key biologic steps that precede tumor size reduction. Molecular imaging is ideally suited to measuring early response by quantifying cellular metabolism, proliferation, and apoptosis, activities altered early in treatment. We expect that successful integration of quantitative imaging biomarker assessment into the early phase of clinical trials could provide a novel approach for testing new therapies

  11. BOOK REVIEW: Advanced Topics in Computational Partial Differential Equations: Numerical Methods and Diffpack Programming

    Science.gov (United States)

    Katsaounis, T. D.

    2005-02-01

    The scope of this book is to present well known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. 
Examples solving in parallel simple PDEs using

  12. STEEP STREAMS - Solid Transport Evaluation and Efficiency in Prevention: Sustainable Techniques of Rational Engineering and Advanced MethodS

    Science.gov (United States)

    Armanini, Aronne; Cardoso, Antonio H.; Di Baldassarre, Giuliano; Bellin, Alberto; Breinl, Korbinian; Canelas, Ricardo B.; Larcher, Michele; Majone, Bruno; Matos, Jorges; Meninno, Sabrina; Nucci, Elena; Rigon, Riccardo; Rosatti, Giorgio; Zardi, Dino

    2017-04-01

    The STEEP STREAMS (Solid Transport Evaluation and Efficiency in Prevention: Sustainable Techniques of Rational Engineering and Advanced MethodS) project is a collaboration among the Universities of Trento, Uppsala and Lisbon, which joined in a consortium within the ERANET Water JPI call WaterWorks2014. The aim of the project is to produce new rational criteria for the design of protection works against debris flows, a phenomenon consisting of hyper-concentrated flows of water and sediments classified as catastrophic events, typical of small mountainous basins and triggered by intense rainstorms. Such events are non-stationary phenomena that arise in a very short time, and their recurrence is rather difficult to determine. Compared to flash floods, they are more difficult to anticipate, mostly since they are triggered by convective precipitation events, posing a higher risk of damage and even loss of human lives. These extreme events occur almost annually across Europe, though the formal return period at an exposed site is much larger. Recently, an increase in the intensity and frequency of small-scale storm events, leading to extreme solid transport in steep channels, has been recognized as one of the effects of climate change. In this context, one of the key challenges of this project is the use of comparatively coarse RCM projections for the small catchments examined in STEEP STREAMS. Given these changes, conventional protection works and their design criteria may not suffice to provide adequate levels of protection to human life and urban settlements. These structures create a storage area upstream of the alluvial fans and the settlements, thereby reducing the need for channelization in areas often constrained by urban regulations. To optimize the attenuation, and in particular to reduce the peak of the solid mass flux, the deposition basin must be controlled by a slit check dam, capable of inducing a controlled sedimentation of the solid mass flux. In

  13. An Advanced Electrospinning Method of Fabricating Nanofibrous Patterned Architectures with Controlled Deposition and Desired Alignment

    Science.gov (United States)

    Rasel, Sheikh Md

    We introduce a versatile advanced electrospinning method for fabricating various kinds of nanofibrous patterns with desired alignment, a controlled amount of deposition, and locally variable density within the architectures. In this method, we employed multiple electrodes whose potentials were altered within milliseconds by a microprocessor-based control system. A key advantage of this method is that the electric field, and with it the charge-carrying fibers, could be switched rapidly from one electrode's location to another, so electrospun fibers could be deposited on designated areas with the desired alignment. A wide range of nanofibrous patterned architectures were constructed using a proper arrangement of multiple electrodes. By controlling the concurrent activation time of two adjacent electrodes, we demonstrated that the amount of fiber going into the pattern can be adjusted and the desired alignment of electrospun fibers can be obtained. We also revealed that the deposition density of electrospun fibers in different areas of the patterned architectures can be varied. We showed that by controlling the deposition time between two adjacent electrodes, a number of functionally graded patterns can be generated with uniaxial alignment. We also demonstrated that this handy method is capable of producing random, aligned, and multidirectional nanofibrous mats by engaging a number of electrodes and switching them in desired patterns. A comprehensive study using the finite element method was carried out to understand the effects of the electric field. Simulation results revealed that the electric field strength changes rapidly according to the electrode control switch patterns. Nanofibrous polyvinyl alcohol (PVA) scaffolds and composites reinforced with wollastonite and wood flour were fabricated using a rotating-drum electrospinning technique. 
Morphological, mechanical, and thermal properties were characterized on PVA/wollastonite and PVA/wood flour nanocomposites
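
A hedged sketch of the electrode-switching idea described above: if the active potential cycles among collector electrodes in milliseconds, the share of fibers an electrode attracts scales roughly with its fraction of the active time. The electrode names and dwell times are illustrative only, not the schedules used in the work.

```python
# Relating a millisecond electrode-switching schedule to the expected
# distribution of deposited fiber across the patterned architecture.

def deposition_shares(dwell_ms):
    """Expected fiber share per electrode from its active-time fraction."""
    total = sum(dwell_ms.values())
    return {name: ms / total for name, ms in dwell_ms.items()}

# electrode "A" is held active three times longer per cycle than "B" or "C",
# so it should collect a proportionally denser deposit
shares = deposition_shares({"A": 30, "B": 10, "C": 10})
```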

  14. Limitations of the Conventional Phase Advance Method for Constant Power Operation of the Brushless DC Motor

    International Nuclear Information System (INIS)

    Lawler, J.S.

    2001-01-01

    The brushless dc motor (BDCM) has high power density and efficiency relative to other motor types. These properties make the BDCM well suited for applications in electric vehicles, provided a method can be developed for driving the motor over the 4 to 6:1 constant power speed range (CPSR) required by such applications. The present state of the art for constant power operation of the BDCM is conventional phase advance (CPA) [1]. In this paper, we identify key limitations of CPA. It is shown that CPA has effective control over the developed power but that the current magnitude is relatively insensitive to power output and is inversely proportional to motor inductance. If the motor inductance is low, then the rms current at rated power and high speed may be several times larger than the current rating. The inductance required to maintain the rms current within rating is derived analytically and is found to be large relative to that of BDCM designs using high-strength rare-earth magnets. Thus, CPA requires a BDCM with a large equivalent inductance
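
The inverse dependence on inductance can be seen from a first-order phasor estimate: with supply fundamental V, back-EMF E = k·ω, advance angle δ, and an impedance dominated by the reactance ωL, the current magnitude is |V − E·e^(−jδ)|/(ωL). This generic estimate stands in for, and is not, the paper's CPA analysis; all numerical values below are arbitrary.

```python
import math

# Phasor estimate of machine current under phase advance: halving the
# inductance roughly doubles the high-speed current demand.

def current_magnitude(v_supply, k_emf, omega, inductance, advance_rad):
    e_bemf = k_emf * omega                    # back-EMF grows with speed
    num = math.sqrt(v_supply ** 2 + e_bemf ** 2
                    - 2.0 * v_supply * e_bemf * math.cos(advance_rad))
    return num / (omega * inductance)         # reactance-limited current

i_low_L = current_magnitude(200.0, 0.1, 2000.0, 1e-3, 0.5)
i_high_L = current_magnitude(200.0, 0.1, 2000.0, 2e-3, 0.5)
# same operating point, doubled inductance -> half the current
```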

  15. Evaluation of DNBR calculation methods for advanced digital core protection system

    International Nuclear Information System (INIS)

    Ihn, W. K.; Hwang, D. H.; Pak, Y. H.; Yoon, T. Y.

    2003-01-01

    This study evaluated on-line DNBR calculation methods for an advanced digital core protection system in PWRs, i.e., subchannel analysis and group-channel analysis. The subchannel code MATRA and the four-channel codes CETOP-D and CETOP2 were used here. CETOP2 is the most simplified DNBR analysis code and is implemented in the core protection calculators of the Korean standard nuclear power plants. The detailed subchannel code TORC was used as the reference DNBR calculation. The DNBR uncertainty and margin were compared using allowable operating conditions at Yonggwang nuclear units 3-4. The MATRA code using a nine lumped-channel model resulted in a smaller mean and larger standard deviation of the DNBR error distribution. CETOP-D and CETOP2 showed a conservatively biased mean and relatively smaller standard deviation of the DNBR error distribution. Relative to CETOP2, MATRA and CETOP-D showed a significant increase of the available DNBR margin at normal operating conditions. Taking the DNBR uncertainty into account, MATRA and CETOP-D were estimated to increase the net DNBR margin over CETOP2 by 2.5%-9.8% and 2.5%-3.3%, respectively

  16. AREVA Advanced Fuel Design and Codes and Methods - Increasing Reliability, Operating Margin and Efficiency in Operation

    Energy Technology Data Exchange (ETDEWEB)

    Frichet, A.; Mollard, P.; Gentet, G.; Lippert, H. J.; Curva-Tivig, F.; Cole, S.; Garner, N.

    2014-07-01

    For three decades, AREVA has been incrementally implementing upgrades in BWR and PWR fuel design and codes and methods, leading to ever-greater fuel efficiency and easier licensing. For PWRs, AREVA is implementing upgraded versions of its HTP{sup TM} and AFA 3G technologies called HTP{sup TM}-I and AFA3G-I. These fuel assemblies feature improved robustness and dimensional stability through the ultimate optimization of their hold-down system, the use of Q12, the AREVA advanced quaternary alloy for guide tubes, the increase in their wall thickness and the stiffening of the spacer-to-guide-tube connection. An even bigger step forward has been achieved, as AREVA has successfully developed and introduced to the market the GAIA product, which maintains the resistance to grid-to-rod fretting (GTRF) of the HTP{sup TM} product while providing additional thermal-hydraulic margin and high resistance to fuel assembly bow. (Author)

  17. Advances in estimation methods of vegetation water content based on optical remote sensing techniques

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Quantitative estimation of vegetation water content (VWC) using optical remote sensing techniques is helpful in forest fire assessment, agricultural drought monitoring and crop yield estimation. This paper reviews research advances in VWC retrieval using spectral reflectance, spectral water index and radiative transfer model (RTM) methods. It also evaluates the reliability of VWC estimation using spectral water indices from observation data and the RTM. Focusing on the two main definitions of VWC, the fuel moisture content (FMC) and the equivalent water thickness (EWT), the retrieval accuracies of FMC and EWT using vegetation water indices are analyzed. Moreover, the measured information and the dataset are used to estimate VWC; the results show there are significant correlations among three kinds of vegetation water indices (i.e., WSI, NDII, NDWI1640, WI/NDVI) and the canopy FMC of winter wheat (n=45). Finally, future development directions of VWC detection based on optical remote sensing techniques are summarized.
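
The spectral water indices named above can be written out from their common definitions: reflectance ratios contrasting the near-infrared with water-absorption bands. The band centres follow common usage and may differ from those in the review; WSI is omitted because its definition varies across papers.

```python
# Common spectral water indices computed from surface reflectance values.

def ndwi1640(r_nir, r_swir1640):
    """NDWI using the 1640 nm SWIR band (NIR ~860 nm)."""
    return (r_nir - r_swir1640) / (r_nir + r_swir1640)

def ndii(r_nir, r_swir):
    """Normalized Difference Infrared Index (NIR ~850 nm, SWIR ~1650 nm)."""
    return (r_nir - r_swir) / (r_nir + r_swir)

def ndvi(r_nir, r_red):
    """Normalized Difference Vegetation Index."""
    return (r_nir - r_red) / (r_nir + r_red)

def wi_over_ndvi(r900, r970, r_nir, r_red):
    """Water Index (R900/R970) normalized by NDVI."""
    return (r900 / r970) / ndvi(r_nir, r_red)

# a well-watered canopy: strong NIR reflectance, SWIR depressed by leaf water
v = ndwi1640(r_nir=0.45, r_swir1640=0.25)
```

Higher leaf water content deepens the SWIR absorption, so both NDWI1640 and NDII increase with canopy water, which is why they correlate with FMC and EWT.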

  18. Hierarchical Coupling of First-Principles Molecular Dynamics with Advanced Sampling Methods.

    Science.gov (United States)

    Sevgen, Emre; Giberti, Federico; Sidky, Hythem; Whitmer, Jonathan K; Galli, Giulia; Gygi, Francois; de Pablo, Juan J

    2018-05-14

    We present a seamless coupling of a suite of codes designed to perform advanced sampling simulations, with a first-principles molecular dynamics (MD) engine. As an illustrative example, we discuss results for the free energy and potential surfaces of the alanine dipeptide obtained using both local and hybrid density functionals (DFT), and we compare them with those of a widely used classical force field, Amber99sb. In our calculations, the efficiency of first-principles MD using hybrid functionals is augmented by hierarchical sampling, where hybrid free energy calculations are initiated using estimates obtained with local functionals. We find that the free energy surfaces obtained from classical and first-principles calculations differ. Compared to DFT results, the classical force field overestimates the internal energy contribution of high free energy states, and it underestimates the entropic contribution along the entire free energy profile. Using the string method, we illustrate how these differences lead to different transition pathways connecting the metastable minima of the alanine dipeptide. In larger peptides, those differences would lead to qualitatively different results for the equilibrium structure and conformation of these molecules.
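The free-energy surfaces discussed here are, at bottom, Boltzmann-inverted probability distributions of sampled configurations, F(x) = -kT ln P(x) + const. The sketch below illustrates only that relationship; the double-well potential, kT = 1, and the rejection sampler are illustrative stand-ins for an actual MD trajectory.

```python
import numpy as np

# Estimate a free-energy profile from samples via F(x) = -kT * ln P(x).
# Samples come from a synthetic double-well, not an MD trajectory.
kT = 1.0
rng = np.random.default_rng(0)

def f_true(x):
    return (x**2 - 1.0) ** 2          # wells at x = -1 and x = +1

# Draw from the Boltzmann distribution by simple rejection sampling.
xs = []
while len(xs) < 20000:
    x = rng.uniform(-2, 2)
    if rng.random() < np.exp(-f_true(x) / kT):
        xs.append(x)
xs = np.array(xs)

# Histogram the samples and Boltzmann-invert the density.
hist, edges = np.histogram(xs, bins=40, range=(-2, 2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
mask = hist > 0                       # skip empty bins before taking log
F = -kT * np.log(hist[mask])          # free energy up to a constant
F -= F.min()
print("free-energy minimum near x =", round(centers[mask][np.argmin(F)], 2))
```

The recovered minimum should sit near one of the two wells at x = ±1, up to histogram noise.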

  19. Simulation and design method in advanced nanomaterials fine-tuning for some perovskites type AHE study

    International Nuclear Information System (INIS)

    Mohorianu, S.; Lozovan, M.; Rusu, F.-V.

    2009-01-01

    Nanostructured materials with tailored properties are now essential for future applications in industrial manufacturing. The purpose of our work is to extract valuable information from data by using distributed computer processing and storage technologies, as well as Artificial Neural Networks (ANN) and the development of advanced algorithms for knowledge discovery. We describe how a Simulation and Design Method (SDM), based on our latest results, is applied to two perovskite-type materials, La0.7Ca0.3MnO3 and La0.7Sr0.3MnO3, in order to study the Anomalous Hall Effect (AHE). Our new ANN model is intended to contribute to the effort to improve the properties of new materials. It implements and uses the basic building blocks of neural computation, such as multi-layer perceptrons. An ANN can learn associative patterns and approximate the functional relationship between sets of inputs and outputs. Modeling and simulation techniques affect all stages in the development and improvement of new materials, from the initial formation of concepts to the synthesis and characterization of properties. A new SDM with ANN for nanomagnetic materials is given. Neural networks have been applied successfully in the identification and classification of nanomagnetic characteristics from large amounts of data. (authors)
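As a concrete illustration of the "basic building blocks of neural computation" mentioned above, here is a minimal multi-layer perceptron with one tanh hidden layer, trained by full-batch gradient descent on a toy regression task. The architecture, data, and hyperparameters are illustrative assumptions, not the authors' SDM/ANN model.

```python
import numpy as np

# Minimal multi-layer perceptron sketch (illustrative only).
rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden, n_out):
    return {
        "W1": rng.normal(0, 0.5, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 0.5, (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])       # hidden-layer activations
    return h, h @ p["W2"] + p["b2"]          # linear output layer

def train_step(p, X, y, lr=0.1):
    h, out = forward(p, X)
    err = out - y                            # dL/dout for 0.5 * MSE loss
    # Backpropagate gradients through the two layers.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ p["W2"].T) * (1 - h**2)      # tanh derivative
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    for k, g in zip(("W1", "b1", "W2", "b2"), (gW1, gb1, gW2, gb2)):
        p[k] -= lr * g
    return 0.5 * np.mean(err**2)

# Toy mapping: learn y = sin(x) on a coarse grid.
X = np.linspace(-2, 2, 40).reshape(-1, 1)
y = np.sin(X)
params = init_mlp(1, 16, 1)
losses = [train_step(params, X, y) for _ in range(500)]
print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```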

  20. Advancing Higher Education with Mobile Learning Technologies: Cases, Trends, and Inquiry-Based Methods

    Science.gov (United States)

    Keengwe, Jared, Ed.; Maxfield, Marian B., Ed.

    2015-01-01

    Rapid advancements in technology are creating new opportunities for educators to enhance their classroom techniques with digital learning resources. Once used solely outside of the classroom, smartphones, tablets, and e-readers are becoming common in many school settings. "Advancing Higher Education with Mobile Learning Technologies: Cases,…

  1. Advanced numerical methods for uncertainty reduction when predicting heat exchanger dynamic stability limits: Review and perspectives

    International Nuclear Information System (INIS)

    Longatte, E.; Baj, F.; Hoarau, Y.; Braza, M.; Ruiz, D.; Canteneur, C.

    2013-01-01

    Highlights: ► Proposal of hybrid computational methods for investigating dynamical system stability. ► Modeling turbulence disequilibrium due to interaction with moving solid boundaries. ► Providing a computational procedure for approximating large-size system solutions through model reduction. -- Abstract: This article reviews recent and current developments in the modeling and advanced numerical methods used to simulate large-size systems involving multi-physics in the field of mechanics. It addresses the complex issue of the stability analysis of dynamical systems subjected to external turbulent flows and aims to establish accurate stability maps applicable to heat exchanger design. The purpose is to provide dimensionless stability limit modeling that is suitable for a variety of configurations and is as accurate as possible in spite of the large scale of the systems to be considered. The challenge lies in predicting local effects that may impact global systems. A combination of several strategies suited concurrently to multi-physics, multi-scale, and large-size system computation is therefore required. Based on empirical concepts, the heuristic models currently used in the framework of standard stability analysis suffer from a lack of predictive capability. On the other hand, numerical approaches based on fully-coupled fluid–solid dynamics computation remain expensive due to the multi-physics nature of the problem and the large number of degrees of freedom involved. In this context, since experimentation cannot be achieved and numerical simulation is unavoidable but prohibitive, a hybrid strategy is proposed in order to take advantage of both numerical local solutions and empirical global solutions

  2. Quantifying export flows of used electronics: advanced methods to resolve used goods within trade data.

    Science.gov (United States)

    Duan, Huabo; Miller, T Reed; Gregory, Jeremy; Kirchain, Randolph

    2014-03-18

    There is limited convincing quantitative data on the export of used electronics from the United States (U.S.). Thus, we advance a methodology to quantify the export flows of whole units of used electronics from the U.S. using detailed export trade data, and demonstrate the methodology using laptops. Since used electronics are not explicitly identified in export trade data, we hypothesize that exports with a low unit value, below a used-new threshold specific to a destination world region, are used. The importance of using the most disaggregated trade data set available when resolving used and new goods is illustrated. Two detailed U.S. export trade data sets were combined to arrive at quantities and unit values for each port, mode of transport, month, trade partner country, and trade code. We add rigor to the determination of the used-new threshold by utilizing both the neighborhood valley-emphasis method (NVEM) and published sales prices. This analysis found that 748 to 1199 thousand units of used laptops were exported from the U.S. in 2010, of which 78-81% were destined for non-OECD countries. Asia was found to be the largest destination of used laptop exports across all used-new threshold methods. Latin America and the Caribbean was the second largest recipient of these exports. North America and Europe also received used laptops from the U.S. Only a small fraction of used laptops was exported to Africa. However, these quantities are lower-bound estimates because not all shipments of used laptops may be shipped under the proper laptop trade code. Still, this approach has the potential to give insight into the quantity and destinations of the exports if applied to all used electronics product types across a series of years.
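The valley-emphasis idea behind NVEM can be sketched as an Otsu-style threshold search in which the between-class separability score is down-weighted where the histogram is dense, so the chosen threshold falls in the "valley" between the used and new unit-value clusters. The simplified implementation and the synthetic price clusters below are illustrative; the paper's exact neighborhood weighting and trade data are not reproduced.

```python
import numpy as np

# Simplified valley-emphasis threshold selection on a unit-value histogram.
def valley_emphasis_threshold(values, bins=64):
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()                      # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    best_score, best_t = -1.0, edges[0]
    for t in range(1, bins):
        w1, w2 = p[:t].sum(), p[t:].sum()
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (p[:t] * centers[:t]).sum() / w1
        mu2 = (p[t:] * centers[t:]).sum() / w2
        # Valley weight (1 - p[t]) favors thresholds in sparse bins.
        score = (1 - p[t]) * (w1 * mu1**2 + w2 * mu2**2)
        if score > best_score:
            best_score, best_t = score, edges[t]
    return best_t

# Synthetic bimodal unit values: a "used" cluster near $150 and a
# "new" cluster near $900 (illustrative numbers, not trade data).
rng = np.random.default_rng(1)
vals = np.concatenate([rng.normal(150, 30, 500), rng.normal(900, 100, 500)])
t = valley_emphasis_threshold(vals)
print(f"used-new threshold: ${t:.0f}")
```

On well-separated clusters like these, the threshold lands in the sparse region between the two price modes.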

  3. Response monitoring using quantitative ultrasound methods and supervised dictionary learning in locally advanced breast cancer

    Science.gov (United States)

    Gangeh, Mehrdad J.; Fung, Brandon; Tadayyon, Hadi; Tran, William T.; Czarnota, Gregory J.

    2016-03-01

    A non-invasive computer-aided-theragnosis (CAT) system was developed for the early assessment of responses to neoadjuvant chemotherapy in patients with locally advanced breast cancer. The CAT system was based on quantitative ultrasound spectroscopy methods comprising several modules including feature extraction, a metric to measure the dissimilarity between "pre-" and "mid-treatment" scans, and a supervised learning algorithm for the classification of patients to responders/non-responders. One major requirement for the successful design of a high-performance CAT system is to accurately measure the changes in parametric maps before treatment onset and during the course of treatment. To this end, a unified framework based on Hilbert-Schmidt independence criterion (HSIC) was used for the design of feature extraction from parametric maps and the dissimilarity measure between the "pre-" and "mid-treatment" scans. For the feature extraction, HSIC was used to design a supervised dictionary learning (SDL) method by maximizing the dependency between the scans taken from "pre-" and "mid-treatment" with "dummy labels" given to the scans. For the dissimilarity measure, an HSIC-based metric was employed to effectively measure the changes in parametric maps as an indication of treatment effectiveness. The HSIC-based feature extraction and dissimilarity measure used a kernel function to nonlinearly transform input vectors into a higher dimensional feature space and computed the population means in the new space, where enhanced group separability was ideally obtained. The results of the classification using the developed CAT system indicated an improvement of performance compared to a CAT system with basic features using histogram of intensity.

  4. Advances in the discrete ordinates and finite volume methods for the solution of radiative heat transfer problems in participating media

    International Nuclear Information System (INIS)

    Coelho, Pedro J.

    2014-01-01

    Many methods are available for the solution of radiative heat transfer problems in participating media. Among these, the discrete ordinates method (DOM) and the finite volume method (FVM) are two of the most widely used. They provide a good compromise between accuracy and computational requirements, and they are relatively easy to integrate into CFD codes. This paper surveys recent advances in these numerical methods. Developments concerning the grid structure (e.g., new formulations for axisymmetrical geometries, body-fitted structured and unstructured meshes, embedded boundaries, multi-block grids, local grid refinement), the spatial discretization scheme, and the angular discretization scheme are described. Progress related to solution accuracy, solution algorithms, and alternative formulations, such as the modified DOM and FVM, the even-parity formulation, the discrete-ordinates interpolation method and the method of lines, as well as parallelization strategies, is addressed. The application to non-gray media, variable refractive index media, and transient problems is also reviewed. - Highlights: • We survey recent advances in the discrete ordinates and finite volume methods. • Developments in spatial and angular discretization schemes are described. • Progress in solution algorithms and parallelization methods is reviewed. • Advances in the transient solution of the radiative transfer equation are appraised. • Non-gray media and variable refractive index media are briefly addressed
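To make the discrete-ordinates idea concrete, the sketch below solves the simplest possible case, a purely absorbing 1-D gray slab, by upwind-sweeping the radiative transfer equation along Gauss-Legendre ordinates and comparing against exact Beer-Lambert attenuation. The geometry, absorption coefficient, and grid are illustrative assumptions, not examples from the survey.

```python
import numpy as np

# 1-D discrete ordinates (S_N) sketch: sweep mu * dI/dx = -kappa * I
# along each ordinate of a purely absorbing gray slab.
kappa, L, n_cells = 1.0, 1.0, 2000
dx = L / n_cells
mu, w = np.polynomial.legendre.leggauss(8)   # ordinates and weights
I_in = 1.0                                    # boundary intensity at x = 0

# Implicit upwind cell balance for mu > 0:
#   mu * (I_i - I_{i-1}) / dx = -kappa * I_i
#   =>  I_i = I_{i-1} / (1 + kappa * dx / mu)
I_exit = {}
for m in mu[mu > 0]:
    I = I_in
    for _ in range(n_cells):
        I = I / (1.0 + kappa * dx / m)
    I_exit[m] = I

# Transmitted half-range flux, and a check against exact attenuation.
flux = sum(wm * m * I_exit[m] for m, wm in zip(mu, w) if m > 0)
for m, val in I_exit.items():
    print(f"mu={m:.3f}: S_N {val:.5f}  exact {np.exp(-kappa * L / m):.5f}")
print(f"transmitted flux: {flux:.4f}")
```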

  5. Advanced Optical Diagnostic Methods for Describing Fuel Injection and Combustion Flowfield Phenomena

    Science.gov (United States)

    Locke, Randy J.; Hicks, Yolanda R.; Anderson, Robert C.

    2004-01-01

    Over the past decade, advanced optical diagnostic techniques have evolved and matured to a point where they are now widely applied in the interrogation of high-pressure combusting flows. At NASA Glenn Research Center (GRC), imaging techniques have been used successfully in on-going work to develop the next generation of commercial aircraft gas turbine combustors. This work has centered on providing a means by which researchers and designers can obtain direct visual observation and measurements of the fuel injection/mixing/combustion processes and the combustor flowfield in two- and three-dimensional views at actual operating conditions. Obtaining a thorough understanding of the chemical and physical processes at the extreme operating conditions of the next generation of combustors is critical to reducing emissions and increasing fuel efficiency. To accomplish this and other tasks, the diagnostic team at GRC has designed and constructed optically accessible, high-pressure, high-temperature flame tubes and sector rigs capable of optically probing the 20-60 atm flowfields of these aero-combustors. Among the techniques employed at GRC are planar laser-induced fluorescence (PLIF) for imaging molecular species as well as liquid and gaseous fuel; planar light scattering (PLS) for imaging fuel sprays and droplets; and spontaneous Raman scattering for species and temperature measurement. Using these techniques, optical measurements never before possible have been made in the actual environments of liquid-fueled gas turbines. 2-D mapping of such parameters as species distribution (e.g. OH, NO, and kerosene-based jet fuel), injector spray angle, and fuel/air distribution are just some of the measurements that are now routinely made. Optical imaging has also provided prompt feedback to researchers regarding the effects of changes in the fuel injector configuration on both combustor performance and flowfield character. Several injector design modifications and improvements have

  6. Advanced Instrumentation and Control Methods for Small and Medium Reactors with IRIS Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    J. Wesley Hines; Belle R. Upadhyaya; J. Michael Doster; Robert M. Edwards; Kenneth D. Lewis; Paul Turinsky; Jamie Coble

    2011-05-31

    Development and deployment of small-scale nuclear power reactors and their maintenance, monitoring, and control are part of the mission under the Small Modular Reactor (SMR) program. The objectives of this NERI-consortium research project are to investigate, develop, and validate advanced methods for sensing, controlling, monitoring, diagnosis, and prognosis of these reactors, and to demonstrate the methods with application to one of the proposed integral pressurized water reactors (IPWR). For this project, the IPWR design by Westinghouse, the International Reactor Innovative and Secure (IRIS), has been used to demonstrate the techniques developed under this project. The research focuses on three topical areas with the following objectives. Objective 1 - Develop and apply simulation capabilities and sensitivity/uncertainty analysis methods to address sensor deployment analysis and small grid stability issues. Objective 2 - Develop and test an autonomous and fault-tolerant control architecture and apply it to the IRIS system and an experimental flow control loop, with extensions to multiple reactor modules, nuclear desalination, and an optimal sensor placement strategy. Objective 3 - Develop and test an integrated monitoring, diagnosis, and prognosis system for SMRs using the IRIS as a test platform, and integrate process and equipment monitoring (PEM) and process and equipment prognostics (PEP) toolboxes. The research tasks are focused on meeting the unique needs of reactors that may be deployed to remote locations or to developing countries with limited support infrastructure. These applications will require smaller, robust reactor designs with advanced technologies for sensors, instrumentation, and control. An excellent overview of SMRs is given in an article by Ingersoll (2009). The article refers to these as deliberately small reactors. Most of these have modular characteristics, with multiple units deployed at the same plant site.
Additionally, the topics focus

  7. Analyzing Planck and low redshift data sets with advanced statistical methods

    Science.gov (United States)

    Eifler, Tim

    The recent ESA/NASA Planck mission has provided a key data set to constrain cosmology that is most sensitive to the physics of the early Universe, such as inflation and primordial non-Gaussianity (Planck 2015 results XIII). In combination with cosmological probes of the Large-Scale Structure (LSS), the Planck data set is a powerful source of information to investigate late-time phenomena (Planck 2015 results XIV), e.g. the accelerated expansion of the Universe, the impact of baryonic physics on the growth of structure, and the alignment of galaxies in their dark matter halos. It is the main objective of this proposal to re-analyze the archival Planck data, 1) with different, more recently developed statistical methods for cosmological parameter inference, and 2) to combine Planck and ground-based observations in an innovative way. We will make the corresponding analysis framework publicly available and believe that it will set a new standard for future CMB-LSS analyses. Advanced statistical methods, such as the Gibbs sampler (Jewell et al 2004, Wandelt et al 2004), have been critical in the analysis of Planck data. More recently, Approximate Bayesian Computation (ABC, see Weyant et al 2012, Akeret et al 2015, Ishida et al 2015, for cosmological applications) has matured into an interesting tool for cosmological likelihood analyses. It circumvents several assumptions that enter the standard Planck (and most LSS) likelihood analyses, most importantly the assumption that the functional form of the likelihood of the CMB observables is a multivariate Gaussian. Beyond applying new statistical methods to Planck data in order to cross-check and validate existing constraints, we plan to combine Planck and DES data in a new and innovative way and run multi-probe likelihood analyses of CMB and LSS observables. The complexity of multi-probe likelihood analyses scales (non-linearly) with the level of correlations amongst the individual probes that are included. For the multi
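The rejection-ABC idea mentioned above can be sketched in a few lines: draw parameters from the prior, simulate data, and keep parameters whose summary statistic lands within a tolerance of the observed one, with no likelihood ever written down. The toy Gaussian model, prior range, and tolerance below are illustrative assumptions, not a CMB analysis.

```python
import numpy as np

# Minimal rejection-ABC sketch: infer the mean mu of a Gaussian from a
# summary statistic without evaluating any likelihood.
rng = np.random.default_rng(42)
observed = rng.normal(3.0, 1.0, size=200)     # "data" with true mu = 3
s_obs = observed.mean()                        # summary statistic

def simulate(mu, n=200):
    return rng.normal(mu, 1.0, size=n)

accepted = []
for _ in range(20000):
    mu = rng.uniform(-10, 10)                  # draw from a flat prior
    s_sim = simulate(mu).mean()
    if abs(s_sim - s_obs) < 0.1:               # accept if summaries agree
        accepted.append(mu)

posterior = np.array(accepted)
print(f"{len(posterior)} accepted, posterior mean ~ {posterior.mean():.2f}")
```

Shrinking the tolerance (and choosing richer summary statistics) trades acceptance rate against fidelity to the true posterior.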

  8. Smart Sensing of the Aux. Feed-water Pump Performance in NPP Severe Accidents Using Advanced GMDH Method

    Energy Technology Data Exchange (ETDEWEB)

    No, Young Gyu; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2016-05-15

    In order to develop and verify the models, a number of data sets obtained by simulating a station blackout (SBO) scenario for the optimized power reactor 1000 (OPR1000) using the MARS code were used. Most monitoring systems for components have been based on directly measured data. However, it is very difficult to acquire data related to the status of safety-critical components. Therefore, it is necessary to develop a new method that combines data-based modeling equipped with a learning system and data mining techniques. Many data-based modeling methods have been applied successfully in nuclear engineering, such as signal validation, plant diagnostics, and event identification. Data mining is the process of analyzing data from different perspectives and summarizing it into useful information. In this study, a smart sensing technique was developed using an advanced group method of data handling (GMDH) model. The original GMDH is an inductive, self-organizing algebraic model. The advanced GMDH model is equipped with a fuzzy concept: it enhances the original GMDH model by reducing the effect of outliers and noise, using different weightings according to importance, as specified by the fuzzy membership grade. The developed model was verified using SBO accident simulation data for the OPR1000 nuclear power plant acquired with the MARS code. The advanced GMDH model was trained using the simulated development data and verified with simulated test data; the development and test data sets were independent. The simulation results show that the performance of the developed advanced GMDH model was very satisfactory, as shown in Table 1. Therefore, if the developed model can be optimized using diverse and specific data, it will be possible to predict the performance of the Aux. feedwater pump accurately.
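A basic (non-fuzzy) GMDH layer can be sketched as follows: fit a quadratic polynomial to every pair of inputs and keep the candidates that generalize best to held-out data. The advanced fuzzy weighting described above is omitted, and the synthetic data below are illustrative, not MARS simulation output.

```python
import numpy as np
from itertools import combinations

# Basic GMDH layer sketch: for each input pair (xi, xj) fit
#   y = a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2
# and select candidates by validation error.
def pair_features(xi, xj):
    return np.column_stack([np.ones_like(xi), xi, xj, xi*xj, xi**2, xj**2])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=2):
    results = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        A = pair_features(X_tr[:, i], X_tr[:, j])
        coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
        pred = pair_features(X_va[:, i], X_va[:, j]) @ coef
        mse = np.mean((pred - y_va) ** 2)        # selection criterion
        results.append((mse, (i, j), coef))
    results.sort(key=lambda r: r[0])
    return results[:keep]                         # best surviving models

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = 1.0 + 2 * X[:, 0] * X[:, 1] + 0.05 * rng.normal(size=300)  # true pair (0, 1)
best = gmdh_layer(X[:200], y[:200], X[200:], y[200:])
print("best pair:", best[0][1], "val MSE:", round(best[0][0], 4))
```

Stacking such layers (feeding surviving outputs back in as inputs) gives the full inductive, self-organizing GMDH network.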

  9. Advances in methods of commercial FBR core characteristics analyses. Investigations of a treatment of the double-heterogeneity and a method to calculate homogenized control rod cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Sugino, Kazuteru [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center; Iwai, Takehiko

    1998-07-01

    A standard database for FBR core nuclear design is under development in order to improve the accuracy of FBR design calculations. As part of the development, we investigated an improved treatment of the double-heterogeneity and a method to calculate homogenized control rod cross sections in a commercial reactor geometry, to improve the analytical accuracy of commercial FBR core characteristics. As an improvement in the treatment of the double-heterogeneity, we derived a new method (the direct method) and compared both this and the conventional method with continuous-energy Monte Carlo calculations. In addition, we investigated the applicability of the reaction rate ratio preservation method as an advanced method to calculate homogenized control rod cross sections. The present studies gave the following information: (1) An improved treatment of the double-heterogeneity: for criticality, the conventional method showed good agreement with the Monte Carlo result within one-sigma standard deviation; the direct method was consistent with the conventional one. Preliminary evaluation of effects on core characteristics other than criticality showed that the effect of the double-heterogeneity on sodium void reactivity (coolant reactivity) was large. (2) An advanced method to calculate homogenized control rod cross sections: for control rod worths, the reaction rate ratio preservation method agreed with calculations in which the control rod heterogeneity was included in the core geometry; in the Monju control rod worth analysis, the present method overestimated control rod worths by 1 to 2% compared with the conventional method, but these differences were caused by the more accurate model in the present method, and it is considered that this method is more reliable than the conventional one. These two methods investigated in this study can be directly applied to core characteristics other than criticality or control rod worth. Thus it is concluded that these methods will

  10. Methods of validating the Advanced Diagnosis and Warning system for aircraft ICing Environments (ADWICE)

    Science.gov (United States)

    Rosczyk, S.; Hauf, T.; Leifeld, C.

    2003-04-01

    In-flight icing is one of the most hazardous problems in aviation. It was determined to be a contributing factor in more than 800 incidents worldwide. And though the meteorological factors of airframe icing are becoming more and more transparent, they have yet to be integrated into the Federal Aviation Administration's (FAA) certification rules. Therefore, the best way to enhance aviation safety is to know the areas of dangerous icing conditions in order to avoid flying in them. For this reason, the German Weather Service (DWD), the Institute for Atmospheric Physics at the German Aerospace Centre (DLR), and the Institute of Meteorology and Climatology (ImuK) of the University of Hanover started developing ADWICE, the Advanced Diagnosis and Warning system for aircraft ICing Environments, in 1998. This algorithm is based on the DWD Local Model (LM) forecast of temperature and humidity, fused with radar, synop, and, coming soon, satellite data. It gives an hourly nowcast of icing severity and type, divided into four categories (freezing rain, convective, stratiform, and general), for the central European area. A first validation of ADWICE took place in 1999 with observational data from an in-flight icing campaign during EURICE in 1997. The current validation deals with a broader database. As a first step, the output from ADWICE is compared to observations from pilots (PIREPs) to build a statistic of the probability of detecting icing and no-icing conditions over the last icing seasons. This method produced good results with the American Integrated Icing Diagnostic Algorithm (IIDA). A problem, though, is the small number of PIREPs from Europe in comparison to the US. So a temporary campaign of pilots (including Lufthansa and Aerolloyd) collecting cloud and icing information every few miles is intended to remedy this situation. Another source of data is the measurements of the Falcon, a DLR research aircraft carrying an icing sensor. 
In addition to that
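Verification against PIREPs of the kind described above typically reduces to contingency-table scores such as the probability of detection (POD) for icing and no-icing conditions. The counts in this sketch are made-up placeholders, not campaign results.

```python
# Contingency-table verification sketch for comparing nowcast output with
# pilot reports (PIREPs). All counts below are hypothetical.
def pod(hits, misses):
    # Probability of detection: fraction of observed events forecast correctly.
    return hits / (hits + misses)

icing_hits, icing_misses = 78, 22        # PIREPs reporting icing
null_hits, null_misses = 60, 40          # PIREPs reporting no icing
print(f"POD(icing)    = {pod(icing_hits, icing_misses):.2f}")
print(f"POD(no icing) = {pod(null_hits, null_misses):.2f}")
```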

  11. FAA/NASA International Symposium on Advanced Structural Integrity Methods for Airframe Durability and Damage Tolerance, part 2

    Energy Technology Data Exchange (ETDEWEB)

    Harris, C.E.

    1994-09-01

    The international technical experts in the areas of durability and damage tolerance of metallic airframe structures were assembled to present and discuss recent research findings and the development of advanced design and analysis methods, structural concepts, and advanced materials. The principal focus of the symposium was on the dissemination of new knowledge and the peer-review of progress on the development of advanced methodologies. Papers were presented on the following topics: structural concepts for enhanced durability, damage tolerance, and maintainability; new metallic alloys and processing technology; fatigue crack initiation and small crack effects; fatigue crack growth models; fracture mechanics failure criteria for ductile materials; structural mechanics methodology for residual strength and life prediction; development of flight load spectra for design and testing; and corrosion resistance. Separate articles from this report have been indexed into the database.

  12. FAA/NASA International Symposium on Advanced Structural Integrity Methods for Airframe Durability and Damage Tolerance, part 2

    Science.gov (United States)

    Harris, Charles E. (Editor)

    1994-01-01

    The international technical experts in the areas of durability and damage tolerance of metallic airframe structures were assembled to present and discuss recent research findings and the development of advanced design and analysis methods, structural concepts, and advanced materials. The principal focus of the symposium was on the dissemination of new knowledge and the peer-review of progress on the development of advanced methodologies. Papers were presented on the following topics: structural concepts for enhanced durability, damage tolerance, and maintainability; new metallic alloys and processing technology; fatigue crack initiation and small crack effects; fatigue crack growth models; fracture mechanics failure criteria for ductile materials; structural mechanics methodology for residual strength and life prediction; development of flight load spectra for design and testing; and corrosion resistance.

  13. Utilizing Advanced Psychometric Methods in Research on Trait Expression across Situations

    DEFF Research Database (Denmark)

    Lang, Jonas W. B.; Tackett, Jennifer; Zettler, Ingo

    2017-01-01

    Lievens emphasized the extent to which new measurement tools and additional statistics can be used to advance research on trait expression across situations. We suggest that advanced psychometric models represent additional important and complementary building blocks for progress and new insights in research on trait expression across situations. Here, we offer two specific examples of this: (1) Item Response Theory modelling of within-person variability simultaneous with estimation of latent trait levels and (2) estimation of latent trait and latent situation factors from a multitrait

  14. A Proposal on the Advanced Sampling Based Sensitivity and Uncertainty Analysis Method for the Eigenvalue Uncertainty Analysis

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Song, Myung Sub; Shin, Chang Ho; Noh, Jae Man

    2014-01-01

    When using perturbation theory, the uncertainty of the response can be estimated by a single transport simulation, and it therefore requires a small computational load. However, it has the disadvantage that the computational methodology must be modified whenever a different response type, such as the multiplication factor, flux, or power distribution, is estimated. Hence, it is suitable for analyzing a few responses with many perturbed parameters. The statistical approach is a sampling-based method which uses cross sections randomly sampled from covariance data to analyze the uncertainty of the response. XSUSA is a code based on the statistical approach. Only the cross sections are modified by the sampling-based method; thus, general transport codes can be directly utilized for the S/U analysis without any code modifications. However, to calculate the uncertainty distribution of the result, the code simulation must be repeated many times with randomly sampled cross sections. This inefficiency is a known disadvantage of the stochastic method. In this study, to increase the estimation efficiency of the sampling-based S/U method, an advanced sampling and estimation method for the cross sections is proposed and verified. The main feature of the proposed method is that the cross section averaged from each single sampled cross section is used. The proposed method was validated against the perturbation theory
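The sampling-based workflow described above can be sketched end to end: draw perturbed cross-section sets from the covariance data, re-evaluate the response once per sample, and read the uncertainty off the spread. The three-group "cross sections", covariance matrix, and toy response function below are illustrative stand-ins, not a transport calculation.

```python
import numpy as np

# Sampling-based uncertainty propagation sketch.
rng = np.random.default_rng(7)

mean_xs = np.array([1.0, 0.5, 0.2])             # nominal cross sections
cov = np.array([[0.010, 0.002, 0.000],
                [0.002, 0.005, 0.001],
                [0.000, 0.001, 0.002]])          # covariance data

def response(xs):
    # Toy "multiplication factor": ratio of production to absorption.
    return xs[0] / (xs[1] + xs[2])

samples = rng.multivariate_normal(mean_xs, cov, size=5000)
k = np.apply_along_axis(response, 1, samples)   # one evaluation per sample
print(f"k = {k.mean():.3f} +/- {k.std():.3f}")
```

The repeated-evaluation cost visible here (one `response` call per sample) is exactly the inefficiency the proposed averaged-cross-section method aims to reduce.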

  15. The Effects of Using Advance Organizers on Improving EFL Learners' Listening Comprehension: A Mixed Method Study

    Science.gov (United States)

    Jafari, Khadijeh; Hashim, Fatimah

    2012-01-01

    This study investigated the effects of using two types of written advance organizers, key sentences and key vocabulary, on the improvement of EFL learners' listening comprehension. 108 second year university students at the higher and lower listening proficiency levels were randomly assigned to one control group and two experimental groups. Prior…

  16. Advances in methods for identification and characterization of plant transporter function

    DEFF Research Database (Denmark)

    Larsen, Bo; Xu, Deyang; Halkier, Barbara Ann

    2017-01-01

    Transport proteins are crucial for cellular function at all levels. Numerous importers and exporters facilitate the transport of a diverse array of metabolites and ions intra- and intercellularly. Identification of transporter function is essential for understanding biological processes at both … -based approaches. In this review, we highlight examples that illustrate how new technology and tools have advanced the identification and characterization of plant transporter functions.

  17. Advanced fire-resistant forms of activated carbon and methods of adsorbing and separating gases using same

    Science.gov (United States)

    Xiong, Yongliang; Wang, Yifeng

    2015-02-03

    Advanced, fire-resistant activated carbon compositions useful in adsorbing gases and having vastly improved fire resistance are provided, and methods for synthesizing the compositions are also provided. The advanced compositions have high gas adsorption capacities and rapid adsorption kinetics (comparable to commercially available activated carbon), without having any intrinsic fire hazard. They also have superior performance to Mordenites in both adsorption capacities and kinetics. In addition, the advanced compositions do not pose the fibrous inhalation hazard that exists with the use of Mordenites. The fire-resistant compositions combine activated carbon with one or more hydrated and/or carbonate-containing minerals that release H2O and/or CO2 when heated. This effect raises the spontaneous ignition temperature to over 500 °C in most examples, and over 800 °C in some examples. Also provided are methods for removing and/or separating target gases, such as krypton or argon, from a gas stream by using such advanced activated carbons.

  18. Advances in a framework to compare bio-dosimetry methods for triage in large-scale radiation events

    International Nuclear Information System (INIS)

    Flood, Ann Barry; Boyle, Holly K.; Du, Gaixin; Demidenko, Eugene; Williams, Benjamin B.; Swartz, Harold M.; Nicolalde, Roberto J.

    2014-01-01

    Planning and preparation for a large-scale nuclear event would be advanced by assessing the applicability of potentially available bio-dosimetry methods. Using an updated comparative framework, the performance of six bio-dosimetry methods was compared for five population sizes (100-1 000 000) and two rates for initiating processing of the marker (15 or 15 000 people per hour), with four additional time windows. These updated factors are extrinsic to the bio-dosimetry methods themselves but directly affect each method's ability to begin processing individuals and the size of the population that can be accommodated. The results indicate that increased population size, along with severely compromised infrastructure, increases the time needed to triage, which decreases the usefulness of many time-intensive dosimetry methods. This framework and model for evaluating bio-dosimetry provides important information for policy-makers and response planners to facilitate evaluation of each method and should advance coordination of these methods into effective triage plans. (authors)
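    At their simplest, the extrinsic factors compared in such a framework reduce to a throughput calculation: time to triage a population is a setup delay plus population divided by processing rate. The sketch below uses illustrative numbers (the two processing rates mentioned in the abstract, a hypothetical population of one million), not the paper's full parameter set:

```python
# Back-of-envelope model of the framework's extrinsic factors: time to
# triage = setup delay + population / processing rate. Numbers below are
# illustrative, not the paper's parameter values.

def days_to_triage(population, rate_per_hour, setup_delay_h=0.0, hours_per_day=24):
    """Days needed to process `population` people at `rate_per_hour`."""
    return (setup_delay_h + population / rate_per_hour) / hours_per_day

# A slow, lab-bound assay vs. a fast field-deployable one, for 1M people:
slow = days_to_triage(1_000_000, 15)
fast = days_to_triage(1_000_000, 15_000)
```

    Even before modelling infrastructure damage, the thousand-fold difference in processing rate turns a days-long triage into one lasting years, which is the basic reason time-intensive methods lose usefulness as population size grows.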

  19. Optimization of advanced gas-cooled reactor fuel performance by a stochastic method

    International Nuclear Information System (INIS)

    Parks, G.T.

    1987-01-01

    A brief description is presented of a model representing the in-core behaviour of a single advanced gas-cooled reactor fuel channel, developed specifically for optimization studies. The performances of the only suitable Numerical Algorithms Group (NAG) library package and a Metropolis algorithm routine on this problem are discussed and contrasted. It is concluded that, for the problem in question, the stochastic Metropolis algorithm has distinct advantages over the deterministic NAG routine. (author)
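    The Metropolis acceptance rule at the heart of such a stochastic routine can be sketched on a toy objective; the function, step size and cooling schedule below are invented for illustration and stand in for the fuel-channel performance model:

```python
import math
import random

def metropolis_minimise(objective, x0, step=0.1, temps=(1.0, 0.1, 0.01),
                        iters=200, seed=0):
    """Minimise `objective` with a simple Metropolis annealing loop.

    A worse candidate is accepted with probability exp(-delta / T),
    which lets the search escape local minima that can trap a
    deterministic descent routine.
    """
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best_x, best_f = list(x), fx
    for T in temps:                      # slowly "cool" the acceptance rule
        for _ in range(iters):
            cand = [xi + rng.uniform(-step, step) for xi in x]
            fc = objective(cand)
            if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = list(x), fx
    return best_x, best_f

# Toy multimodal objective standing in for the fuel-channel model:
f = lambda v: (v[0] ** 2 - 1) ** 2 + 0.1 * math.sin(8 * v[0])
x_opt, f_opt = metropolis_minimise(f, [2.0])
```

    The occasional acceptance of uphill moves is the "distinct advantage" a Metropolis routine has over a deterministic library optimizer on rugged objectives.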

  20. [Advance in the methods of preimplantation genetic diagnosis for single gene diseases].

    Science.gov (United States)

    Ren, Yixin; Qiao, Jie; Yan, Liying

    2017-06-10

    More than 7000 single gene diseases have been identified and most of them lack effective treatment. As an early form of prenatal diagnosis, preimplantation genetic diagnosis (PGD) is a combination of in vitro fertilization and genetic diagnosis. PGD has been applied in clinics for more than 20 years to avoid the transmission of genetic defects through analysis of embryos at early stages of development. In this paper, a review for the recent advances in PGD for single gene diseases is provided.

  1. Symptom Clusters in Advanced Cancer Patients: An Empirical Comparison of Statistical Methods and the Impact on Quality of Life.

    Science.gov (United States)

    Dong, Skye T; Costa, Daniel S J; Butow, Phyllis N; Lovell, Melanie R; Agar, Meera; Velikova, Galina; Teckle, Paulos; Tong, Allison; Tebbutt, Niall C; Clarke, Stephen J; van der Hoek, Kim; King, Madeleine T; Fayers, Peter M

    2016-01-01

    Symptom clusters in advanced cancer can influence patient outcomes. There is large heterogeneity in the methods used to identify symptom clusters. To investigate the consistency of symptom cluster composition in advanced cancer patients using different statistical methodologies for all patients across five primary cancer sites, and to examine which clusters predict functional status, a global assessment of health and global quality of life. Principal component analysis and exploratory factor analysis (with different rotation and factor selection methods) and hierarchical cluster analysis (with different linkage and similarity measures) were used on a data set of 1562 advanced cancer patients who completed the European Organization for the Research and Treatment of Cancer Quality of Life Questionnaire-Core 30. Four clusters consistently formed for many of the methods and cancer sites: tense-worry-irritable-depressed (emotional cluster), fatigue-pain, nausea-vomiting, and concentration-memory (cognitive cluster). The emotional cluster was a stronger predictor of overall quality of life than the other clusters. Fatigue-pain was a stronger predictor of overall health than the other clusters. The cognitive cluster and fatigue-pain predicted physical functioning, role functioning, and social functioning. The four identified symptom clusters were consistent across statistical methods and cancer types, although there were some noteworthy differences. Statistical derivation of symptom clusters is in need of greater methodological guidance. A psychosocial pathway in the management of symptom clusters may improve quality of life. Biological mechanisms underpinning symptom clusters need to be delineated by future research. A framework for evidence-based screening, assessment, treatment, and follow-up of symptom clusters in advanced cancer is essential. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
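    One of the statistical approaches compared above, hierarchical cluster analysis, can be sketched in miniature. The symptom names echo the abstract, but the dissimilarity values are invented for illustration, and average linkage is only one of the linkage choices the study compared:

```python
# Average-linkage agglomerative clustering on a symptom dissimilarity
# matrix (e.g. 1 - correlation). Toy values: emotional items are close
# together, fatigue/pain are close, everything else is far apart.

symptoms = ["tense", "worry", "irritable", "depressed", "fatigue", "pain"]
D = {
    ("tense", "worry"): 0.2, ("tense", "irritable"): 0.25,
    ("tense", "depressed"): 0.3, ("worry", "irritable"): 0.22,
    ("worry", "depressed"): 0.28, ("irritable", "depressed"): 0.26,
    ("fatigue", "pain"): 0.18,
}

def dist(a, b):
    return D.get((a, b)) or D.get((b, a)) or 0.9   # unrelated pairs are far

def average_linkage(items, n_clusters):
    clusters = [[s] for s in items]
    while len(clusters) > n_clusters:
        # Merge the pair of clusters with the smallest mean pairwise distance.
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: sum(dist(a, b) for a in clusters[ij[0]]
                               for b in clusters[ij[1]])
                           / (len(clusters[ij[0]]) * len(clusters[ij[1]])),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

print(average_linkage(symptoms, 2))
```

    On this toy matrix the procedure recovers an "emotional" cluster and a fatigue-pain pair; the study's point is that different linkage and similarity choices can shift such groupings, hence the need for methodological guidance.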

  2. Impact of advanced and basic carbohydrate counting methods on metabolic control in patients with type 1 diabetes.

    Science.gov (United States)

    Souto, Débora Lopes; Zajdenverg, Lenita; Rodacki, Melanie; Rosado, Eliane Lopes

    2014-03-01

    Diets based on carbohydrate counting remain a key strategy for improving glycemic control in patients with type 1 diabetes. However, these diets may promote weight gain because of the flexibility in food choices. The aim of this study was to compare carbohydrate counting methods regarding anthropometric, biochemical, and dietary variables in individuals with type 1 diabetes, as well as to evaluate their knowledge about nutrition. Participants were allocated to the basic or advanced group. After 3 mo of nutritional counseling, dietary intake, anthropometric variables, lipemia, and glycemic control were compared between groups. A questionnaire regarding carbohydrate counting, sucrose intake, nutritional knowledge, and diabetes and nutrition taboos was also administered. Ten (30%) participants had already used advanced carbohydrate counting before the nutritional counseling, and these individuals had a higher body mass index (BMI) (P 1) and waist circumference (WC) (P = 0.01) than the others (n = 23; 69.7%). After 3 mo of follow-up, although participants in the advanced group (n = 17; 51.52%) presented a higher BMI (P 1) and WC (P = 0.03), those in the basic group (n = 16; 48.48%) showed a higher fat intake (P 1). The majority of participants reported no difficulty in following carbohydrate counting (62.5% and 88% for the basic and advanced groups, respectively) and greater flexibility in food choices (>90% with both methods). Advanced carbohydrate counting did not affect lipemic and glycemic control in individuals with type 1 diabetes; however, it may increase food intake, and consequently BMI and WC, compared with basic carbohydrate counting. Furthermore, carbohydrate counting promoted greater food flexibility. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. A Model For Teaching Advanced Neuroscience Methods: A Student-Run Seminar to Increase Practical Understanding and Confidence.

    Science.gov (United States)

    Harrison, Theresa M; Ching, Christopher R K; Andrews, Anne M

    2016-01-01

    Neuroscience doctoral students must master specific laboratory techniques and approaches to complete their thesis work (hands-on learning). Due to the highly interdisciplinary nature of the field, learning about a diverse range of methodologies through literature surveys and coursework is also necessary for student success (hands-off learning). Traditional neuroscience coursework stresses what is known about the nervous system with relatively little emphasis on the details of the methods used to obtain this knowledge. Furthermore, hands-off learning is made difficult by a lack of detail in methods sections of primary articles, subfield-specific jargon and vague experimental rationales. We designed a student-taught course to enable first-year neuroscience doctoral students to overcome difficulties in hands-off learning by introducing a new approach to reading and presenting primary research articles that focuses on methodology. In our literature-based course students were encouraged to present a method with which they had no previous experience. To facilitate weekly discussions, "experts" were invited to class sessions. Experts were advanced graduate students who had hands-on experience with the method being covered and served as discussion co-leaders. Self-evaluation worksheets were administered on the first and last days of the 10-week course and used to assess students' confidence in discussing research and methods outside of their primary research expertise. These evaluations revealed that the course significantly increased the students' confidence in reading, presenting and discussing a wide range of advanced neuroscience methods.

  4. Development of advanced methods for signal processing in the monitoring of sodium-cooled reactors

    International Nuclear Information System (INIS)

    Schleisiek, K.; Aberle, J.; Massier, H.; Scherer, K.P.; Vaeth, W.; Leder, H.J.; Schade, H.J.

    1987-01-01

    Selected examples (acoustic boiling detection, pattern recognition method, identification of fuel element vibrations, diagnosis system for KNK II) are used to demonstrate the benefits of up-to-date information technology in the monitoring of nuclear facilities. The methods used range from intelligent frequency analysis to AI methods like pattern recognition and expert systems. (DG) [de]

  5. Advance in study of intelligent diagnostic method for nuclear power plant

    International Nuclear Information System (INIS)

    Zhou Gang; Yang Li

    2008-01-01

    Advances in research on the application of three types of intelligent diagnostic approach, based on artificial neural networks (ANN), fuzzy logic and expert systems, to operation status monitoring and fault diagnosis of nuclear power plants (NPP) are reviewed. The status and characteristics of research on monitoring and diagnosis approaches based on neural networks, fuzzy logic and expert systems are analyzed, and the development trend of applied research on intelligent diagnostic approaches for nuclear power plants is explored. The analysis shows that research achievements on approaches based on fuzzy logic and expert systems are relatively few; research on intelligent diagnosis for nuclear power plants concentrates on operation status monitoring and fault diagnosis based on neural networks. The trend is toward combining various intelligent diagnostic approaches: neural network approaches combined with other diagnostic approaches, as well as multiple neural network approaches used together. (authors)

  6. Intercomparison of analysis methods for seismically isolated nuclear structures. Part 1: Advanced test data and numerical methods. Working material

    International Nuclear Information System (INIS)

    1993-01-01

    The purpose of the meeting was to review proposed contributions from CRP participating organizations, to discuss in detail the experimental data on seismic isolators, to review the numerical methods for the analysis of the seismic isolators, and to perform a first comparison of the calculation results. The aim of the CRP was to validate reliable numerical methods for detailed evaluation of the dynamic behaviour both of isolation devices and of isolated nuclear structures of different nuclear power plant types. The full maturity of seismic isolation for nuclear applications was stressed, as well as the excellent behaviour of isolated structures during the recent earthquakes in Japan and the USA. Participants from Italy, the USA, Japan, the Russian Federation, the Republic of Korea, the United Kingdom, India and the European Commission presented overview papers on their present programs and the status of their contributions to the CRP

  7. Intercomparison of analysis methods for seismically isolated nuclear structures. Part 1: Advanced test data and numerical methods. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-07-01

    The purpose of the meeting was to review proposed contributions from CRP participating organizations, to discuss in detail the experimental data on seismic isolators, to review the numerical methods for the analysis of the seismic isolators, and to perform a first comparison of the calculation results. The aim of the CRP was to validate reliable numerical methods for detailed evaluation of the dynamic behaviour both of isolation devices and of isolated nuclear structures of different nuclear power plant types. The full maturity of seismic isolation for nuclear applications was stressed, as well as the excellent behaviour of isolated structures during the recent earthquakes in Japan and the USA. Participants from Italy, the USA, Japan, the Russian Federation, the Republic of Korea, the United Kingdom, India and the European Commission presented overview papers on their present programs and the status of their contributions to the CRP.

  8. Advanced resonance self-shielding method for gray resonance treatment in lattice physics code GALAXY

    International Nuclear Information System (INIS)

    Koike, Hiroki; Yamaji, Kazuya; Kirimura, Kazuki; Sato, Daisuke; Matsumoto, Hideki; Yamamoto, Akio

    2012-01-01

    A new resonance self-shielding method based on the equivalence theory is developed for general application to the lattice physics calculations. The present scope includes commercial light water reactor (LWR) design applications which require both calculation accuracy and calculation speed. In order to develop the new method, all the calculation processes from cross-section library preparation to effective cross-section generation are reviewed and reframed by adopting the current enhanced methodologies for lattice calculations. The new method is composed of the following four key methods: (1) cross-section library generation method with a polynomial hyperbolic tangent formulation, (2) resonance self-shielding method based on the multi-term rational approximation for general lattice geometry and gray resonance absorbers, (3) spatially dependent gray resonance self-shielding method for generation of intra-pellet power profile and (4) integrated reaction rate preservation method between the multi-group and the ultra-fine-group calculations. From the various verifications and validations, applicability of the present resonance treatment is totally confirmed. As a result, the new resonance self-shielding method is established, not only by extension of a past concentrated effort in the reactor physics research field, but also by unification of newly developed unique and challenging techniques for practical application to the lattice physics calculations. (author)

  9. Development of task analysis method for operator tasks in main control room of an advanced nuclear power plant

    International Nuclear Information System (INIS)

    Lin Chiuhsiangloe; Hsieh Tsungling

    2016-01-01

    Task analysis methods provide insight for quantitative and qualitative predictions of how people will use a proposed system, though different versions have different emphases. Most of the methods can attest to the coverage of the functionality of a system, and all provide estimates of task performance time. However, most of the tasks that operators deal with in the digital work environment of the main control room of an advanced nuclear power plant require high mental activity. Such mental tasks overlap, must be dealt with at the same time, and can be assumed to be highly parallel in nature. Therefore, the primary aim of this paper was to develop a method that adopts CPM-GOMS (cognitive-perceptual-motor analysis of goals, operators, methods, and selection rules) as the basic pattern of mental task analysis for the advanced main control room. A within-subjects experimental design was used to examine the validity of the modified CPM-GOMS. Thirty participants performed two task types, which included high- and low-compatibility types. The results indicated that performance was significantly higher on the high-compatibility task type than on the low-compatibility task type; that is, the modified CPM-GOMS could distinguish between high- and low-compatibility mental tasks. (author)
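    In CPM-GOMS, predicted task time is the critical (longest) path through a network of parallel perceptual, cognitive and motor operators. A minimal sketch of that computation, with invented operator durations and dependencies, is:

```python
# Critical-path computation at the heart of CPM-GOMS: predicted task time
# is the longest path through a dependency network of perceptual (P),
# cognitive (C) and motor (M) operators. Durations (ms) and the network
# below are illustrative, not from the paper.

durations = {"P1": 100, "C1": 50, "C2": 50, "M1": 300, "M2": 200}
depends_on = {"C1": ["P1"], "C2": ["P1"], "M1": ["C1"], "M2": ["C2", "M1"]}

def critical_path_time(durations, depends_on):
    """Longest-path finish time over the operator network (a DAG)."""
    finish = {}
    def finish_time(op):
        if op not in finish:
            preds = depends_on.get(op, [])
            finish[op] = durations[op] + max(
                (finish_time(p) for p in preds), default=0)
        return finish[op]
    return max(finish_time(op) for op in durations)

print(critical_path_time(durations, depends_on))
```

    Because operators on different tracks overlap, the prediction is the critical path (here through P1, C1, M1, M2), not the sum of all durations, which is what makes the technique suited to highly parallel mental tasks.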

  10. Testing isotopic labeling with [¹³C₆]glucose as a method of advanced glycation sites identification.

    Science.gov (United States)

    Kielmas, Martyna; Kijewska, Monika; Stefanowicz, Piotr; Szewczuk, Zbigniew

    2012-12-01

    The Maillard reaction occurring between reducing sugars and reactive amino groups of biomolecules leads to the formation of a heterogeneous mixture of compounds: early, intermediate, and advanced glycation end products (AGEs). These compounds could be markers of certain diseases and of the premature aging process. Detection of Amadori products can be performed by various methods, including MS/MS techniques and affinity chromatography on immobilized boronic acid. However, the diversity of the structures of AGEs makes detection of these compounds more difficult. The aim of this study was to test a new method of AGE identification based on isotope ¹³C labeling. The model protein (hen egg lysozyme) was modified with an equimolar mixture of [¹²C₆]glucose and [¹³C₆]glucose and then subjected to reduction of the disulfide bridges followed by tryptic hydrolysis. The digest obtained was analyzed by LC-MS. The glycation products were identified on the basis of characteristic isotopic patterns resulting from the use of isotopically labeled glucose. This method allowed identification of 38 early Maillard reaction products and five different structures of the end glycation products. This isotopic labeling technique combined with LC-MS is a sensitive method for identification of advanced glycation end products even if their chemical structure is unknown. Copyright © 2012 Elsevier Inc. All rights reserved.
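    The "characteristic isotopic pattern" is a near-equal-intensity doublet: a peptide glycated from a 1:1 mix of [¹²C₆]- and [¹³C₆]glucose appears twice, split by six times the ¹³C-¹²C mass difference. A sketch of scanning a peak list for such doublets follows; the peak list is synthetic and the tolerances are illustrative:

```python
# Find candidate glycation doublets in a (m/z, intensity) peak list.
# Peptides modified by a 1:1 mix of [12C6]- and [13C6]glucose appear as
# roughly equal-intensity peak pairs split by 6 x 1.003355 Da (charge 1+).

C13_SHIFT = 6 * 1.003355        # mass difference of six 13C vs six 12C atoms

def find_doublets(peaks, charge=1, mz_tol=0.01, ratio_tol=0.3):
    """Return (light, heavy) m/z pairs consistent with 12C6/13C6 labeling."""
    delta = C13_SHIFT / charge
    hits = []
    for mz1, i1 in peaks:
        for mz2, i2 in peaks:
            if abs((mz2 - mz1) - delta) <= mz_tol and \
               abs(i1 - i2) / max(i1, i2) <= ratio_tol:   # ~1:1 intensity
                hits.append((mz1, mz2))
    return hits

peaks = [(802.40, 950.0), (808.42, 1010.0),   # a labeled (glycated) pair
         (950.55, 400.0), (1203.70, 120.0)]   # unrelated singlet peaks
print(find_doublets(peaks))
```

    The appeal of the approach is visible here: the doublet signature depends only on the glucose-derived carbons, so a modification can be flagged even when the AGE structure itself is unknown.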

  11. Data Mining and Statistics Methods for Advanced Training Course Quality Measurement: Case Study

    Directory of Open Access Journals (Sweden)

    Galchenko Maxim

    2014-12-01

    Advanced training courses in the energetics field are an important part of improving human reliability. In the words of S.E. Magid, chief of Technical Educational Systems in Energy Technologies, UNESCO department: "The number of forced outages due to equipment failures at power stations is 30%. Operational personnel are at fault in a considerable share of these failures (up to 15%). In the Russian Open Society 'United Power Systems' as a whole, failures attributable to personnel make up 2% of the total; at power stations this figure is 18%, and in the power supply systems of Siberia the relative number of failures caused by personnel reaches 50%." [1].

  12. Advances in intelligent process-aware information systems concepts, methods, and technologies

    CERN Document Server

    Oberhauser, Roy; Reichert, Manfred

    2017-01-01

    This book provides a state-of-the-art perspective on intelligent process-aware information systems and presents chapters on specific facets and approaches applicable to such systems. Further, it highlights novel advances and developments in various aspects of intelligent process-aware information systems and business process management systems. Intelligence capabilities are increasingly being integrated into or created in many of today’s software products and services. Process-aware information systems provide critical computing infrastructure to support the various processes involved in the creation and delivery of business products and services. Yet the integration of intelligence capabilities into process-aware information systems is a non-trivial yet necessary evolution of these complex systems. The book’s individual chapters address adaptive process management, case management processes, autonomically-capable processes, process-oriented information logistics, process recommendations, reasoning over ...

  13. Recent Advances in Targeted and Untargeted Metabolomics by NMR and MS/NMR Methods

    Energy Technology Data Exchange (ETDEWEB)

    Bingol, Kerem

    2018-04-18

    Metabolomics has made significant progress on multiple fronts in the last 18 months. This minireview aims to give an overview of these advances in light of their contributions to targeted and untargeted metabolomics. New computational approaches have emerged to automate the manual absolute quantitation step for metabolites in 1D ¹H NMR spectra, providing more consistency in inter-laboratory comparisons. Integration of 2D NMR metabolomics databases under a unified web server has allowed very accurate identification of the metabolites catalogued in these databases. For the remaining uncatalogued and unknown metabolites, new cheminformatics approaches have been developed by combining NMR and mass spectrometry. These hybrid NMR/MS approaches have accelerated the identification of unknowns in untargeted studies, and they now allow ever larger numbers of metabolites to be profiled in application studies.

  14. Advanced surrogate model and sensitivity analysis methods for sodium fast reactor accident assessment

    International Nuclear Information System (INIS)

    Marrel, A.; Marie, N.; De Lozzo, M.

    2015-01-01

    Within the framework of Generation IV sodium fast reactors, safety in case of severe accidents is assessed. From this starting point, the CEA has developed a new physical tool to model the accident initiated by the Total Instantaneous Blockage (TIB) of a sub-assembly. This TIB simulator depends on many uncertain input parameters. This paper proposes a global methodology combining several advanced statistical techniques to perform a global sensitivity analysis of the TIB simulator, with the objective of identifying the most influential uncertain inputs for the various TIB outputs involved in the safety analysis. The proposed methodology takes into account the constraints on the TIB simulator outputs (positivity constraints) and deals simultaneously with the various outputs. To do this, a space-filling design is used and the corresponding TIB model simulations are performed. Based on this learning sample, an efficient constrained Gaussian process metamodel is fitted to each TIB model output. Then, using the metamodels, classical sensitivity analyses are performed for each TIB output. Multivariate global sensitivity analyses based on aggregated indices are also performed, providing additional valuable information. Main conclusions on the influence of each uncertain input are derived. - Highlights: • Physical-statistical tool for Sodium Fast Reactors TIB accident. • 27 uncertain parameters (core state, lack of physical knowledge) are highlighted. • Constrained Gaussian process efficiently predicts TIB outputs (safety criteria). • Multivariate sensitivity analyses reveal that three inputs are mainly influential. • The type of corium propagation (thermal or hydrodynamic) is the most influential
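    The variance-based indices behind such sensitivity analyses can be estimated by Monte Carlo "pick-freeze" sampling. The sketch below uses an invented linear toy function in place of the TIB model; in a study like this one, the cheap Gaussian-process metamodel, not the expensive simulator, would be the function sampled:

```python
import random

def sobol_first_order(model, n_inputs, n_samples=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    yA = [model(x) for x in A]
    mean = sum(yA) / n_samples
    var = sum(y * y for y in yA) / n_samples - mean * mean
    S = []
    for i in range(n_inputs):
        # B with column i "frozen" to A's column i: shares only input i with A,
        # so Cov(yA, yABi) estimates the first-order partial variance V_i.
        ABi = [b[:i] + [a[i]] + b[i + 1:] for a, b in zip(A, B)]
        yABi = [model(x) for x in ABi]
        m2 = sum(yABi) / n_samples
        cov = sum(ya * yb for ya, yb in zip(yA, yABi)) / n_samples - mean * m2
        S.append(cov / var)
    return S

# Toy stand-in model: input 0 dominates, input 2 is inert.
toy = lambda x: 4.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]
S = sobol_first_order(toy, 3)
print([round(s, 2) for s in S])
```

    Ranking inputs by such indices is what lets a study with 27 uncertain parameters conclude that only a few of them drive the safety-relevant outputs.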

  15. 'Boomerang' technique: an improved method for conformal treatment of locally advanced nasopharyngeal cancer

    International Nuclear Information System (INIS)

    Corry, June; D'Costa, Leta; Porceddu, Sandro; Peters, Lester J.; Hornby, Colin; Fisher, Richard; Rischin, Danny

    2004-01-01

    The primary aim of the present study was to assess radiation dosimetry and subsequent clinical outcomes in patients with locally advanced nasopharyngeal cancer using a novel radiation technique termed the 'Boomerang'. Dosimetric comparisons were made with both conventional and intensity modulated radiation therapy (IMRT) techniques. This is a study of 22 patients treated with this technique from June 1995 to October 1998. The technique used entailed delivery of 36 Gy in 18 fractions via parallel opposed fields, then 24 Gy in 12 fractions via asymmetric rotating arc fields for a total of 60 Gy in 30 fractions. Patients also received induction and concurrent chemotherapy. The radiation dosimetry was excellent. Dose-volume histograms showed that with the arc fields, 90% of the planning target volume received 94% of the prescribed dose. Relative to other conventional radiation therapy off-cord techniques, the Boomerang technique results in a 27% greater proportion of the prescribed dose being received by 90% of the planning target volume. This translates into an overall 10% greater dose received for the same prescribed dose. At 3 years, the actuarial loco-regional control rate, the failure-free survival rate and the overall survival rate were 91, 75 and 91%, respectively. At 5 years, the actuarial loco-regional control rate, the failure-free survival rate and the overall survival rate were 74, 62 and 71%, respectively. The Boomerang technique provided excellent radiation dosimetry with correspondingly good loco-regional control rates (in conjunction with chemotherapy) and very acceptable acute and late toxicity profiles. Because treatment can be delivered with conventional standard treatment planning and delivery systems, it is a validated treatment option for centres that do not have the capability or capacity for IMRT. A derivative of the Boomerang technique, excluding the parallel opposed component, is now our standard for patients with locally advanced

  16. 'Boomerang' technique: an improved method for conformal treatment of locally advanced nasopharyngeal cancer.

    Science.gov (United States)

    Corry, June; Hornby, Colin; Fisher, Richard; D'Costa, Ieta; Porceddu, Sandro; Rischin, Danny; Peters, Lester J

    2004-06-01

    The primary aim of the present study was to assess radiation dosimetry and subsequent clinical outcomes in patients with locally advanced nasopharyngeal cancer using a novel radiation technique termed the 'Boomerang'. Dosimetric comparisons were made with both conventional and intensity modulated radiation therapy (IMRT) techniques. This is a study of 22 patients treated with this technique from June 1995 to October 1998. The technique used entailed delivery of 36 Gy in 18 fractions via parallel opposed fields, then 24 Gy in 12 fractions via asymmetric rotating arc fields for a total of 60 Gy in 30 fractions. Patients also received induction and concurrent chemotherapy. The radiation dosimetry was excellent. Dose-volume histograms showed that with the arc fields, 90% of the planning target volume received 94% of the prescribed dose. Relative to other conventional radiation therapy off-cord techniques, the Boomerang technique results in a 27% greater proportion of the prescribed dose being received by 90% of the planning target volume. This translates into an overall 10% greater dose received for the same prescribed dose. At 3 years, the actuarial loco-regional control rate, the failure-free survival rate and the overall survival rate were 91, 75 and 91%, respectively. At 5 years, the actuarial loco-regional control rate, the failure-free survival rate and the overall survival rate were 74, 62 and 71%, respectively. The Boomerang technique provided excellent radiation dosimetry with correspondingly good loco-regional control rates (in conjunction with chemotherapy) and very acceptable acute and late toxicity profiles. Because treatment can be delivered with conventional standard treatment planning and delivery systems, it is a validated treatment option for centres that do not have the capability or capacity for IMRT. A derivative of the Boomerang technique, excluding the parallel opposed component, is now our standard for patients with locally advanced

  17. Evaluation of an advanced physical diagnosis course using consumer preferences methods: the nominal group technique.

    Science.gov (United States)

    Coker, Joshua; Castiglioni, Analia; Kraemer, Ryan R; Massie, F Stanford; Morris, Jason L; Rodriguez, Martin; Russell, Stephen W; Shaneyfelt, Terrance; Willett, Lisa L; Estrada, Carlos A

    2014-03-01

    Current evaluation tools for medical school courses are limited by the scope of the questions asked and may not fully engage students in thinking about areas to improve. The authors sought to explore whether a technique used to study consumer preferences would elicit specific and prioritized information for course evaluation from medical students. Using the nominal group technique (4 sessions), 12 senior medical students prioritized and weighted expectations and topics learned in a 100-hour advanced physical diagnosis course (4-week course; February 2012). Students weighted their top 3 responses (top = 3, middle = 2 and bottom = 1). Before the course, the 12 students identified 23 topics they expected to learn; the top 3 were review sensitivity/specificity and high-yield techniques (percentage of total weight, 18.5%), improving diagnosis (13.8%) and reinforce usual and less well-known techniques (13.8%). After the course, students generated 22 topics learned; the top 3 were practice and reinforce advanced maneuvers (25.4%), gaining confidence (22.5%) and learn the evidence (16.9%). The authors observed no differences in the priority of responses before and after the course (P = 0.07). In a physical diagnosis course, medical students elicited specific and prioritized information using the nominal group technique. The course met student expectations regarding education on the evidence-based physical examination, building skills and confidence in the proper techniques and maneuvers, and experiential learning. This novel use of the technique for curriculum evaluation may be applied to other courses, especially comprehensive and multicomponent courses.

  18. A review of the physics methods for advanced gas-cooled reactors

    International Nuclear Information System (INIS)

    Buckler, A.N.

    1982-01-01

    A review is given of steady-state reactor physics methods and associated codes used in AGR design and operation. These range from the basic lattice codes (ARGOSY, WIMS), through homogeneous-diffusion theory fuel management codes (ODYSSEUS, MOPSY) to a fully heterogeneous code (HET). The current state of development of the methods is discussed, together with illustrative examples of their application. (author)

  19. Advances in the Use of Neuroscience Methods in Research on Learning and Instruction

    Science.gov (United States)

    De Smedt, Bert

    2014-01-01

    Cognitive neuroscience offers a series of tools and methodologies that allow researchers in the field of learning and instruction to complement and extend the knowledge they have accumulated through decades of behavioral research. The appropriateness of these methods depends on the research question at hand. Cognitive neuroscience methods allow…

  20. Thermal load forecasting in district heating networks using deep learning and advanced feature selection methods

    NARCIS (Netherlands)

    Suryanarayana, Gowri; Lago Garcia, J.; Geysen, Davy; Aleksiejuk, Piotr; Johansson, Christian

    2018-01-01

    Recent research has seen several forecasting methods being applied for heat load forecasting of district heating networks. This paper presents two methods that gain significant improvements compared to the previous works. First, an automated way of handling non-linear dependencies in linear

  1. Variational methods in the kinetic modeling of nuclear reactors: Recent advances

    International Nuclear Information System (INIS)

    Dulla, S.; Picca, P.; Ravetto, P.

    2009-01-01

    The variational approach can be very useful in the study of approximate methods, giving a sound mathematical background to numerical algorithms and computational techniques. The variational approach has been applied to nuclear reactor kinetic equations to obtain a formulation of standard methods such as point kinetics and quasi-statics. More recently, the multipoint method has also been proposed for the efficient simulation of space-energy transients in nuclear reactors and in source-driven subcritical systems. The method is now founded on a variational basis that allows a consistent definition of integral parameters. The mathematical structure of multipoint and modal methods is also investigated, highlighting the merits and shortcomings of both techniques. Some numerical results for simple systems are presented and the errors with respect to reference calculations are reported and discussed. (authors)

  2. Advanced numerical methods for three dimensional two-phase flow calculations in PWR

    International Nuclear Information System (INIS)

    Toumi, I.; Gallo, D.; Royer, E.

    1997-01-01

    This paper is devoted to new numerical methods developed for three dimensional two-phase flow calculations. These methods are finite volume numerical methods. They are based on an extension of Roe's approximate Riemann solver to define convective fluxes versus mean cell quantities. To advance in time, a linearized conservative implicit integrating step is used, together with a Newton iterative method. We also present some improvements performed to obtain a fully implicit solution method that provides fast running steady state calculations. This kind of numerical method, which is widely used for fluid dynamic calculations, has proved to be very efficient for the numerical solution of two-phase flow problems. This numerical method has been implemented in the three dimensional thermal-hydraulic code FLICA-4, which is mainly dedicated to core thermal-hydraulic transient and steady-state analysis. We also present results obtained for the EPR reactor running at steady state at 60% of nominal power with 3 pumps out of 4, and a thermal-hydraulic core analysis for a 1300 MW PWR at low flow steam-line-break conditions. (author)
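
As a much-simplified illustration of the finite volume approach described above, the sketch below applies a Roe-type flux to the scalar Burgers equation rather than the full two-phase system solved by a code such as FLICA-4; the grid size and time step are arbitrary:

```python
import numpy as np

# Roe-type finite-volume scheme for the scalar Burgers equation
# u_t + (u^2/2)_x = 0 on a periodic grid -- a toy stand-in for the
# two-phase flow equations treated in the paper.

def roe_flux(ul, ur):
    """Roe flux for f(u) = u^2/2; the Roe-averaged speed is (ul + ur)/2."""
    a = 0.5 * (ul + ur)                         # Roe-averaged wave speed
    f_avg = 0.25 * (ul**2 + ur**2)              # average of f(ul) and f(ur)
    return f_avg - 0.5 * np.abs(a) * (ur - ul)  # upwind correction term

def step(u, dt, dx):
    """One explicit conservative update: u_i -= dt/dx (F_{i+1/2} - F_{i-1/2})."""
    f = roe_flux(u, np.roll(u, -1))             # flux at each cell's right face
    return u - dt / dx * (f - np.roll(f, 1))

x = np.linspace(0, 1, 100, endpoint=False)
u = np.sin(2 * np.pi * x) + 1.5                 # smooth initial data
dx = x[1] - x[0]
for _ in range(50):
    u = step(u, dt=0.4 * dx / np.abs(u).max(), dx=dx)  # CFL-limited step
```

Because the update is conservative and the grid periodic, the cell average of u is preserved exactly, which is a convenient sanity check on any such scheme.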

  3. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    Science.gov (United States)

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
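
The positive semidefiniteness requirement mentioned above can be illustrated with a minimal sketch; the linear kernel on synthetic genotype counts below is one of the simplest kernels satisfying it (the data are random, purely for illustration):

```python
import numpy as np

# Synthetic genotype matrix: 6 subjects x 10 SNPs, coded 0/1/2 minor alleles.
rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(6, 10)).astype(float)

def linear_kernel(G):
    """K[i, j] = <g_i, g_j>: larger values mean more similar subjects."""
    return G @ G.T

K = linear_kernel(G)
# A valid kernel must yield a positive semidefinite matrix over all pairs:
# symmetric, with no (numerically) negative eigenvalues.
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-10)
```

Any Gram matrix of the form G Gᵀ is positive semidefinite by construction, which is why inner-product style kernels are a safe starting point before moving to more elaborate similarity measures.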

  4. Advances in the application of genetic manipulation methods to apicomplexan parasites.

    Science.gov (United States)

    Suarez, C E; Bishop, R P; Alzan, H F; Poole, W A; Cooke, B M

    2017-10-01

    Apicomplexan parasites such as Babesia, Theileria, Eimeria, Cryptosporidium and Toxoplasma greatly impact animal health globally, and improved, cost-effective measures to control them are urgently required. These parasites have complex multi-stage life cycles including obligate intracellular stages. Major gaps in our understanding of the biology of these relatively poorly characterised parasites and the diseases they cause severely limit options for designing novel control methods. Here we review potentially important shared aspects of the biology of these parasites, such as cell invasion, host cell modification, and asexual and sexual reproduction, and explore the potential of the application of relatively well-established or newly emerging genetic manipulation methods, such as classical transfection or gene editing, respectively, for closing important gaps in our knowledge of the function of specific genes and proteins, and the biology of these parasites. In addition, genetic manipulation methods impact the development of novel methods of control of the diseases caused by these economically important parasites. Transient and stable transfection methods, in conjunction with whole and deep genome sequencing, were initially instrumental in improving our understanding of the molecular biology of apicomplexan parasites and paved the way for the application of the more recently developed gene editing methods. The increasingly efficient and more recently developed gene editing methods, in particular those based on the CRISPR/Cas9 system and previous conceptually similar techniques, are already contributing to additional gene function discovery using reverse genetics and related approaches. However, gene editing methods are only possible due to the increasing availability of in vitro culture, transfection, and genome sequencing and analysis techniques. 
We envisage that rapid progress in the development of novel gene editing techniques applied to apicomplexan parasites of

  5. Advanced numerical methods for three dimensional two-phase flow calculations

    Energy Technology Data Exchange (ETDEWEB)

    Toumi, I. [Laboratoire d'Etudes Thermiques des Reacteurs, Gif sur Yvette (France); Caruge, D. [Institut de Protection et de Surete Nucleaire, Fontenay aux Roses (France)

    1997-07-01

    This paper is devoted to new numerical methods developed for both one and three dimensional two-phase flow calculations. These methods are finite volume numerical methods and are based on the use of Approximate Riemann Solver concepts to define convective fluxes versus mean cell quantities. The first part of the paper presents the numerical method for a one dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical solution scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method that has been successfully used to solve gas dynamics equations. As far as the two-fluid model is hyperbolic, this numerical method seems very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three dimensional case. The authors also discuss some improvements performed to obtain a fully implicit solution method that provides fast running steady state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for Pressurised Water Reactors concerning upper plenum calculations and a steady state flow in the core with rod bow effect evaluation are presented. In practice these new numerical methods have proved to be stable on non-staggered grids and capable of generating accurate non-oscillating solutions for two-phase flow calculations.

  6. Advanced numerical methods for three dimensional two-phase flow calculations

    International Nuclear Information System (INIS)

    Toumi, I.; Caruge, D.

    1997-01-01

    This paper is devoted to new numerical methods developed for both one and three dimensional two-phase flow calculations. These methods are finite volume numerical methods and are based on the use of Approximate Riemann Solver concepts to define convective fluxes versus mean cell quantities. The first part of the paper presents the numerical method for a one dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical solution scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method that has been successfully used to solve gas dynamics equations. As far as the two-fluid model is hyperbolic, this numerical method seems very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three dimensional case. The authors also discuss some improvements performed to obtain a fully implicit solution method that provides fast running steady state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for Pressurised Water Reactors concerning upper plenum calculations and a steady state flow in the core with rod bow effect evaluation are presented. In practice these new numerical methods have proved to be stable on non-staggered grids and capable of generating accurate non-oscillating solutions for two-phase flow calculations.

  7. [The optimization of restoration approaches of advanced hand activity using the sensorial glove and the mCIMT method].

    Science.gov (United States)

    Mozheiko, E Yu; Prokopenko, S V; Alekseevich, G V

    To justify the choice of methods for the restoration of advanced hand activity depending on the severity of motor disturbance in the upper extremity. Eighty-eight patients were randomized into 3 groups: 1) the mCIMT group, 2) the 'touch glove' group, 3) the control group. For assessment of motor activity of the upper extremity, the Fugl-Meyer Assessment Upper Extremity, Nine-Hole Peg Test and Motor Assessment Scale were used. Assessment of the non-use phenomenon was carried out with the Motor Activity Log scale. At the stage of severe motor dysfunction, there was restoration of the proximal parts of the arm in all groups, and neither method was superior to the other. In case of moderate motor deficiency of the upper extremity, the most effective method was the one based on the principle of biological feedback, the 'touch glove'. In the group with mild motor dysfunction, the best recovery was achieved in the mCIMT group.

  8. Application of the characteristics method combined with advanced self-shielding models to an ACR-type cell

    International Nuclear Information System (INIS)

    Le Tellier, R.; Hebert, A.

    2005-01-01

    In this paper, we present the usage of the method of characteristics (MOC) with advanced self-shielding models for a fundamental lattice calculation on an ACR-type cell, i.e. a cluster geometry with light water coolant and heavy water moderator. Comparisons with the collision probability (CP) method show the consistency of the method of characteristics as implemented in both flux and self-shielding calculations. Acceleration techniques are tested in the different calculations and prove to be efficient. Comparisons with the Monte-Carlo code Tripoli4 show the advantage of a subgroup approach for self-shielding calculations: the difference in k-effective is less than one standard deviation of the Tripoli4 calculation and, in terms of total absorption rates in the resolved resonance groups, the maximum relative error is of the order of 3%, localised in the outermost region of the central pin. (author)

  9. Development of advanced methods and related software for human reliability evaluation within probabilistic safety analyses

    International Nuclear Information System (INIS)

    Kosmowski, K.T.; Mertens, J.; Degen, G.; Reer, B.

    1994-06-01

    Human Reliability Analysis (HRA) is an important part of Probabilistic Safety Analysis (PSA). The first part of this report consists of an overview of types of human behaviour and human error, including the effect of significant performance shaping factors on human reliability. Particularly with regard to safety assessments for nuclear power plants, many HRA methods have been developed. The most important of these methods are presented and discussed in the report, together with techniques for incorporating HRA into PSA and with models of operator cognitive behaviour. Based on existing HRA methods, the concept of a software system is described. For the development of this system the utilization of modern programming tools is proposed; the essential goal is the effective application of HRA methods. A possible integration of computer-aided HRA within PSA is discussed. The features of Expert System Technology and examples of applications (PSA, HRA) are presented in four appendices. (orig.)

  10. Advancing methods for reliably assessing motivational interviewing fidelity using the motivational interviewing skills code.

    Science.gov (United States)

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C

    2015-02-01

    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.
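
Utterance-level agreement of the kind discussed above is commonly summarized with chance-corrected statistics such as Cohen's kappa; the sketch below computes it for two hypothetical raters (the codes and ratings are invented, not MISC 2.1 data):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over the same utterances."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n**2     # agreement expected by chance
    return (po - pe) / (1 - pe)

# Invented utterance-level behavior codes from two raters
rater1 = ["MIA", "MIA", "Q", "R", "R", "MIA", "Q", "R"]
rater2 = ["MIA", "Q",   "Q", "R", "MIA", "MIA", "Q", "R"]
print(round(cohens_kappa(rater1, rater2), 3))  # prints 0.628
```

Because kappa discounts agreement expected by chance, utterance-level estimates computed this way are typically lower than correlations of session tallies, consistent with the bias the paper describes.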

  11. 1st Advanced School on Exoplanetary Science : Methods of Detecting Exoplanets

    CERN Document Server

    Mancini, Luigi; Sozzetti, Alessandro

    2016-01-01

    In this book, renowned scientists describe the various techniques used to detect and characterize extrasolar planets, or exoplanets, with a view to unveiling the “tricks of the trade” of planet detection to a wider community. The radial velocity method, transit method, microlensing method, and direct imaging method are all clearly explained, drawing attention to their advantages and limitations and highlighting the complementary roles that they can play in improving the characterization of exoplanets’ physical and orbital properties. By probing the planetary frequency at different distances and in different conditions, these techniques are helping astrophysicists to reconstruct the scenarios of planetary formation and to give robust scientific answers to questions regarding the frequency of potentially habitable worlds. Twenty years have passed since the discovery of a Jupiter-mass companion to a main sequence star other than the Sun, heralding the birth of extrasolar planetary research; this book fully...

  12. Effects of advanced selection methods on sperm quality and ART outcome : a systematic review

    NARCIS (Netherlands)

    Said, Tamer M.; Land, Jolande A.

    2011-01-01

    BACKGROUND: Current routine semen preparation techniques do not inclusively target all intrinsic sperm characteristics that may impact the fertilization potential. In order to address these characteristics, several methods have been recently developed and applied to sperm selection. The objective of

  13. The advanced CECE process for enriching tritium by the chemical exchange method with a hydrophobic catalyst

    International Nuclear Information System (INIS)

    Kitamoto, Asashi; Shimizu, Masami; Masui, Takashi.

    1992-01-01

    The monothermal chemical exchange process with electrolysis, i.e., the CECE process, is an effective method for enriching and removing tritium from tritiated water of low to middle level activity. The purpose of this study is to propose the theoretical background of the two-parameter evaluation method, which is based on a two-step isotope exchange reaction between hydrogen gas and liquid water, for improving the performance of a hydrophobic catalyst in a trickle bed-type column. The two-parameter method could attain the highest isotope separation performance and the lowest liquid holdup for a trickle bed-type column, and will therefore provide effective and practical procedures for scaling up a tritium enrichment process. The main engineering design and system evaluation aspect of the CECE process was the development of an isotope exchange column with a high performance catalyst. (author)

  14. Advanced methods for light trapping in optically thin silicon solar cells

    Science.gov (United States)

    Nagel, James Richard

    2011-12-01

    The field of light trapping is the study of how best to absorb light in a thin film of material when most light either reflects away at the surface or transmits straight through to the other side. This has tremendous application to the field of photovoltaics where thin silicon films can be manufactured cheaply, but also fail to capture all of the available photons in the solar spectrum. Advancements in light trapping therefore bring us closer to the day when photovoltaic devices may reach grid parity with traditional fossil fuels on the electrical energy market. This dissertation advances our understanding of light trapping by first modeling the effects of loss in planar dielectric waveguides. The mathematical framework developed here can be used to model any arbitrary three-layer structure with mixed gain or loss and then extract the total field solution for the guided modes. It is found that lossy waveguides possess a greater number of eigenmodes than their lossless counterparts, and that these "loss guided" modes attenuate much more rapidly than conventional modes. Another contribution from this dissertation is the exploration of light trapping through the use of dielectric nanospheres embedded directly within the active layer of a thin silicon film. The primary benefit to this approach is that the device can utilize a surface nitride layer serving as an antireflective coating while still retaining the benefits of light trapping within the film. The end result is that light trapping and light injection are effectively decoupled from each other and may be independently optimized within a single photovoltaic device. The final contribution from this work is a direct numerical comparison between multiple light trapping schemes. This allows us to quantify the relative performances of various design techniques against one another and objectively determine which ideas tend to capture the most light. 
Using numerical simulation, this work directly compares the absorption

  15. An advanced method for classifying atmospheric circulation types based on prototypes connectivity graph

    Science.gov (United States)

    Zagouras, Athanassios; Argiriou, Athanassios A.; Flocas, Helena A.; Economou, George; Fotopoulos, Spiros

    2012-11-01

    Classification of weather maps at various isobaric levels has been used for many years as a methodological tool in problems related to meteorology, climatology, atmospheric pollution and other fields. Initially the classification was performed manually. The criteria used by the person performing the classification are features of isobars or isopleths of geopotential height, depending on the type of maps to be classified. Although manual classifications integrate the perceptual experience and other unquantifiable qualities of the meteorology specialists involved, they are typically subjective and time consuming. In recent years, various automated, so-called objective methods for atmospheric circulation classification have therefore been proposed. In this paper a new method for the classification of atmospheric circulation from isobaric maps is presented. The method is based on graph theory. It starts with an intelligent prototype selection using an over-partitioning mode of the fuzzy c-means (FCM) algorithm, proceeds to a graph formulation for the entire dataset and produces the clusters using the contemporary dominant sets clustering method. Graph theory allows a more efficient representation of spatially correlated data than the classical Euclidean space representations used in conventional classification methods. The method has been applied to the classification of the 850 hPa atmospheric circulation over the Eastern Mediterranean. The evaluation of the automated methods is performed with statistical indexes; results indicate that the classification is adequately comparable with other state-of-the-art automated map classification methods, for a variable number of clusters.
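
The prototype-selection step described above can be sketched with a minimal fuzzy c-means implementation; the two-dimensional synthetic data below stand in for flattened circulation maps, and the subsequent dominant-sets graph clustering step is omitted:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Standard FCM: alternate weighted-mean centers and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # membership columns sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = d ** (-2 / (m - 1))             # closer points get higher membership
        U /= U.sum(axis=0)
    return centers, U

# Two synthetic "circulation pattern" groups in a toy 2-D feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, U = fuzzy_c_means(X, c=4)   # c > true cluster count: over-partitioning
```

Running with more prototypes than true clusters, as here, mirrors the paper's over-partitioning mode: the surplus prototypes land inside the dense regions and serve as graph vertices for the later clustering stage.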

  16. Recent advances in conventional and contemporary methods for remediation of heavy metal-contaminated soils.

    Science.gov (United States)

    Sharma, Swati; Tiwari, Sakshi; Hasan, Abshar; Saxena, Varun; Pandey, Lalit M

    2018-04-01

    Remediation of heavy metal-contaminated soils has been drawing attention for quite some time, and developing new reclamation methods has become the need of the hour. Conventional methods of heavy metal-contaminated soil remediation have been in use for decades and have shown great results, but they have their own setbacks. The chemical and physical techniques, when used singly, generally generate by-products (toxic sludge or pollutants) and are not cost-effective, while biological processes are very slow and time-consuming. Hence, to overcome these limitations, an amalgamation of two or more techniques is being used. In view of these facts, new methods of biosorption, nanoremediation and microbial fuel cell techniques have been developed, which utilize the metabolic activities of microorganisms for bioremediation. These are cost-effective and efficient methods of remediation, which are now becoming an integral part of environmental and bioresource technology. In this contribution, we highlight various augmentations of the physical, chemical, and biological methods for the remediation of heavy metal-contaminated soils, weighing up their pros and cons. Further, we discuss combinations of the above techniques, such as physicochemical and physiobiological methods, with recent literature on the removal of heavy metals from contaminated soils. These combinations have shown synergistic effects, with a manyfold increase in the removal efficiency of heavy metals, along with economic feasibility.

  17. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.

  18. Method for the determination of the equation of state of advanced fuels based on the properties of normal fluids

    International Nuclear Information System (INIS)

    Hecht, M.J.; Catton, I.; Kastenberg, W.E.

    1976-12-01

    An equation of state based on the properties of normal fluids, the law of rectilinear averages, and the second law of thermodynamics can be derived for advanced LMFBR fuels from the vapor pressure, the enthalpy of vaporization, the change in heat capacity upon vaporization, and the liquid density at the melting point. The method consists of estimating an equation of state by means of the law of rectilinear averages and the second law of thermodynamics, integrating via the second law until an instability is reached, and then extrapolating through a self-consistent estimation of the enthalpy of vaporization.
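
Assuming a Clausius-Clapeyron vapor pressure curve and a linear rectilinear diameter, the ingredients named above can be sketched numerically; all property values below are hypothetical placeholders, not actual fuel data:

```python
import numpy as np

R = 8.314                        # J/(mol K)
M = 0.270                        # kg/mol, hypothetical heavy fuel species
T_m, rho_l_m = 3000.0, 8000.0    # melting point (K) and liquid density there
dHvap = 500e3                    # J/mol, enthalpy of vaporization (constant)
P_m = 1.0e2                      # Pa, assumed vapor pressure at melting point

def p_sat(T):
    """Clausius-Clapeyron integrated with constant dHvap."""
    return P_m * np.exp(-dHvap / R * (1.0 / T - 1.0 / T_m))

def rho_vapor(T):
    """Ideal-gas estimate of the saturated vapor density."""
    return p_sat(T) * M / (R * T)

def rho_liquid(T, slope=-1.0):
    """Law of rectilinear averages: (rho_l + rho_v)/2 varies linearly with T."""
    diameter = rho_l_m / 2 + slope * (T - T_m)   # assumed slope, kg/(m^3 K)
    return 2 * diameter - rho_vapor(T)

T = 3500.0
print(p_sat(T), rho_liquid(T))
```

The pattern matches the abstract's recipe: the vapor-pressure and enthalpy data fix the saturation curve, while the rectilinear-averages line ties the liquid branch to the vapor branch so both densities follow from one linear relation.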

  19. Mass Spectrometry-Based Methods for Identifying Oxidized Proteins in Disease: Advances and Challenges

    Directory of Open Access Journals (Sweden)

    Ivan Verrastro

    2015-04-01

    Many inflammatory diseases have an oxidative aetiology, which leads to oxidative damage to biomolecules, including proteins. It is now increasingly recognized that oxidative post-translational modifications (oxPTMs) of proteins affect cell signalling and behaviour, and can contribute to pathology. Moreover, oxidized proteins have potential as biomarkers for inflammatory diseases. Although many assays for generic protein oxidation and breakdown products of protein oxidation are available, only advanced tandem mass spectrometry approaches have the power to localize specific oxPTMs in identified proteins. While much work has been carried out using untargeted or discovery mass spectrometry approaches, identification of oxPTMs in disease has benefitted from the development of sophisticated targeted or semi-targeted scanning routines, combined with chemical labeling and enrichment approaches. Nevertheless, many potential pitfalls exist which can result in incorrect identifications. This review explains the limitations, advantages and challenges of all of these approaches to detecting oxidatively modified proteins, and provides an update on recent literature in which they have been used to detect and quantify protein oxidation in disease.

  20. Impact of advanced BWR core physics method on BWR core monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Moon, H; Wells, A [Siemens Power Corporation, Richland (United States)

    2000-07-01

    Siemens Power Corporation recently initiated development of POWERPLEX™-III for delivery to the Grand Gulf Nuclear Power Station. The main change introduced in POWERPLEX™-III as compared to its predecessor POWERPLEX™-II is the incorporation of the advanced BWR core simulator MICROBURN-B2. A number of issues were identified and evaluated relating to the implementation of MICROBURN-B2 and its impact on core monitoring. MICROBURN-B2 demands about three to five times more memory and two to three times more computing time than its predecessor MICROBURN-B in POWERPLEX™-II. POWERPLEX™-III will improve thermal margin prediction accuracy and provide more accurate plant operating conditions to operators than POWERPLEX™-II, due to its improved accuracy in predicted TIP values and critical k-effective. The most significant advantage of POWERPLEX™-III is its capability to monitor a relaxed rod sequence exchange operation. (authors)

  1. Advancing tuberculosis drug regimen development through innovative quantitative translational pharmacology methods and approaches.

    Science.gov (United States)

    Hanna, Debra; Romero, Klaus; Schito, Marco

    2017-03-01

    The development of novel tuberculosis (TB) multi-drug regimens that are more efficacious and of shorter duration requires a robust drug development pipeline. Advances in quantitative modeling and simulation can be used to maximize the utility of patient-level data from prior and contemporary clinical trials, thus optimizing study design for anti-TB regimens. This perspective article highlights the work of seven project teams developing first-in-class translational and quantitative methodologies that aim to inform drug development decision-making, dose selection, trial design, and safety assessments, in order to achieve shorter and safer therapies for patients in need. These tools offer the opportunity to evaluate multiple hypotheses and provide a means to identify, quantify, and understand relevant sources of variability, to optimize translation and clinical trial design. When incorporated into the broader regulatory sciences framework, these efforts have the potential to transform the development paradigm for TB combination development, as well as other areas of global health. Copyright © 2016. Published by Elsevier Ltd.

  2. IMPROVED COMPUTATIONAL NEUTRONICS METHODS AND VALIDATION PROTOCOLS FOR THE ADVANCED TEST REACTOR

    Energy Technology Data Exchange (ETDEWEB)

    David W. Nigg; Joseph W. Nielsen; Benjamin M. Chase; Ronnie K. Murray; Kevin A. Steuhm

    2012-04-01

    The Idaho National Laboratory (INL) is in the process of modernizing the various reactor physics modeling and simulation tools used to support operation and safety assurance of the Advanced Test Reactor (ATR). Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009 was successfully completed during 2011. This demonstration supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR fuel cycle management process beginning in 2012. On the experimental side of the project, new hardware was fabricated, measurement protocols were finalized, and the first four of six planned physics code validation experiments based on neutron activation spectrometry were conducted at the ATRC facility. Data analysis for the first three experiments, focused on characterization of the neutron spectrum in one of the ATR flux traps, has been completed. The six experiments will ultimately form the basis for a flexible, easily-repeatable ATR physics code validation protocol that is consistent with applicable ASTM standards.

  3. Advanced methods for the computation of particle beam transport and the computation of electromagnetic fields and beam-cavity interactions

    International Nuclear Information System (INIS)

    Dragt, A.J.; Gluckstern, R.L.

    1992-11-01

    The University of Maryland Dynamical Systems and Accelerator Theory Group carries out research in two broad areas: the computation of charged particle beam transport using Lie algebraic methods and advanced methods for the computation of electromagnetic fields and beam-cavity interactions. Important improvements in the state of the art are believed to be possible in both of these areas. In addition, applications of these methods are made to problems of current interest in accelerator physics including the theoretical performance of present and proposed high energy machines. The Lie algebraic method of computing and analyzing beam transport handles both linear and nonlinear beam elements. Tests show this method to be superior to the earlier matrix or numerical integration methods. It has wide application to many areas including accelerator physics, intense particle beams, ion microprobes, high resolution electron microscopy, and light optics. With regard to the area of electromagnetic fields and beam cavity interactions, work is carried out on the theory of beam breakup in single pulses. Work is also done on the analysis of the high frequency behavior of longitudinal and transverse coupling impedances, including the examination of methods which may be used to measure these impedances. Finally, work is performed on the electromagnetic analysis of coupled cavities and on the coupling of cavities to waveguides

  4. The Advanced Glaucoma Intervention Study (AGIS): 1. Study design and methods and baseline characteristics of study patients.

    Science.gov (United States)

    Ederer, F; Gaasterland, D E; Sullivan, E K

    1994-08-01

    Medical therapy has been the standard initial treatment for open-angle glaucoma. When some visual field has been lost and maximum tolerated and effective medical therapy does not succeed in controlling the disease, the patient is considered to have advanced glaucoma, and the first of a potential sequence of surgical treatments is usually indicated. Little is known about the long-term course and prognosis of advanced glaucoma or about the long-term effectiveness of sequential surgical treatments in controlling the disease and preventing vision loss and blindness. The Advanced Glaucoma Intervention Study was designed to study, in advanced glaucoma, the long-term clinical course and prognosis, and, in a randomized trial, the comparative outcomes of two sequences of surgical treatments. Toward these goals, 789 eyes in 591 patients were enrolled at 11 clinical centers between 1988 and 1992. Follow-up will continue until 1996. Eyes were randomly assigned to one of two sequences of surgical treatments. One sequence begins with argon laser trabeculoplasty (ALT), is followed by trabeculectomy, an incisional surgical filtering procedure, should ALT fail to control the disease, and by a second trabeculectomy should the first trabeculectomy fail. The other sequence begins with trabeculectomy, is followed by ALT should the trabeculectomy fail, and by a second trabeculectomy should ALT fail. The main outcome of interest is visual function (visual field and visual acuity). Other important outcomes are intraocular pressure, complications of surgery, time to treatment failure, and extent of need for additional medical therapy. We present in this paper the rationale, objectives, design and methods of the study, and the baseline characteristics of study patients and eyes.

  5. Evaluation of advanced automatic PET segmentation methods using nonspherical thin-wall inserts

    International Nuclear Information System (INIS)

    Berthon, B.; Marshall, C.; Evans, M.; Spezi, E.

    2014-01-01

    Purpose: The use of positron emission tomography (PET) within radiotherapy treatment planning requires the availability of reliable and accurate segmentation tools. PET automatic segmentation (PET-AS) methods have been recommended for the delineation of tumors, but there is still a lack of thorough validation and cross-comparison of such methods using clinically relevant data. In particular, studies validating PET segmentation tools mainly use phantoms with thick-wall plastic inserts of simple spherical geometry and have not specifically investigated the effect of the target object geometry on the delineation accuracy. Our work therefore aimed at generating clinically realistic data using nonspherical thin-wall plastic inserts for the evaluation and comparison of a set of eight promising PET-AS approaches. Methods: Sixteen nonspherical inserts were manufactured with a plastic wall of 0.18 mm and scanned within a custom plastic phantom. These included ellipsoids and toroids of different volumes, as well as tubes and pear- and drop-shaped inserts with different aspect ratios. A set of six spheres with volumes ranging from 0.5 to 102 ml was used for a baseline study. A selection of eight PET-AS methods, written in-house, was applied to the images obtained. The methods represented promising segmentation approaches such as adaptive iterative thresholding, region growing, clustering, and gradient-based schemes. The delineation accuracy was measured in terms of overlap with the computed tomography reference contour, using the Dice similarity coefficient (DSC), and in terms of error in dimensions. Results: The delineation accuracy was lower for nonspherical inserts than for spheres of the same volume in 88% of cases. Slice-by-slice gradient-based methods showed particularly low DSC for tori. Most methods achieved a DSC above 0.76 except for tori, but showed the largest errors in recovering the dimensions of the pear- and drop-shaped inserts (higher than 10% and 30% of the true length, respectively). Large errors were visible
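
    The Dice similarity coefficient named in this record is a standard overlap measure, DSC = 2|A∩B|/(|A|+|B|). A minimal Python sketch, using a hypothetical voxel-set representation of the contours (not the authors' in-house code):

```python
def dice_similarity(seg, ref):
    """Dice similarity coefficient between two voxel sets.

    seg, ref: sets of voxel indices (e.g. (x, y, z) tuples) labeled
    as inside a contour. DSC = 2|A & B| / (|A| + |B|): 1.0 means
    perfect overlap, 0.0 means no overlap.
    """
    if not seg and not ref:
        return 1.0  # two empty contours agree trivially
    return 2.0 * len(seg & ref) / (len(seg) + len(ref))

# Toy example: an automatic delineation vs. a CT reference contour
auto = {(x, 0, 0) for x in range(0, 8)}  # 8 voxels
ct = {(x, 0, 0) for x in range(2, 10)}   # 8 voxels, 6 shared
print(dice_similarity(auto, ct))  # 2*6/(8+8) = 0.75
```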

  6. Analysis Method for Laterally Loaded Pile Groups Using an Advanced Modeling of Reinforced Concrete Sections

    Directory of Open Access Journals (Sweden)

    Stefano Stacul

    2018-02-01

    Full Text Available A Boundary Element Method (BEM) approach was developed for the analysis of pile groups. The proposed method includes: the non-linear behavior of the soil by a hyperbolic modulus reduction curve; the non-linear response of reinforced concrete pile sections, also taking into account the influence of tension stiffening; the influence of suction by increasing the stiffness of shallow portions of soil and modeled using the Modified Kovacs model; pile group shadowing effect, modeled using an approach similar to that proposed in the Strain Wedge Model for pile groups analyses. The proposed BEM method saves computational effort compared to more sophisticated codes such as VERSAT-P3D, PLAXIS 3D and FLAC-3D, and provides reliable results using input data from a standard site investigation. The reliability of this method was verified by comparing results from data from full scale and centrifuge tests on single piles and pile groups. A comparison is presented between measured and computed data on a laterally loaded fixed-head pile group composed of reinforced concrete bored piles. The results of the proposed method are shown to be in good agreement with those obtained in situ.

  7. Analysis Method for Laterally Loaded Pile Groups Using an Advanced Modeling of Reinforced Concrete Sections.

    Science.gov (United States)

    Stacul, Stefano; Squeglia, Nunziante

    2018-02-15

    A Boundary Element Method (BEM) approach was developed for the analysis of pile groups. The proposed method includes: the non-linear behavior of the soil by a hyperbolic modulus reduction curve; the non-linear response of reinforced concrete pile sections, also taking into account the influence of tension stiffening; the influence of suction by increasing the stiffness of shallow portions of soil and modeled using the Modified Kovacs model; pile group shadowing effect, modeled using an approach similar to that proposed in the Strain Wedge Model for pile groups analyses. The proposed BEM method saves computational effort compared to more sophisticated codes such as VERSAT-P3D, PLAXIS 3D and FLAC-3D, and provides reliable results using input data from a standard site investigation. The reliability of this method was verified by comparing results from data from full scale and centrifuge tests on single piles and pile groups. A comparison is presented between measured and computed data on a laterally loaded fixed-head pile group composed of reinforced concrete bored piles. The results of the proposed method are shown to be in good agreement with those obtained in situ.

  8. ADVANCEMENTS IN TIME-SPECTRA ANALYSIS METHODS FOR LEAD SLOWING-DOWN SPECTROSCOPY

    International Nuclear Information System (INIS)

    Smith, Leon E.; Anderson, Kevin K.; Gesh, Christopher J.; Shaver, Mark W.

    2010-01-01

    Direct measurement of Pu in spent nuclear fuel remains a key challenge for safeguarding nuclear fuel cycles of today and tomorrow. Lead slowing-down spectroscopy (LSDS) is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic mass with an uncertainty lower than the approximately 10 percent typical of today's confirmatory assay methods. Pacific Northwest National Laboratory's (PNNL) previous work to assess the viability of LSDS for the assay of pressurized water reactor (PWR) assemblies indicated that the method could provide direct assay of Pu-239 and U-235 (and possibly Pu-240 and Pu-241) with uncertainties less than a few percent, assuming suitably efficient instrumentation, an intense pulsed neutron source, and improvements in the time-spectra analysis methods used to extract isotopic information from a complex LSDS signal. This previous simulation-based evaluation used relatively simple PWR fuel assembly definitions (e.g. constant burnup across the assembly) and a constant initial enrichment and cooling time. The time-spectra analysis method was founded on a preliminary analytical model of self-shielding intended to correct for assay-signal nonlinearities introduced by attenuation of the interrogating neutron flux within the assembly.

  9. A review of recent advances in the spherical harmonics expansion method for semiconductor device simulation.

    Science.gov (United States)

    Rupp, K; Jungemann, C; Hong, S-M; Bina, M; Grasser, T; Jüngel, A

    The Boltzmann transport equation is commonly considered to be the best semi-classical description of carrier transport in semiconductors, providing precise information about the distribution of carriers with respect to time (one dimension), location (three dimensions), and momentum (three dimensions). However, numerical solutions for the seven-dimensional carrier distribution functions are very demanding. The most common solution approach is the stochastic Monte Carlo method, because the gigabytes of memory required by deterministic direct solution approaches have become available only recently. As a remedy, the higher accuracy provided by solutions of the Boltzmann transport equation is often exchanged for lower computational expense by using simpler models based on macroscopic quantities such as carrier density and mean carrier velocity. Recent developments for the deterministic spherical harmonics expansion method have reduced the computational cost of solving the Boltzmann transport equation, enabling the computation of carrier distribution functions even for spatially three-dimensional device simulations within minutes to hours. We summarize recent progress for the spherical harmonics expansion method and show that small currents, reasonable execution times, and rare events such as low-frequency noise, which are all hard or even impossible to simulate with the established Monte Carlo method, can be handled in a straightforward manner. The applicability of the method for important practical applications is demonstrated for noise simulation, small-signal analysis, hot-carrier degradation, and avalanche breakdown.

  10. Recent advances in sample preparation techniques and methods of sulfonamides detection - A review.

    Science.gov (United States)

    Dmitrienko, Stanislava G; Kochuk, Elena V; Apyari, Vladimir V; Tolmacheva, Veronika V; Zolotov, Yury A

    2014-11-19

    Sulfonamides (SAs) have been the most widely used antimicrobial drugs for more than 70 years, and their residues in foodstuffs and environmental samples pose serious health hazards. For this reason, sensitive and specific methods for the quantification of these compounds in numerous matrices have been developed. This review intends to provide an updated overview of the recent trends over the past five years in sample preparation techniques and methods for detecting SAs. Examples of the sample preparation techniques, including liquid-liquid and solid-phase extraction, dispersive liquid-liquid microextraction and QuEChERS, are given. Different methods of detecting the SAs present in food and feed and in environmental, pharmaceutical and biological samples are discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Recent Advances in the Korringa-Kohn-Rostoker Green Function Method

    Directory of Open Access Journals (Sweden)

    Zeller Rudolf

    2014-01-01

    Full Text Available The Korringa-Kohn-Rostoker (KKR) Green function (GF) method is a technique for all-electron full-potential density-functional calculations. Similar to the historical Wigner-Seitz cellular method, the KKR-GF method uses a partitioning of space into atomic Wigner-Seitz cells. However, the numerically demanding wave-function matching at the cell boundaries is avoided by use of an integral equation formalism based on the concept of reference Green functions. The advantage of this formalism will be illustrated by the recent progress made for very large systems with thousands of inequivalent atoms and for very accurate calculations of atomic forces and total energies.

  12. Simple tool for the rapid, automated quantification of glacier advance/retreat observations using multiple methods

    Science.gov (United States)

    Lea, J.

    2017-12-01

    The quantification of glacier change is a key variable within glacier monitoring, with the method used potentially being crucial to ensuring that data can be appropriately compared with environmental data. The topic and timescales of study (e.g. land/marine terminating environments; sub-annual/decadal/centennial/millennial timescales) often mean that different methods are more suitable for different problems. However, depending on the GIS/coding expertise of the user, some methods can potentially be time consuming to undertake, making large-scale studies problematic. In addition, examples exist where different users have nominally applied the same methods in different studies, though with minor methodological inconsistencies in their approach. In turn, this will have implications for data homogeneity where regional/global datasets may be constructed. Here, I present a simple toolbox scripted in a Matlab® environment that requires only glacier margin and glacier centreline data to quantify glacier length, glacier change between observations, rate of change, in addition to other metrics. The toolbox includes the option to apply the established centreline or curvilinear box methods, or a new method: the variable box method - designed for tidewater margins where box width is defined as the total width of the individual terminus observation. The toolbox is extremely flexible, and has the option to be applied as either Matlab® functions within user scripts, or via a graphical user interface (GUI) for those unfamiliar with a coding environment. In both instances, there is potential to apply the methods quickly to large datasets (100s-1000s of glaciers, with potentially similar numbers of observations each), thus ensuring large scale methodological consistency (and therefore data homogeneity) and allowing regional/global scale analyses to be achievable for those with limited GIS/coding experience. The toolbox has been evaluated against idealised scenarios demonstrating
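
    The centreline method mentioned in this record reduces each margin observation to a distance along the glacier centreline; differencing successive distances gives glacier change and rate of change. A minimal Python sketch under simplifying assumptions (the terminus observation is snapped to a centreline vertex; all names are illustrative, not the toolbox API):

```python
import math

def centreline_length(centreline, terminus):
    """Distance along the centreline from the glacier head to the
    terminus observation.

    centreline: ordered (x, y) vertices from head to front;
    terminus: an (x, y) point assumed to coincide with a vertex.
    """
    length = 0.0
    for prev, pt in zip(centreline, centreline[1:]):
        length += math.dist(prev, pt)  # add each segment length
        if pt == terminus:
            return length
    return length

# Two terminus observations one year apart on a straight centreline
line = [(0, 0), (1000, 0), (2000, 0), (3000, 0)]
l1 = centreline_length(line, (3000, 0))  # first observation
l2 = centreline_length(line, (2000, 0))  # one year later
print(l1, l2, l2 - l1)  # 3000.0 2000.0 -1000.0 (i.e. 1000 m retreat)
```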

  13. [Research advances in indices and methods for nutritional status evaluation in patients with liver cirrhosis].

    Science.gov (United States)

    Li, H; Zhang, L

    2017-03-20

    In recent years, malnutrition in patients with liver cirrhosis has been taken more and more seriously by clinical physicians, and patients' nutritional status is closely associated with prognosis. At present, there are many methods for the evaluation of nutritional status in patients with liver cirrhosis, but there are still no unified standards. This article reviews the common evaluation indices and methods used in clinical practice in China and foreign countries, in order to provide a basis for accurately evaluating nutritional status and guiding nutritional therapy in patients with liver cirrhosis.

  14. Fabrication of advanced Bragg gratings with complex apodization profiles by use of the polarization control method

    DEFF Research Database (Denmark)

    Deyerl, Hans-Jürgen; Plougmann, Nikolai; Jensen, Jesper Bo Damm

    2004-01-01

    The polarization control method offers a flexible, robust, and low-cost route for the parallel fabrication of gratings with complex apodization profiles including several discrete phase shifts and chirp. The performance of several test gratings is evaluated in terms of their spectral response and compared with theoretical predictions. Short gratings with sidelobe-suppression levels in excess of 32 dB and transmission dips lower than 80 dB have been realized. Finally, most of the devices fabricated by the polarization control method show comparable quality to gratings manufactured by far more

  15. Nanotechnology solutions for Alzheimer's disease: advances in research tools, diagnostic methods and therapeutic agents.

    Science.gov (United States)

    Nazem, Amir; Mansoori, G Ali

    2008-03-01

    A century of research has passed since the discovery and definition of Alzheimer's disease (AD), the most common dementing disorder worldwide. However, AD lacks definite diagnostic approaches and an effective cure at present. Moreover, the currently available diagnostic tools are not sufficient for an early screening of AD in order to start preventive approaches. Recently the emerging field of nanotechnology has promised new techniques to solve some of the AD challenges. Nanotechnology refers to the techniques of designing and manufacturing nanosize (1-100 nm) structures through controlled positional and/or self-assembly of atoms and molecules. In this report, we present the promises that nanotechnology brings to research on AD diagnosis and therapy. They include its potential for a better understanding of the root-cause molecular mechanisms of AD, its early diagnosis, and effective treatment. The advances in AD research offered by atomic force microscopy, single-molecule fluorescence microscopy and NanoSIMS microscopy are examined here. In addition, the recently proposed applications of nanotechnology for the early diagnosis of AD, including the bio-barcode assay, localized surface plasmon resonance nanosensors, quantum dots and nanomechanical cantilever arrays, are analyzed. Applications of nanotechnology in AD therapy, including neuroprotection against oxidative stress, anti-amyloid therapeutics, neuroregeneration and drug delivery beyond the blood-brain barrier (BBB), are discussed and analyzed. All of these applications could improve the treatment approach of AD and other neurodegenerative diseases. The complete cure of AD may become feasible by a combination of nanotechnology and some other novel approaches, like stem cell technology.

  16. Combining advanced networked technology and pedagogical methods to improve collaborative distance learning.

    Science.gov (United States)

    Staccini, Pascal; Dufour, Jean-Charles; Raps, Hervé; Fieschi, Marius

    2005-01-01

    Making educational material available on a network cannot be reduced to merely implementing hypermedia and interactive resources on a server. A pedagogical schema has to be defined to guide students in learning and to provide teachers with guidelines to prepare valuable and upgradeable resources. Components of a learning environment, as well as interactions between students and other roles such as author, tutor and manager, can be deduced from cognitive foundations of learning, such as the constructivist approach. Scripting the way a student will navigate among information nodes and interact with tools to build his/her own knowledge can be a good way of deducing the features of the graphic interface related to the management of the objects. We defined a typology of pedagogical resources, their data model and their logic of use. We implemented a generic, web-based authoring and publishing platform (called J@LON, for Join And Learn On the Net) within an object-oriented and open-source programming environment (called Zope) embedding a content management system (called Plone). Workflow features have been used to mark the progress of students and to trace the life cycle of resources shared by the teaching staff. The platform integrates advanced online authoring features to create interactive exercises and to support live course delivery. The platform engine has been generalized to the whole curriculum of medical studies in our faculty; it also supports an international master's program in health care risk management and will be extended to all other continuing education diplomas.

  17. Methods and tools for the evaluation of the sensitivity to natural radiations of advanced integrated circuits

    International Nuclear Information System (INIS)

    Peronnard, P.

    2009-10-01

    Atmospheric neutrons, whose fluxes and energies depend on the altitude, the sun's activity and the geographic coordinates, have been identified as being capable of provoking SEE (Single Event Effects), by indirect ionisation, in integrated devices issued from advanced manufacturing processes (nanometric devices). This concerns not only avionics but also applications operating at ground level. The evaluation of the sensitivity to SEE provoked by natural radiation thus becomes a mandatory step during the selection of devices intended for applications requiring high reliability. The sensitivity to SEE can be mitigated by different approaches at different levels, from the manufacturing level (use of particular process technologies such as SOI, Silicon On Insulator) to the system level (hardware/software redundancy). Independently of the adopted hardening approach, so-called radiation ground tests are mandatory to evaluate the error rates of a device or a system. During such tests, the DUT (Device Under Test) is exposed to a flux of particles while it performs a given activity. For SEU (Single Event Upset) radiation ground testing, two main strategies exist. Static test: the circuit areas supposed to be sensitive to SEUs (registers, memories, ...) are initialized with a reference pattern, and the content of the sensitive areas is periodically compared to that pattern to identify potential SEUs. Dynamic test: the DUT performs an activity representative of the one it will execute in the final application. Static test strategies are frequently adopted as they provide the intrinsic sensitivity, in terms of the average number of particles needed to provoke an SEU, of the different sensitive areas of the device. A 'worst-case estimation' of the device sensitivity can thus be obtained from such a strategy. This thesis aims at describing and validating the methodologies required to estimate the sensitivity to radiations of two types of
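
    The static-test strategy described in this record amounts to writing a known pattern and counting flipped bits on read-back. A minimal Python sketch (hypothetical word-level representation, not tied to any particular test bench):

```python
def count_seus(reference, readback):
    """Count single-event upsets (flipped bits) in a memory read-back.

    reference, readback: equal-length sequences of memory words as
    written before irradiation and as read back afterwards.
    Returns (total flipped bits, list of affected addresses).
    """
    upsets, addresses = 0, []
    for addr, (ref, got) in enumerate(zip(reference, readback)):
        flips = bin(ref ^ got).count("1")  # XOR isolates flipped bits
        if flips:
            upsets += flips
            addresses.append(addr)
    return upsets, addresses

# 4-word memory initialised with a 0xAA checkerboard reference pattern
ref = [0xAA] * 4
got = [0xAA, 0xAB, 0xAA, 0x2A]  # two words each had one bit flip
print(count_seus(ref, got))  # (2, [1, 3])
```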

  18. Final Report: Advanced Methods for Accessing and Disseminating Nuclear Data, August 13, 1996 - March 15, 1999

    International Nuclear Information System (INIS)

    Stone, Craig A.

    1999-01-01

    Scientific Digital Visions, Inc. developed methods of accessing and disseminating nuclear data contained within the databases of the National Nuclear Data Center (NNDC) at Brookhaven National Laboratory, supporting a long-standing and important DOE program to provide scientists with access to NNDC databases. The NNDC participated as a partner in this effort.

  19. Motivating and Facilitating Advancements in Space Weather Real-Time Data Availability: Factors, Data, and Access Methods

    Science.gov (United States)

    Pankratz, C. K.; Baker, D. N.; Jaynes, A. N.; Elkington, S. R.; Baltzer, T.; Sanchez, F.

    2017-12-01

    Society's growing reliance on complex and highly interconnected technological systems makes us increasingly vulnerable to the effects of space weather events - maybe more than for any other natural hazard. An extreme solar storm today could conceivably impact hundreds of the more than 1400 operating Earth satellites. Such an extreme storm could cause collapse of the electrical grid on continental scales. The effects on navigation, communication, and remote sensing of our home planet could be devastating to our social functioning. Thus, it is imperative that the scientific community address the question of just how severe events might become. At least as importantly, it is crucial that policy makers and public safety officials be informed by the facts on what might happen during extreme conditions. This requires essentially real-time alerts, warnings, and also forecasts of severe space weather events, which in turn demands measurements, models, and associated data products to be available via the most effective data discovery and access methods possible. Similarly, advancement in the fundamental scientific understanding of space weather processes is also vital, requiring that researchers have convenient and effective access to a wide variety of data sets and models from multiple sources. The space weather research community, as with many scientific communities, must access data from dispersed and often uncoordinated data repositories to acquire the data necessary for the analysis and modeling efforts that advance our understanding of solar influences and space physics on the Earth's environment. The Laboratory for Atmospheric and Space Physics (LASP), as a leading institution in both producing data products and advancing the state of scientific understanding of space weather processes, is well positioned to address many of these issues. In this presentation, we will outline the motivating factors for effective space weather data access, summarize the various data

  20. A pilot study of the experience of family caregivers of patients with advanced pancreatic cancer using a mixed methods approach.

    Science.gov (United States)

    Sherman, Deborah W; McGuire, Deborah B; Free, David; Cheon, Joo Young

    2014-09-01

    Pancreatic cancer presents a wide spectrum of significant symptomatology. The high symptom burden, coupled with a rapidly fatal diagnosis, limits preparation or time for adjustment for both patients and their family caregivers. From the initial diagnosis and throughout the illness experience, the physical and emotional demands of caregiving can predispose caregivers themselves to illness and a greater risk of mortality. Understanding the negative and positive aspects of caregiving for patients with advanced pancreatic cancer will inform interventions that promote positive caregiver outcomes and support caregivers in their role. To provide feasibility data for a larger, mixed methods, longitudinal study focused on the experience of family caregivers of patients with advanced pancreatic cancer and preliminary qualitative data to substantiate the significance of studying this caregiver population. This was a mixed methods study guided by the Stress Process Model. Eight family caregivers of patients with advanced pancreatic cancer from oncology practices of a university-affiliated medical center were surveyed. The pilot results supported the ability to recruit and retain participants and informed recruitment and data collection procedures. The qualitative results provided preliminary insights into caregiver experiences during the diagnosis and treatment phases. Key findings that substantiated the significance of studying these caregivers included the caregiving context of the history of sentinel symptoms, the crisis of diagnosis, the violation of assumptions about life and health, recognition of the circle of association, and contextual factors, as well as primary and secondary stressors, coping strategies, resources, discoveries, gains and growth, associated changes/transitions, and unmet caregiver needs. 
Findings indicated caregivers' willingness to participate in research, highlighted the negative and positive aspects of the caregiver experience, and reinforced the

  1. Effects of Different Palliative Jaundice Reducing Methods on Immunologic Functions in Patients with Advanced Malignant Obstructive Jaundice.

    Science.gov (United States)

    Tang, Kun; Sui, Lu-Lu; Xu, Gang; Zhang, Tong; Liu, Qiang; Liu, Xiao-Fang

    2017-08-01

    This study aimed to investigate the effects of three treatment methods on the immunological function of patients with advanced malignant obstructive jaundice (MOJ). Patients with advanced MOJ were randomly divided into three groups according to biliary drainage method. Levels of multiple indices were measured at different time points. After drainage, the levels of complement 3 (C3) and complement 4 (C4) were increased. Fourteen days post-operation, the levels of immunoglobulin G (IgG), immunoglobulin A (IgA) and immunoglobulin M (IgM) in the group undergoing palliative surgery decreased significantly compared to those in both the percutaneous transhepatic cholangio drainage (PTCD) and endoscopic retrograde biliary drainage (ERBD) groups. The level of serum endotoxin in the group undergoing palliative surgery decreased gradually. Palliative surgery for reducing jaundice is superior to PTCD and ERBD in improving the immune function of patients with MOJ. Copyright© 2017, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  2. Advanced Research and Data Methods in Women's Health: Big Data Analytics, Adaptive Studies, and the Road Ahead.

    Science.gov (United States)

    Macedonia, Christian R; Johnson, Clark T; Rajapakse, Indika

    2017-02-01

    Technical advances in science have had broad implications in reproductive and women's health care. Recent innovations in population-level data collection and storage have made available an unprecedented amount of data for analysis while computational technology has evolved to permit processing of data previously thought too dense to study. "Big data" is a term used to describe data that are a combination of dramatically greater volume, complexity, and scale. The number of variables in typical big data research can readily be in the thousands, challenging the limits of traditional research methodologies. Regardless of what it is called, advanced data methods, predictive analytics, or big data, this unprecedented revolution in scientific exploration has the potential to dramatically assist research in obstetrics and gynecology broadly across subject matter. Before implementation of big data research methodologies, however, potential researchers and reviewers should be aware of strengths, strategies, study design methods, and potential pitfalls. Examination of big data research examples contained in this article provides insight into the potential and the limitations of this data science revolution and practical pathways for its useful implementation.

  3. Advances in intelligent diagnosis methods for pulmonary ground-glass opacity nodules.

    Science.gov (United States)

    Yang, Jing; Wang, Hailin; Geng, Chen; Dai, Yakang; Ji, Jiansong

    2018-02-07

    Pulmonary nodules are among the important lesions of lung cancer, mainly divided into two categories: solid nodules and ground-glass nodules. Improving the diagnosis of lung cancer has great clinical significance and could be realized by machine learning techniques. At present, much research has focused on solid nodules, whereas research on ground-glass nodules started later and has produced fewer results. This paper summarizes the research progress in intelligent diagnosis methods for pulmonary nodules since 2014. It is described in detail from four aspects: nodular signs, data analysis methods, prediction models and system evaluation. This paper aims to provide research material for researchers in the clinical diagnosis and intelligent analysis of lung cancer, and to further improve the precision of pulmonary ground-glass nodule diagnosis.

  4. Recent advances in sample preparation methods for analysis of endocrine disruptors from various matrices.

    Science.gov (United States)

    Singh, Baljinder; Kumar, Ashwini; Malik, Ashok Kumar

    2014-01-01

    Due to the high toxicity of endocrine disruptors (EDs), studies are being undertaken to design effective techniques for separation and detection of EDs in various matrices. Recently, research activities in this area have shown that a diverse range of chromatographic techniques are available for the quantification and analysis of EDs. Therefore, on the basis of significant, recent original publications, we aimed at providing an overview of different separation and detection methods for the determination of trace-level concentrations of selected EDs. The biological effects of EDs and current pretreatment techniques applied to EDs are also discussed. Various types of chromatographic techniques are presented for quantification, highlighting time- and cost-effective techniques that separate and quantify trace levels of multiple EDs from various environmental matrices. Reports related to methods for the quantification of EDs from various matrices primarily published since 2008 have been cited.

  5. The convergence problem for dissipative autonomous systems classical methods and recent advances

    CERN Document Server

    Haraux, Alain

    2015-01-01

    The book investigates classical and more recent methods of study for the asymptotic behavior of dissipative continuous dynamical systems with applications to ordinary and partial differential equations, the main question being convergence (or not) of the solutions to an equilibrium. After reviewing the basic concepts of topological dynamics and the definition of gradient-like systems on a metric space, the authors present a comprehensive exposition of stability theory relying on the so-called linearization method. For the convergence problem itself, when the set of equilibria is infinite, the only general results that do not require very special features of the non-linearities are presently consequences of a gradient inequality discovered by S. Lojasiewicz. The application of this inequality jointly with the so-called Liapunov-Schmidt reduction requires a rigorous exposition of Semi-Fredholm operator theory and the theory of real analytic maps on infinite dimensional Banach spaces, which cannot be found anywh...

  6. WAG (water-alternating-gas) as a method for petroleum advanced recovering

    International Nuclear Information System (INIS)

    Campozana, Fernando P.; Mato, Luiz F.

    2000-01-01

    Water-Alternating-Gas (WAG) injection is an oil recovery method that is increasingly applied worldwide. Oil recovery has been increased by up to 20% (over conventional waterflooding) in field-scale WAG projects. This additional recovery has been attributed to improved sweep and areal efficiency as well as microscopic displacement efficiency. Field results have shown that the WAG method not only combines the advantages of gas and water injection but also leads to more stable fronts and better mobility control. Moreover, three-phase flow usually leads to a lower residual oil saturation than two-phase flow. In this study, we show some theoretical aspects of WAG as well as some results obtained from numerical simulation of a pilot project to be implemented in the Aracas field, Bahia, Brazil. (author)

  7. High Energy Beam Impacts on Beam Intercepting Devices: Advanced Numerical Methods and Experimental Set-up

    CERN Document Server

    Bertarelli, A; Carra, F; Cerutti, F; Dallocchio, A; Mariani, N; Timmins, M; Peroni, L; Scapin, M

    2011-01-01

    Beam Intercepting Devices are potentially exposed to severe accidental events triggered by direct impacts of energetic particle beams. State-of-the-art numerical methods are required to simulate the behaviour of affected components. A review of the different dynamic response regimes is presented, along with an indication of the most suited tools to treat each of them. The consequences on LHC tungsten collimators of a number of beam abort scenarios were extensively studied, resorting to a novel category of numerical explicit methods, named Hydrocodes. Full shower simulations were performed providing the energy deposition distribution. Structural dynamics and shock wave propagation analyses were carried out with varying beam parameters, identifying important thresholds for collimator operation, ranging from the onset of permanent damage up to catastrophic failure. Since the main limitation of these tools lies in the limited information available on constitutive material models under extreme conditions, a dedica...

  9. International Symposium on Boundary Element Methods : Advances in Solid and Fluid Mechanics

    CERN Document Server

    Tseng, Kadin

    1990-01-01

    The Boundary Element Method (BEM) has become established as an effective tool for the solution of problems in engineering science. The salient features of the BEM have been well documented in the open literature and therefore will not be elaborated here. BEM research has progressed rapidly, especially in the past decade, and continues to evolve worldwide. This Symposium was organized to provide an international forum for the presentation of current research in BEM for linear and nonlinear problems in solid and fluid mechanics and related areas. To this end, papers on the following topics were included: rotary-wing aerodynamics, unsteady aerodynamics, design and optimization, elasticity, elastodynamics and elastoplasticity, fracture mechanics, acoustics, diffusion and wave motion, thermal analysis, mathematical aspects and boundary/finite element coupled methods. A special session was devoted to parallel/vector supercomputing with emphasis on massive parallelism. This Symposium was sponsored by United ...

  10. Advanced Corrections of Hydrogen Bonding and Dispersion for Semiempirical Quantum Mechanical Methods

    Czech Academy of Sciences Publication Activity Database

    Řezáč, Jan; Hobza, Pavel

    2012-01-01

    Roč. 8, č. 1 (2012), s. 141-151 ISSN 1549-9618 Grant - others:European Social Fund(XE) CZ.1.05/2.1.00/03.0058 Institutional research plan: CEZ:AV0Z40550506 Keywords: tight-binding method * noncovalent complexes * base-pairs * interaction energies Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 5.389, year: 2012

  11. Safety and reliability of pressure components with special emphasis on advanced methods of NDT. Vol. 2

    International Nuclear Information System (INIS)

    1986-01-01

    The 12 papers discuss topics of strength and safety in the field of materials technology and engineering. Conclusions for NPP component safety and materials are drawn. Measurements and studies relate to fracture mechanics methods (oscillation, burst, material strength, characteristics). The dynamic analysis of the behaviour of large test specimens, the influence of load velocity on crack resistance curve and the development of forged parts from austenitic steel for fast breeder reactors are presented. (DG) [de

  12. Machine learning and statistical methods for the prediction of maximal oxygen uptake: recent advances

    Directory of Open Access Journals (Sweden)

    Abut F

    2015-08-01

    Full Text Available Fatih Abut, Mehmet Fatih Akay; Department of Computer Engineering, Çukurova University, Adana, Turkey. Abstract: Maximal oxygen uptake (VO2max) indicates how many milliliters of oxygen the body can consume in a state of intense exercise per minute. VO2max plays an important role in both sport and medical sciences for different purposes, such as indicating the endurance capacity of athletes or serving as a metric in estimating the disease risk of a person. In general, the direct measurement of VO2max provides the most accurate assessment of aerobic power. However, despite a high level of accuracy, practical limitations associated with the direct measurement of VO2max, such as the requirement of expensive and sophisticated laboratory equipment or trained staff, have led to the development of various regression models for predicting VO2max. Consequently, a lot of studies have been conducted in the last years to predict VO2max of various target audiences, ranging from soccer athletes, nonexpert swimmers, cross-country skiers to healthy-fit adults, teenagers, and children. Numerous prediction models have been developed using different sets of predictor variables and a variety of machine learning and statistical methods, including support vector machine, multilayer perceptron, general regression neural network, and multiple linear regression. The purpose of this study is to give a detailed overview about the data-driven modeling studies for the prediction of VO2max conducted in recent years and to compare the performance of various VO2max prediction models reported in related literature in terms of two well-known metrics, namely, multiple correlation coefficient (R) and standard error of estimate. The survey results reveal that with respect to regression methods used to develop prediction models, support vector machine, in general, shows better performance than other methods, whereas multiple linear regression exhibits the worst performance.

  13. Safety and reliability of pressure components with special emphasis on advanced methods of NDT. Vol. 1

    International Nuclear Information System (INIS)

    1986-01-01

    24 papers discuss various methods for nondestructive testing of materials, e.g. eddy current measurement, EMAG analyser, tomography, ultrasound, holographic interferometry, and optical sound field camera. Special consideration is given to mathematical programmes and tests allowing to determine fracture-mechanical parameters and to assess cracks in various components, system parts and individual specimens both in pressurized systems and NPP systems. Studies focus on weld seams and adjacent areas. (DG) [de

  14. Extraction, Analytical and Advanced Methods for Detection of Allura Red AC (E129 in Food and Beverages Products

    Directory of Open Access Journals (Sweden)

    Shafiquzzaman eSiddiquee

    2016-05-01

    Full Text Available Allura Red AC (E129) is an azo dye that is widely used in drinks, juices, bakery, meat and sweets products. High consumption of Allura Red has been claimed to have adverse effects on human health, including allergies, food intolerance, cancer, multiple sclerosis, attention deficit hyperactivity disorder (ADHD), brain damage, nausea, cardiac disease and asthma, due to the reaction of aromatic azo compounds (R = R' = aromatic). Several countries have banned or strictly controlled the use of Allura Red in food and beverage products. This review critically summarizes the available analytical and advanced methods for the determination of Allura Red and also concisely discusses the acceptable daily intake (ADI), toxicology and extraction methods.

  15. Enabling Advanced Wind-Tunnel Research Methods Using the NASA Langley 12-Foot Low Speed Tunnel

    Science.gov (United States)

    Busan, Ronald C.; Rothhaar, Paul M.; Croom, Mark A.; Murphy, Patrick C.; Grafton, Sue B.; O-Neal, Anthony W.

    2014-01-01

    Design of Experiment (DOE) testing methods were used to gather wind tunnel data characterizing the aerodynamic and propulsion forces and moments acting on a complex vehicle configuration with 10 motor-driven propellers, 9 control surfaces, a tilt wing, and a tilt tail. This paper describes the potential benefits and practical implications of using DOE methods for wind tunnel testing - with an emphasis on describing how it can affect model hardware, facility hardware, and software for control and data acquisition. With up to 23 independent variables (19 model and 2 tunnel) for some vehicle configurations, this recent test also provides an excellent example of using DOE methods to assess critical coupling effects in a reasonable timeframe for complex vehicle configurations. Results for an exploratory test using conventional angle of attack sweeps to assess aerodynamic hysteresis are summarized, and DOE results are presented for an exploratory test used to set the data sampling time for the overall test. DOE results are also shown for one production test characterizing normal force in the Cruise mode for the vehicle.
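    A minimal sketch of the kind of run matrix DOE testing uses in place of one-factor-at-a-time sweeps: a two-level full-factorial design over a few hypothetical wind-tunnel factors. The factor names and levels are invented for illustration, and a real 19-variable test would use a fractional or optimal design rather than the full 2^k matrix.

```python
import itertools

# Hypothetical wind-tunnel factors with (low, high) levels; not from the paper.
factors = {
    "alpha_deg": (-4.0, 12.0),   # angle of attack
    "wing_tilt": (0.0, 40.0),    # tilt-wing angle
    "throttle":  (0.2, 0.9),     # propeller throttle fraction
}

# Every combination of low/high levels: 2^k runs for k factors.
runs = [dict(zip(factors, levels))
        for levels in itertools.product(*factors.values())]

print(f"{len(runs)} runs for {len(factors)} factors at 2 levels")
for run in runs[:3]:
    print(run)
```

    A full factorial at 2 levels lets main effects and all interaction (coupling) effects be estimated from the same small run set, which is the efficiency argument the paper makes for DOE over sequential sweeps.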

  16. Advancing density functional theory to finite temperatures: methods and applications in steel design.

    Science.gov (United States)

    Hickel, T; Grabowski, B; Körmann, F; Neugebauer, J

    2012-02-08

    The performance of materials such as steels, their high strength and formability, is based on an impressive variety of competing mechanisms on the microscopic/atomic scale (e.g. dislocation gliding, solid solution hardening, mechanical twinning or structural phase transformations). Whereas many of the currently available concepts to describe these mechanisms are based on empirical and experimental data, it becomes more and more apparent that further improvement of materials needs to be based on a more fundamental level. Recent progress for methods based on density functional theory (DFT) now makes the exploration of chemical trends, the determination of parameters for phenomenological models and the identification of new routes for the optimization of steel properties feasible. A major challenge in applying these methods to a true materials design is, however, the inclusion of temperature-driven effects on the desired properties. Therefore, a large range of computational tools has been developed in order to improve the capability and accuracy of first-principles methods in determining free energies. These combine electronic, vibrational and magnetic effects as well as structural defects in an integrated approach. Based on these simulation tools, one is now able to successfully predict mechanical and thermodynamic properties of metals with a hitherto not achievable accuracy.

  17. Advanced organic analysis and analytical methods development: FY 1995 progress report. Waste Tank Organic Safety Program

    International Nuclear Information System (INIS)

    Wahl, K.L.; Campbell, J.A.; Clauss, S.A.

    1995-09-01

    This report describes the work performed during FY 1995 by Pacific Northwest Laboratory in developing and optimizing analysis techniques for identifying organics present in Hanford waste tanks. The main focus was to provide a means for rapidly obtaining the most useful information concerning the organics present in tank waste, with minimal sample handling and with minimal waste generation. One major focus has been to optimize analytical methods for organic speciation. Select methods, such as atmospheric pressure chemical ionization mass spectrometry and matrix-assisted laser desorption/ionization mass spectrometry, were developed to increase the speciation capabilities, while minimizing sample handling. A capillary electrophoresis method was developed to improve separation capabilities while minimizing additional waste generation. In addition, considerable emphasis has been placed on developing a rapid screening tool, based on Raman and infrared spectroscopy, for determining organic functional group content when complete organic speciation is not required. This capability would allow for a cost-effective means to screen the waste tanks to identify tanks that require more specialized and complete organic speciation to determine tank safety

  18. Advanced Magnetic Materials Methods and Numerical Models for Fluidization in Microgravity and Hypogravity

    Science.gov (United States)

    Atwater, James; Wheeler, Richard, Jr.; Akse, James; Jovanovic, Goran; Reed, Brian

    2013-01-01

    To support long-duration manned missions in space such as a permanent lunar base, Mars transit, or Mars Surface Mission, improved methods for the treatment of solid wastes, particularly methods that recover valuable resources, are needed. The ability to operate under microgravity and hypogravity conditions is essential to meet this objective. The utilization of magnetic forces to manipulate granular magnetic media has provided the means to treat solid wastes under variable gravity conditions by filtration using a consolidated magnetic media bed followed by thermal processing of the solid wastes in a fluidized bed reactor. Non-uniform magnetic fields will produce a magnetic field gradient in a bed of magnetically susceptible media toward the distributor plate of a fluidized bed reactor. A correctly oriented magnetic field gradient will generate a downward-directed force on magnetic media that can substitute for gravitational force in microgravity, or which may augment low levels of gravity, such as on the Moon or Mars. This approach is termed Gradient Magnetically Assisted Fluidization (G-MAFB), in which the magnitude of the force on the fluidized media depends upon the intensity of the magnetic field (H), the intensity of the field gradient (dH/dz), and the magnetic susceptibility of the media. Fluidized beds based on the G-MAFB process can operate in any gravitational environment by tuning the magnetic field appropriately. Magnetic materials and methods have been developed that enable G-MAFB operation under variable gravity conditions.
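    As a rough sketch of the force balance behind G-MAFB, the Kelvin body force on weakly magnetic media, F/V = mu0 * chi * H * (dH/dz), can be compared with the gravitational body force rho*g. All numbers below (susceptibility, particle density, field and gradient) are illustrative assumptions, not values from the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def magnetic_force_density(chi, H, dH_dz):
    """Volumetric (Kelvin) force on weakly magnetic media in a field
    gradient: F/V = mu0 * chi * H * dH/dz, in N/m^3."""
    return MU0 * chi * H * dH_dz

# Illustrative numbers (assumed): a magnetite-doped granular medium and a
# candidate field/gradient pair oriented toward the distributor plate.
chi = 0.5        # dimensionless volume susceptibility
rho = 2500.0     # particle density, kg/m^3
H = 4.0e4        # field intensity, A/m
dH_dz = 1.0e6    # field gradient, A/m^2

f = magnetic_force_density(chi, H, dH_dz)  # N/m^3
g_equiv = f / rho                          # equivalent acceleration, m/s^2
print(f"force density = {f:.0f} N/m^3, equivalent g = {g_equiv:.2f} m/s^2")
```

    With these assumed values the magnetic body force is of the same order as Earth gravity, which is the sense in which tuning H and dH/dz can substitute for, or augment, gravity in the fluidized bed.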

  19. Advancing Reactive Tracer Methods for Measurement of Thermal Evolution in Geothermal Reservoirs: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell A. Plummer; Carl D. Palmer; Earl D. Mattson; Laurence C. Hull; George D. Redden

    2011-07-01

    The injection of cold fluids into engineered geothermal systems (EGS) and conventional geothermal reservoirs may be done to help extract heat from the subsurface or to maintain pressures within the reservoir (e.g., Rose et al., 2001). As these injected fluids move along fractures, they acquire heat from the rock matrix and remove it from the reservoir as they are extracted to the surface. A consequence of such injection is the migration of a cold-fluid front through the reservoir (Figure 1) that could eventually reach the production well and result in the lowering of the temperature of the produced fluids (thermal breakthrough). Efficient operation of an EGS, as well as of conventional geothermal systems involving cold-fluid injection, requires accurate and timely information about thermal depletion of the reservoir in response to operation. In particular, accurate predictions of the time to thermal breakthrough and the subsequent rate of thermal drawdown are necessary for reservoir management, design of fracture stimulation and well drilling programs, and forecasting of economic return. A potential method for estimating the migration of a cold front between an injection well and a production well is the application of reactive tracer tests, using a chemical whose rate of degradation depends on the reservoir temperature between the two wells (e.g., Robinson 1985). With repeated tests, the rate of migration of the thermal front can be determined, and the time to thermal breakthrough calculated. While the basic theory behind the concept of thermal tracers has been understood for some time, effective application of the method has yet to be demonstrated. This report describes results of a study that used several methods to investigate the application of reactive tracers to monitoring the thermal evolution of a geothermal reservoir. These methods included (1) mathematical investigation of the sensitivity of known and hypothetical reactive tracers, (2) laboratory testing of novel
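    The temperature sensitivity that makes a reactive tracer useful can be sketched with a first-order Arrhenius decay law: the fraction of tracer surviving the interwell transit grows as the reservoir cools, so repeated tests track the cold front. The rate parameters and residence time below are hypothetical, chosen only for illustration, not taken from the report.

```python
import math

R_GAS = 8.314  # gas constant, J/(mol*K)

def surviving_fraction(T_celsius, t_res, A=1.0e5, Ea=1.0e5):
    """Fraction of a first-order, thermally degrading tracer surviving a
    residence time t_res (s) at temperature T. The Arrhenius parameters
    A (1/s) and Ea (J/mol) are hypothetical, not from the report."""
    k = A * math.exp(-Ea / (R_GAS * (T_celsius + 273.15)))  # decay rate, 1/s
    return math.exp(-k * t_res)

t_res = 7 * 86400.0  # assumed one-week interwell residence time, s
fracs = {T: surviving_fraction(T, t_res) for T in (150.0, 175.0, 200.0)}
for T, frac in fracs.items():
    print(f"{T:.0f} C -> {frac:.3f} of injected tracer recovered")
```

    The monotone dependence of recovered fraction on temperature is what lets successive tracer tests be inverted for the thermal state of the fracture path between the wells.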

  20. First 3D thermal mapping of an active volcano using an advanced photogrammetric method

    Science.gov (United States)

    Antoine, Raphael; Baratoux, David; Lacogne, Julien; Lopez, Teodolina; Fauchard, Cyrille; Bretar, Frédéric; Arab-Sedze, Mélanie; Staudacher, Thomas; Jacquemoud, Stéphane; Pierrot-Deseilligny, Marc

    2014-05-01

    Thermal infrared data obtained in the [7-14 microns] spectral range are used in many Earth science disciplines. These studies are exclusively based on the analysis of 2D information. In this case, a quantitative analysis of the surface energy budget remains limited, as it may be difficult to estimate the radiative contribution of the topography, the thermal influence of winds on the surface or potential imprints of subsurface flows on the soil without a precise DEM. Draping a thermal image over a recent DEM is a common method to obtain a 3D thermal map of a surface. However, this method has many disadvantages: i) errors can be significant in the orientation process of the thermal images, due to the lack of tie points between the images and the DEM; ii) the use of a recent DEM implies the use of another remote sensing technique to quantify the topography; iii) finally, characterizing the evolution of a surface requires the simultaneous acquisition of thermal data and topographic information, which may be expensive in most cases. The stereophotogrammetry method allows the relief of an object to be reconstructed from photos taken from different positions. Recently, substantial progress has been made in the generation of high-spatial-resolution topographic surfaces using stereophotogrammetry. However, the presence of shadows, homogeneous textures and/or weak contrasts in the visible spectrum (e.g., flowing lavas, uniform lithologies) may prevent the use of such a method, because of the difficulty of finding tie points on each image. Such situations are more favorable in the thermal infrared spectrum, as any variation in the thermal properties or geometric orientation of the surfaces may induce temperature contrasts that are detectable with a thermal camera. Such cameras, usually built around an array sensor (focal plane array) and an optical device, have geometric characteristics similar to those of digital cameras.
Thus, it may be possible

  1. Advancing Research Methods to Detect Impact of Climate Change on Health in Grand'Anse, Haiti

    Science.gov (United States)

    Barnhart, S.; Coq, R. N.; Frederic, R.; DeRiel, E.; Camara, H.; Barnhart, K. R.

    2013-12-01

    Haiti is considered particularly vulnerable to the effects of climate change, but directly linking climate change to health effects is limited by the lack of robust data and the multiple determinants of health. Worsening storms and rising temperatures in this rugged country with high poverty are likely to adversely affect economic activity, population growth and other determinants of health. For the past two years, the Univ. of Washington has supported the public hospital in the department of Grand'Anse. Grand'Anse, a relatively contained region in SW Haiti with an area of 11,912 km², is predominantly rural with a population of 350,000 and is bounded to the south by peaks up to 2,347 m. Grand'Anse would serve as an excellent site to assess the interface between climate change and health. The Demographic and Health Survey (DHS) shows health status is low relative to other countries. Estimates of climate change for Jeremie, the largest city in Grand'Anse, predict that the mean monthly temperature will increase from 26.1 to 27.3 °C while mean monthly rainfall will decrease from 80.5 to 73.5 mm over the next 60 years. The potential impact of these changes ranges from threatening food security to greater mortality. Use of available secondary data, such as indicators of climate change and DHS health status, is not likely to offer sufficient resolution to detect positive or negative impacts of climate change on health. How might a mixed-methods approach incorporating secondary data and quantitative and qualitative survey data on climate, economic activity, health and determinants of health address the hypothesis: climate change does not adversely affect health? For example, in Haiti most women deliver at home. Maternal mortality is high at 350 deaths/100,000 deliveries. This compares to deliveries in facilities, where the median rate is less than 100/100,000. Thus, maternal mortality is closely linked to access to health care in this rugged mountainous country. Climate change

  2. An advanced method to assess the diet of free-ranging large carnivores based on scats.

    Directory of Open Access Journals (Sweden)

    Bettina Wachter

    Full Text Available BACKGROUND: The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. METHODOLOGY/PRINCIPAL FINDINGS: Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model where the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factor 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. CONCLUSION/SIGNIFICANCE: Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores.
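    The saturating regression for correction factor 1 can be sketched as a nonlinear least-squares fit. The feeding-trial numbers, the two-parameter model form and the starting values below are all hypothetical illustrations, not data or parameters from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical feeding-trial data: prey body mass (kg) vs. consumed prey
# mass needed to excrete one scat (kg). Values are illustrative only.
prey_mass = np.array([0.5, 1.0, 3.0, 10.0, 25.0, 60.0, 120.0])
mass_per_scat = np.array([0.32, 0.55, 1.1, 1.9, 2.4, 2.7, 2.8])

def saturating(b, a, k):
    """Consumed mass per scat rises with prey body mass b and levels off
    at an asymptote a; k sets how quickly the asymptote is approached."""
    return a * (1.0 - np.exp(-k * b))

params, _ = curve_fit(saturating, prey_mass, mass_per_scat, p0=(3.0, 0.05))
a, k = params
print(f"asymptote = {a:.2f} kg per scat, rate = {k:.3f} 1/kg")
```

    Unlike a linear regression, this form cannot predict ever-growing consumed mass per scat for very large prey, which is the biological argument the authors make for the exponential model.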

  3. Advances in Spectral Methods for UQ in Incompressible Navier-Stokes Equations

    KAUST Repository

    Le Maitre, Olivier

    2014-01-06

    In this talk, I will present two recent contributions to the development of efficient methodologies for uncertainty propagation in the incompressible Navier-Stokes equations. The first one concerns the reduced basis approximation of stochastic steady solutions, using Proper Generalized Decompositions (PGD). An Arnoldi problem is projected to obtain a low dimensional Galerkin problem. The construction then amounts to the resolution of a sequence of uncoupled deterministic Navier-Stokes like problem and simple quadratic stochastic problems, followed by the resolution of a low-dimensional coupled quadratic stochastic problem, with a resulting complexity which has to be contrasted with the dimension of the whole Galerkin problem for classical spectral approaches. An efficient algorithm for the approximation of the stochastic pressure field is also proposed. Computations are presented for uncertain viscosity and forcing term to demonstrate the effectiveness of the reduced method. The second contribution concerns the computation of stochastic periodic solutions to the Navier-Stokes equations. The objective is to circumvent the well-known limitation of spectral methods for long-time integration. We propose to directly determine the stochastic limit-cycles through the definition of its stochastic period and an initial condition over the cycle. A modified Newton method is constructed to compute iteratively both the period and initial conditions. Owing to the periodic character of the solution, and by introducing an appropriate time-scaling, the solution can be approximated using low-degree polynomial expansions with large computational saving as a result. The methodology is illustrated for the von-Karman flow around a cylinder with stochastic inflow conditions.

  4. Machine learning and statistical methods for the prediction of maximal oxygen uptake: recent advances.

    Science.gov (United States)

    Abut, Fatih; Akay, Mehmet Fatih

    2015-01-01

    Maximal oxygen uptake (VO2max) indicates how many milliliters of oxygen the body can consume in a state of intense exercise per minute. VO2max plays an important role in both sport and medical sciences for different purposes, such as indicating the endurance capacity of athletes or serving as a metric in estimating the disease risk of a person. In general, the direct measurement of VO2max provides the most accurate assessment of aerobic power. However, despite a high level of accuracy, practical limitations associated with the direct measurement of VO2max, such as the requirement of expensive and sophisticated laboratory equipment or trained staff, have led to the development of various regression models for predicting VO2max. Consequently, a lot of studies have been conducted in the last years to predict VO2max of various target audiences, ranging from soccer athletes, nonexpert swimmers, cross-country skiers to healthy-fit adults, teenagers, and children. Numerous prediction models have been developed using different sets of predictor variables and a variety of machine learning and statistical methods, including support vector machine, multilayer perceptron, general regression neural network, and multiple linear regression. The purpose of this study is to give a detailed overview about the data-driven modeling studies for the prediction of VO2max conducted in recent years and to compare the performance of various VO2max prediction models reported in related literature in terms of two well-known metrics, namely, multiple correlation coefficient (R) and standard error of estimate. The survey results reveal that with respect to regression methods used to develop prediction models, support vector machine, in general, shows better performance than other methods, whereas multiple linear regression exhibits the worst performance.
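    The model comparison the survey describes (R and SEE as metrics, support vector machine versus multiple linear regression) can be sketched on synthetic data. The predictor set, the linear data-generating relation and the model settings below are assumptions for illustration, not drawn from any of the surveyed studies.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical dataset (invented): age, BMI and resting heart rate as
# predictors of VO2max (mL/kg/min), with an assumed linear relation.
n = 300
X = np.column_stack([
    rng.uniform(15, 60, n),   # age, years
    rng.uniform(18, 35, n),   # BMI, kg/m^2
    rng.uniform(50, 90, n),   # resting heart rate, bpm
])
y = 80 - 0.4 * X[:, 0] - 0.8 * X[:, 1] - 0.15 * X[:, 2] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, model in [("MLR", LinearRegression()),
                    ("SVM", SVR(kernel="rbf", C=100.0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    r = float(np.corrcoef(y_te, pred)[0, 1])            # multiple correlation coefficient R
    see = float(np.sqrt(np.mean((y_te - pred) ** 2)))   # standard error of estimate
    results[name] = (r, see)
    print(f"{name}: R = {r:.3f}, SEE = {see:.2f} mL/kg/min")
```

    On real VO2max data the survey reports SVM generally outperforming the other methods; on this deliberately linear synthetic relation, the linear model is naturally competitive, so the sketch only illustrates the evaluation protocol, not the survey's ranking.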

  5. Advances in Spectral Methods for UQ in Incompressible Navier-Stokes Equations

    KAUST Repository

    Le Maitre, Olivier

    2014-01-01

    In this talk, I will present two recent contributions to the development of efficient methodologies for uncertainty propagation in the incompressible Navier-Stokes equations. The first concerns the reduced-basis approximation of stochastic steady solutions using Proper Generalized Decompositions (PGD). An Arnoldi problem is projected to obtain a low-dimensional Galerkin problem. The construction then amounts to the resolution of a sequence of uncoupled deterministic Navier-Stokes-like problems and simple quadratic stochastic problems, followed by the resolution of a low-dimensional coupled quadratic stochastic problem, with a resulting complexity that has to be contrasted with the dimension of the whole Galerkin problem for classical spectral approaches. An efficient algorithm for the approximation of the stochastic pressure field is also proposed. Computations are presented for an uncertain viscosity and forcing term to demonstrate the effectiveness of the reduced method. The second contribution concerns the computation of stochastic periodic solutions to the Navier-Stokes equations. The objective is to circumvent the well-known limitation of spectral methods for long-time integration. We propose to determine the stochastic limit cycles directly through the definition of their stochastic period and an initial condition over the cycle. A modified Newton method is constructed to compute both the period and the initial condition iteratively. Owing to the periodic character of the solution, and by introducing an appropriate time scaling, the solution can be approximated using low-degree polynomial expansions, with large computational savings as a result. The methodology is illustrated for the von Kármán flow around a cylinder with stochastic inflow conditions.

  6. 3-D inelastic analysis methods for hot section components. Volume 2: Advanced special functions models

    Science.gov (United States)

    Wilson, R. B.; Banerjee, P. K.

    1987-01-01

    This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Sections Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of computer codes that permit more accurate and efficient three-dimensional analyses of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components.

  7. Development of advanced methods for analysis of experimental data in diffusion

    Science.gov (United States)

    Jaques, Alonso V.

    There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions, which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration-dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed. These methods require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method to estimate the differentiation operation on the data, i.e., to estimate the concentration gradient term, which is important in the analysis process for determining the diffusivity. This approach, therefore, has the potential to be less subjective, and in numerical simulations it shows increased accuracy in the estimated diffusion coefficients. We also present a regression approach to estimating linear multicomponent diffusion coefficients that eliminates the need to pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. The equation for the analytical solution is reformulated in order to reduce the size of the problem and accelerate convergence. The objective function for the regression can incorporate point estimates of the error in the concentration, improving the statistical confidence in the estimated diffusivity matrix.
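
    The classical Boltzmann-Matano construction that this work builds on estimates D(C*) = -(1/2t) (dx/dC)|_{C*} ∫ x dC from a measured profile. The sketch below applies it to a synthetic, noiseless diffusion couple with a constant (concentration-independent) diffusivity and an error-function profile, so the recovered value can be checked against the known one; all numbers are illustrative.

```python
import math

D_true, t = 1e-13, 3600.0  # m^2/s and s -- hypothetical anneal conditions
L = 2.0 * math.sqrt(D_true * t)  # diffusion length scale

# Synthetic noiseless profile C(x) for a constant-D couple; by symmetry the
# Matano plane sits at x = 0 and C runs from 1 (x -> -inf) to 0 (x -> +inf)
xs = [(-5 + 10 * i / 2000) * L for i in range(2001)]
C = [0.5 * math.erfc(x / (2 * math.sqrt(D_true * t))) for x in xs]

def matano_D(c_star):
    """Boltzmann-Matano estimate D(C*) = -(1/2t)(dx/dC) * integral_0^{C*} x dC."""
    # index closest to c_star (C is monotonically decreasing in x)
    k = min(range(1, len(C) - 1), key=lambda i: abs(C[i] - c_star))
    # numerical concentration gradient term (central difference)
    dxdC = (xs[k + 1] - xs[k - 1]) / (C[k + 1] - C[k - 1])
    # trapezoidal integral of x dC, from C = 0 (large-x side) up to C*
    integral = 0.0
    for i in range(len(C) - 1, k, -1):
        integral += 0.5 * (xs[i] + xs[i - 1]) * (C[i - 1] - C[i])
    return -dxdC * integral / (2 * t)

print(matano_D(0.5) / D_true)  # close to 1 for a concentration-independent D
```

    On noisy experimental data the central-difference gradient above is exactly the step that becomes unstable, which is why the abstract replaces it with a regularized differentiation.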

  8. Advanced x-ray stress analysis method for a single crystal using different diffraction plane families

    International Nuclear Information System (INIS)

    Imafuku, Muneyuki; Suzuki, Hiroshi; Sueyoshi, Kazuyuki; Akita, Koichi; Ohya, Shin-ichi

    2008-01-01

    A generalized formula for the x-ray stress analysis of a single crystal with an unknown stress-free lattice parameter is proposed. This method enables the evaluation of plane-stress states with any combination of diffraction planes. The appropriate x-ray sources and diffraction plane families can be chosen and combined depending on the sample orientation and the apparatus, whenever the diffraction condition is satisfied. The analysis of plane-stress distributions in an iron single crystal is demonstrated by combining diffraction data for the Fe{211} and Fe{310} plane families.

  9. Roadmap on biosensing and photonics with advanced nano-optical methods

    KAUST Repository

    Di Fabrizio, Enzo M.

    2016-05-10

    This roadmap, through the contributions of ten groups worldwide, contains different techniques, methods and materials devoted to sensing in nanomedicine. Optics is used in different ways in the detection schemes. Raman, fluorescence and infrared spectroscopies, plasmonics, second harmonic generation and optical tweezers are all used in applications from single molecule detection (both in highly diluted and in highly concentrated solutions) to single cell manipulation. In general, each optical scheme, through device miniaturization and electromagnetic field localization, exploits an intrinsic optical enhancement mechanism in order to increase the sensitivity and selectivity of the device with respect to the complex molecular construct. The materials used for detection include nanoparticles and nanostructures fabricated with different 2D and 3D lithographic methods. It is shown that sensitivity to a single molecule is already accessible whether the system under study is a single cell or a multitude of cells in a molecular mixture. Throughout the roadmap there is an attempt to foresee and to suggest future directions in this interdisciplinary field. © 2016 IOP Publishing Ltd.

  10. Advances in the replacement and enhanced replacement method in QSAR and QSPR theories.

    Science.gov (United States)

    Mercader, Andrew G; Duchowicz, Pablo R; Fernández, Francisco M; Castro, Eduardo A

    2011-07-25

    The selection of an optimal set of molecular descriptors from a much greater pool of such regression variables is a crucial step in the development of QSAR and QSPR models. The aim of this work is to further improve this important selection process. For this reason, three different alternatives for the initial steps of our recently developed enhanced replacement method (ERM) and replacement method (RM) are proposed. These approaches had previously proven to yield near-optimal results with a much smaller number of linear regressions than the full search. The algorithms were tested on four different experimental data sets, formed by collections of 116, 200, 78, and 100 experimental records from different compounds and 1268, 1338, 1187, and 1306 molecular descriptors, respectively. The comparisons showed that one of the new alternatives further improves the ERM, which has been shown to be superior to genetic algorithms for the selection of an optimal set of molecular descriptors from a much greater pool. The new alternative also improves the simpler RM, which has a lower computational demand.
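
    The replacement method is, at its core, a greedy search over descriptor subsets: starting from some d-subset, each position is repeatedly swapped for the pool descriptor that most lowers the regression's residual standard deviation S. The sketch below is a simplified reconstruction under that reading, with a tiny synthetic descriptor pool (the real data sets have thousands of descriptors) and a plain least-squares fit.

```python
import itertools, math, random

random.seed(0)

# Hypothetical pool: 6 candidate "descriptors" for 20 "compounds"; the
# property depends only on descriptors 0 and 3 (plus small noise).
n, pool = 20, 6
X = [[random.gauss(0, 1) for _ in range(pool)] for _ in range(n)]
y = [2.0 * row[0] - 1.5 * row[3] + random.gauss(0, 0.1) for row in X]

def fit_sd(cols):
    """Least-squares fit of y on the chosen columns; returns the residual
    standard deviation S, the quantity the replacement method minimizes."""
    A = [[1.0] + [row[c] for c in cols] for row in X]  # with intercept
    m = len(A[0])
    # normal equations A^T A b = A^T y, solved by Gauss-Jordan elimination
    M = [[sum(A[i][r] * A[i][c] for i in range(n)) for c in range(m)] +
         [sum(A[i][r] * y[i] for i in range(n))] for r in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(m):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    b = [M[r][m] / M[r][r] for r in range(m)]
    res = [yi - sum(a * bi for a, bi in zip(Ai, b)) for Ai, yi in zip(A, y)]
    return math.sqrt(sum(e * e for e in res) / (n - m))

def replacement_method(d):
    """Greedy RM sketch: swap one descriptor at a time while S decreases."""
    current = list(range(d))
    improved = True
    while improved:
        improved = False
        for pos in range(d):
            best = min(range(pool),
                       key=lambda c: float('inf') if c in current and c != current[pos]
                       else fit_sd(current[:pos] + [c] + current[pos + 1:]))
            if best != current[pos]:
                current[pos] = best
                improved = True
    return sorted(current), fit_sd(current)

subset, S = replacement_method(2)
print(subset, round(S, 3))  # recovers the informative descriptors {0, 3}
```

    The paper's contribution concerns better choices for the *initial* subset of this loop; exhaustively trying all C(1300, d) subsets, by contrast, is what the greedy search avoids.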

  11. Development of advanced methods for planning electric energy distribution systems. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Goenen, T.; Foote, B.L.; Thompson, J.C.; Fagan, J.E.

    1979-10-01

    An extensive search was made to identify and collect reports published in the open literature that describe distribution planning methods and techniques. In addition, a questionnaire was prepared and sent to a large number of electric power utility companies. Many of these companies were visited and/or their distribution planners interviewed to identify and describe the distribution system planning methods and techniques used by these electric power utility companies and other commercial entities. Distribution system planning models were reviewed, and a set of new mixed-integer programming models was developed for the optimal expansion of distribution systems. The models help the planner to select: (1) optimum substation locations; (2) optimum substation expansions; (3) optimum substation transformer sizes; (4) optimum load transfers between substations; (5) optimum feeder routes and sizes, subject to a set of specified constraints. The models permit following existing rights-of-way and avoid areas where feeders and substations cannot be constructed. The results of computer runs were analyzed for adequacy in serving projected loads within regulation limits for both normal and emergency operation.

  12. Advances in industrial biopharmaceutical batch process monitoring: Machine-learning methods for small data problems.

    Science.gov (United States)

    Tulsyan, Aditya; Garvin, Christopher; Ündey, Cenk

    2018-04-06

    Biopharmaceutical manufacturing comprises multiple distinct processing steps that require effective and efficient monitoring of many variables simultaneously in real time. State-of-the-art real-time multivariate statistical batch process monitoring (BPM) platforms have been in use in recent years to ensure comprehensive monitoring is in place as a complementary tool for continued process verification to detect weak signals. This article addresses a longstanding, industry-wide problem in BPM, referred to as the "Low-N" problem, wherein a product has a limited production history. The current best industrial practice to address the Low-N problem is to switch from a multivariate to a univariate BPM until sufficient product history is available to build and deploy a multivariate BPM platform. Every batch run without a robust multivariate BPM platform poses the risk of not detecting potential weak signals developing in the process that might have an impact on process and product performance. In this article, we propose an approach to solve the Low-N problem by generating an arbitrarily large number of in silico batches through a combination of hardware exploitation and machine-learning methods. To the best of the authors' knowledge, this is the first article to provide a solution to the Low-N problem in biopharmaceutical manufacturing using machine-learning methods. Several industrial case studies from bulk drug substance manufacturing are presented to demonstrate the efficacy of the proposed approach for BPM under various Low-N scenarios. © 2018 Wiley Periodicals, Inc.
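
    The abstract does not spell out its generator, so the sketch below substitutes the simplest conceivable in-silico batch generator: fit a multivariate normal to the few historical batches and sample synthetic batches from it. The batch data, the two variables, and the generator choice are all hypothetical, not the authors' method.

```python
import math, random

random.seed(1)

# Hypothetical Low-N history: only 5 batches, 2 process variables each
# (say, titer and viable cell density at harvest) -- illustrative numbers.
batches = [[5.1, 12.0], [4.8, 11.5], [5.3, 12.4], [5.0, 11.9], [4.9, 11.7]]
n = len(batches)

mean = [sum(b[j] for b in batches) / n for j in range(2)]
cov = [[sum((b[i] - mean[i]) * (b[j] - mean[j]) for b in batches) / (n - 1)
        for j in range(2)] for i in range(2)]

# 2x2 Cholesky factor L with cov = L L^T, used to correlate the samples
l11 = math.sqrt(cov[0][0])
l21 = cov[1][0] / l11
l22 = math.sqrt(cov[1][1] - l21 ** 2)

def in_silico_batch():
    """Draw one synthetic batch from the fitted multivariate normal."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return [mean[0] + l11 * z1, mean[1] + l21 * z1 + l22 * z2]

synthetic = [in_silico_batch() for _ in range(1000)]
m0 = sum(s[0] for s in synthetic) / len(synthetic)
print(round(m0, 2))  # sample mean of variable 1, close to mean[0]
```

    An arbitrarily large pool of such synthetic batches can then back a multivariate monitoring model until enough real production history accumulates.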

  13. Roadmap on biosensing and photonics with advanced nano-optical methods

    KAUST Repository

    Di Fabrizio, Enzo M.; Schlücker, Sebastian; Wenger, Jérôme; Regmi, Raju; Rigneault, Hervé; Calafiore, Giuseppe; West, Melanie; Cabrini, Stefano; Fleischer, Monika; van Hulst, Niek F; Garcia-Parajo, Maria F; Pucci, Annemarie; Cojoc, Dan; Hauser, Charlotte; Ni, Ming

    2016-01-01

    This roadmap, through the contributions of ten groups worldwide, contains different techniques, methods and materials devoted to sensing in nanomedicine. Optics is used in different ways in the detection schemes. Raman, fluorescence and infrared spectroscopies, plasmonics, second harmonic generation and optical tweezers are all used in applications from single molecule detection (both in highly diluted and in highly concentrated solutions) to single cell manipulation. In general, each optical scheme, through device miniaturization and electromagnetic field localization, exploits an intrinsic optical enhancement mechanism in order to increase the sensitivity and selectivity of the device with respect to the complex molecular construct. The materials used for detection include nanoparticles and nanostructures fabricated with different 2D and 3D lithographic methods. It is shown that sensitivity to a single molecule is already accessible whether the system under study is a single cell or a multitude of cells in a molecular mixture. Throughout the roadmap there is an attempt to foresee and to suggest future directions in this interdisciplinary field. © 2016 IOP Publishing Ltd.

  14. Advances in digital technology and orthodontics: a reference to the Invisalign method.

    Science.gov (United States)

    Melkos, Aristides B

    2005-05-01

    Increased aesthetic demands during orthodontic treatment have resulted in several treatment alternatives. However, the need to avoid conventional fixed orthodontic appliances led, with the use of computer-aided scanning, imaging, and manufacturing technology, to the development of new therapy concepts such as Invisalign. The Invisalign orthodontic technique involves a series of clear removable appliances and has been applied to correct a variety of malocclusions. The Invisalign method is an aesthetic orthodontic option for many patients, but it is suited mainly to adults or adolescents who have a fully erupted dentition, and it has its indications and limitations. It handles simple to moderate non-extraction alignments better than mild to moderate extraction cases. The aligners are clear and therefore aesthetically ideal for the patient; they are comfortable to wear and, as they are removable, they provide simplicity of care and better oral hygiene. They also allow the evaluation of treatment options in detail before beginning treatment by using a virtual treatment model. It is also important to point out that this method has some disadvantages, which are associated with patient compliance, limited control over specific tooth movements, and additional documentation time. The Invisalign concept is an aesthetic alternative in orthodontic treatment, with advantages and disadvantages. It can be utilized to treat simple to moderate alignment cases, especially in adults, and serves as an additional part of the armamentarium of the orthodontist.

  15. An advanced three-phase physical, experimental and numerical method for tsunami induced boulder transport

    Science.gov (United States)

    Oetjen, Jan; Engel, Max; Prasad Pudasaini, Shiva; Schüttrumpf, Holger; Brückner, Helmut

    2017-04-01

    Coasts around the world are affected by high-energy wave events like storm surges or tsunamis, depending on their regional climatological and geological settings. Focusing on tsunami impacts, we combine the abilities and experience of different scientific fields, aiming at improved insight into near- and onshore tsunami hydrodynamics. We investigate the transport of coarse clasts - so-called boulders - due to tsunami impacts by a multi-methodology approach of numerical modelling, laboratory experiments, and sedimentary field records. Coupled numerical hydrodynamic and boulder transport models (BTM) are widely applied for analysing the impact characteristics of transport by tsunami, such as wave height and flow velocity. Numerical models able to simulate past tsunami events and the corresponding boulder transport patterns with high accuracy and acceptable computational effort can be utilized as powerful forecasting models predicting the impact of a coast-approaching tsunami. We have conducted small-scale physical experiments in a tilting flume with realistically shaped boulder models. Utilizing the structure-from-motion technique (Westoby et al., 2012), we reconstructed real boulders from a field study on the island of Bonaire (Lesser Antilles, Caribbean Sea; Engel & May, 2012). The obtained three-dimensional boulder meshes are utilized to create downscaled replicas of the real boulders for physical experiments. The results for the irregularly shaped boulders are compared to experiments with regularly shaped boulder models to achieve better insight into the shape-related influence on transport patterns. The numerical model is based on the general two-phase mass flow model by Pudasaini (2012), enhanced for boulder transport simulations. The boulder is implemented using the immersed boundary technique (Peskin, 2002) and the direct forcing approach. In this method, Cartesian grids (fluid and particle phase) and Lagrangian meshes (boulder) are combined. By applying the

  16. The Rheological Behavior of Multiphase Liquids Characterized by an Advanced Dilatometric Method

    Science.gov (United States)

    Helo, C. S.; Hess, K.; Potuzak, M.; Dingwell, D. B.

    2006-12-01

    Silicic volcanic rocks often contain a glassy groundmass. During formation of the glass, when the liquid line of descent intersects the glass transition interval, all magmatic processes (e.g. crystallization, nucleation of bubbles) are frozen in. A corresponding `glass transition temperature' Tg can be determined by measuring changes in the temperature dependence of enthalpy (H) or volume (V) of the sample during subsequent reheating. The glass transition temperature depends strongly on the cooling rate that the sample experienced when the glass matrix was formed. Especially for silica-rich rocks, the calorimetric analysis is usually hampered by the loss of the characteristic peaks in the time-derivative curves, due to the presence of phenocrysts, microlites, or bubbles. However, it is possible to extract two characteristic features from the dilatometric dV/dt curve (in contrast to dH/dt): a dilatometric `onset' and a `softening' temperature (Tg on and Tg soft, respectively). The method has been calibrated using a series of well-known synthetic melt compositions ranging in NBO/T (non-bridging oxygens NBO over tetrahedrally coordinated cations T) from 0 to 0.6. For these melts the viscosities at Tg on and Tg soft have been calculated. The viscosity at Tg on shows no dependence on composition at fixed cooling and heating rates, and the precision is comparable to conventional dilatometric and calorimetric measurements. The determination of Tg soft is even more precise (only approx. half the standard deviation) compared with the standard methods. Moreover, with one single measurement it is possible to determine directly the fragility of the melt, defined as the temperature dependence of the structural relaxation time at Tg on, by using a new parameter Fv = (Tg soft - Tg on) / Tg on. Measurements on selected pristine samples from several silicic volcanic centers have shown different trends. 
    Throughout repeated heating / cooling with fixed rates
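
    The fragility parameter defined in this abstract, Fv = (Tg soft - Tg on) / Tg on, is a simple ratio of the two dilatometric characteristic temperatures. A minimal sketch, with purely illustrative temperatures (not measured values from the study):

```python
# Hypothetical dilatometric characteristic temperatures (K) for two melts;
# the numbers are illustrative only -- they are not from the abstract.
melts = {"melt A": (950.0, 990.0),   # (Tg_on, Tg_soft)
         "melt B": (820.0, 885.0)}

for name, (tg_on, tg_soft) in melts.items():
    fv = (tg_soft - tg_on) / tg_on  # fragility parameter Fv from the abstract
    print(f"{name}: Fv = {fv:.4f}")
```

    A larger gap between softening and onset (relative to the onset temperature) yields a larger Fv, i.e. the single dilatometric run itself carries the fragility information.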

  17. Bayesian analysis of general failure data from an ageing distribution: advances in numerical methods

    International Nuclear Information System (INIS)

    Procaccia, H.; Villain, B.; Clarotti, C.A.

    1996-01-01

    EDF and ENEA carried out a joint research program to develop the numerical methods and computer codes needed for Bayesian analysis of component lives in the case of ageing. Early results of this study were presented at ESREL'94. Since then, the following further steps have been taken: input data have been generalized to the case where observed lives are censored both on the right and on the left; allowable life distributions are Weibull and gamma - their parameters are both unknown and can be statistically dependent; allowable priors are histograms relative to different parametrizations of the life distribution of concern; and first- and second-order moments of the posterior distributions can be computed. In particular, the covariance gives important information about the degree of statistical dependence between the parameters of interest. An application of the code to the appearance of stress corrosion cracking in a tube of the PWR steam generator system is presented. (authors)
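
    The ingredients named here (a Weibull life distribution, right-censored observations, a histogram prior, posterior moments) can be combined in a small grid-based sketch. The failure data, grid ranges, and uniform prior below are all hypothetical, and only right censoring is handled, unlike the two-sided censoring of the actual codes.

```python
import math

# Illustrative failure data (hours): observed failures plus right-censored lives
failures = [1200.0, 1800.0, 2400.0, 3100.0]
censored = [2000.0, 3500.0]

def log_like(beta, eta):
    """Weibull log-likelihood (shape beta, scale eta) with right censoring."""
    ll = 0.0
    for t in failures:
        ll += math.log(beta / eta) + (beta - 1) * math.log(t / eta) - (t / eta) ** beta
    for t in censored:
        ll += -(t / eta) ** beta  # survival-function contribution
    return ll

# Histogram (here: uniform) prior over a parameter grid
betas = [0.5 + 0.05 * i for i in range(71)]    # shape in [0.5, 4.0]
etas = [500.0 + 50.0 * j for j in range(121)]  # scale in [500, 6500]

mx = max(log_like(b, e) for b in betas for e in etas)  # for numerical stability
post, z = {}, 0.0
for b in betas:
    for e in etas:
        w = math.exp(log_like(b, e) - mx)  # unnormalized posterior weight
        post[(b, e)] = w
        z += w

# First-order posterior moments; second moments / covariance follow the same pattern
mean_beta = sum(b * w for (b, _), w in post.items()) / z
mean_eta = sum(e * w for (_, e), w in post.items()) / z
print(round(mean_beta, 2), round(mean_eta, 0))
```

    In this parametrization an ageing component corresponds to a shape parameter above 1 (increasing hazard rate), which is what such a posterior is interrogated for.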

  18. Assessment of Advanced Life Support competence when combining different test methods--reliability and validity

    DEFF Research Database (Denmark)

    Ringsted, C; Lippert, F; Hesselfeldt, R

    2007-01-01

    ... Cardiac Arrest Simulation Test (CASTest) scenarios for the assessments according to guidelines 2005. AIMS: To analyse the reliability and validity of the individual sub-tests provided by ERC and to find a combination of MCQ and CASTest that provides a reliable and valid single effect measure of ALS ... competence. METHODS: Two groups of participants were included in this randomised, controlled experimental study: a group of newly graduated doctors, who had not taken the ALS course (N=17), and a group of students, who had passed the ALS course 9 months before the study (N=16). Reliability in terms of inter... that possessed high reliability, equality of test sets, and ability to discriminate between the two groups of supposedly different ALS competence. CONCLUSIONS: ERC sub-tests of ALS competence possess sufficient reliability and validity. A combined ALS score with equal weighting of one MCQ and one CASTest can...

  19. Advanced nuclear power plant regulation using risk-informed and performance-based methods

    International Nuclear Information System (INIS)

    Modarres, Mohammad

    2009-01-01

    This paper proposes and discusses the implications of a largely probabilistic regulatory framework using best-estimate, goal-driven, risk-informed, and performance-based methods. This framework relies on continuous probabilistic assessment of the performance of a set of time-dependent, safety-critical systems, structures, components, and procedures that assure attainment of a broad set of overarching technology-neutral protective, mitigative, and preventive goals under all phases of plant operations. In this framework, acceptable levels of performance are set through formal apportionment so that they are commensurate with the overarching goals. Regulatory acceptance would be based on the confidence level with which the plant conforms to these goals and performance objectives. The proposed framework uses the traditional defense-in-depth design and operation regulatory philosophy when uncertainty in conforming to specific goals and objectives is high. Finally, the paper discusses the steps needed to develop a corresponding technology-neutral regulatory approach from the proposed framework.

  20. Advancement in shampoo (a dermal care product): preparation methods, patents and commercial utility.

    Science.gov (United States)

    Deeksha; Malviya, Rishabha; Sharma, Pramod K

    2014-01-01

    Shampoo is a cleaning aid for hair and is among the most rapidly evolving beauty products today. Modern shampoo products are of great importance as they clean the hair while providing the benefits of conditioning, smoothing, and good hair health, i.e. hair free of dandruff, dirt, grease, and lice. Various types of shampoos, classified by function, nature of ingredients, and special effects, are elaborated in this study. Generally, shampoos are evaluated in terms of physical appearance, detergency, surface tension, foam quality, pH, viscosity, percent solid content, flow properties, dirt dispersion, cleaning action, stability, and wetting time. Particular attention is paid to the patent literature, which provides broad knowledge related to shampoo. This article reviews the various aspects of shampoo in terms of preparation methods, patents, and commercial value.

  1. Advanced experimental method for a self-optimizing control system for a new energy conversion plant

    International Nuclear Information System (INIS)

    Vasiliev, V.V.

    1992-01-01

    The progress in the development and study of new methods of producing electric energy, based on direct conversion of heat, light, fuel, or chemical energy into electric energy, raises the problem of more effective use of their power characteristics. In this paper, disclosure is made of a self-optimizing control system for an object with a unimodal quality function. The system comprises the object, a divider, a band-pass filter, an averaging filter, a multiplier, a final control element, and an adder, and further includes a search signal generator. The method and the system are covered by patents in the USSR (No. 684510), the USA (No. 4179730), France (No. 2386854), Germany (No. 2814963), and Japan (No. 1369882).
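
    The components listed (search signal generator, multiplier, averaging filter, integrating final element) correspond to a classic extremum-seeking loop for a unimodal quality function. The sketch below is a generic discrete-time version with a hypothetical quadratic quality function, not the patented circuit: a small sinusoidal search signal perturbs the input, the output is demodulated by the same sinusoid, and the filtered product drives the input toward the optimum.

```python
import math

def quality(u):
    """Hypothetical unimodal quality function with its maximum at u* = 2.0."""
    return 5.0 - (u - 2.0) ** 2

u_hat, a, omega, gain, dt = 0.0, 0.1, 5.0, 0.8, 0.01
hp = 0.0  # state of a first-order filter tracking the slow mean of the output
for k in range(60000):
    t = k * dt
    dither = a * math.sin(omega * t)          # search signal generator
    y = quality(u_hat + dither)               # measured plant output
    hp += dt * 2.0 * (y - hp)                 # averaging filter (removes the mean)
    grad_est = (y - hp) * math.sin(omega * t) # multiplier: demodulated gradient
    u_hat += dt * gain * grad_est             # integrator climbs the quality curve

print(round(u_hat, 1))  # converges near the optimum 2.0
```

    Near the optimum the demodulated signal averages to zero, so the loop settles at the peak of the quality function without ever needing its analytic form.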

  2. Advances on the molecular characterization, clinical relevance, and detection methods of Gadiform parvalbumin allergens.

    Science.gov (United States)

    Fernandes, Telmo J R; Costa, Joana; Carrapatoso, Isabel; Oliveira, Maria Beatriz P P; Mafra, Isabel

    2017-10-13

    The Gadiform order includes several fish families, among which Gadidae and Merlucciidae comprise the most commercially important and highly appreciated fish species, such as cod, pollock, haddock, and hake. Parvalbumins, classified as calcium-binding proteins, are considered the main components involved in the majority of fish allergies. Nine and thirteen parvalbumins have been identified in different fish species from the Gadidae and Merlucciidae families, respectively. This review describes their molecular characterization and clinical relevance, as well as the prevalence of fish allergy. In addition, the main protein- and DNA-based methods to detect fish allergens are fully reviewed, owing to their importance in safeguarding sensitized/allergic individuals.

  3. Direct methods for limit and shakedown analysis of structures advanced computational algorithms and material modelling

    CERN Document Server

    Pisano, Aurora; Weichert, Dieter

    2015-01-01

    Articles in this book examine various materials and how to determine directly the limit state of a structure, in the sense of limit analysis and shakedown analysis. Apart from classical applications in mechanical and civil engineering contexts, the book reports on the emerging field of material design beyond the elastic limit, which has further industrial design and technological applications. Readers will discover that “Direct Methods” and the techniques presented here can in fact be used to numerically estimate the strength of structured materials such as composites or nano-materials, which represent fruitful fields of future applications.   Leading researchers outline the latest computational tools and optimization techniques and explore the possibility of obtaining information on the limit state of a structure whose post-elastic loading path and constitutive behavior are not well defined or well known. Readers will discover how Direct Methods allow rapid and direct access to requested information in...

  4. INVESTIGATIONS OF THE FLOW INTO A STORAGE TANK BY MEANS OF ADVANCED EXPERIMENTAL AND THEORETICAL METHODS

    DEFF Research Database (Denmark)

    Jordan, Ulrike; Shah, Louise Jivan; Furbo, Simon

    2003-01-01

    ... is to study the influence of the inlet device geometry and of the operating conditions (the flow rate, draw-off volume, and temperatures) on the thermal stratification in the tank. Measurements of the flow and temperature fields were carried out with two visualization techniques: - To visualize the flow field, a method called Particle Image Velocimetry (PIV) was applied. Particles with a size of 1 to 10 mm were seeded in the water and then illuminated by a laser within a narrow plane. In order to measure the three velocity components of the flow within the plane, the particle displacements between laser pulses... that the luminescence intensity depends on the water temperature, the temperature fields in the tank can be visualized and also be recorded with a camera. The measurements were compared with calculations of the flow and temperature fields carried out with the Computational Fluid Dynamics (CFD) tool Fluent. In future...

  5. Advances in developing a new test method to assess spray drift potential from air blast sprayers

    Energy Technology Data Exchange (ETDEWEB)

    Grella, M.; Gil, E.; Balsari, P.; Marucco, P.; Gallart, M.

    2017-07-01

    Drift is one of the most important issues to consider in achieving sustainable pesticide spraying. This study proposes and tests an alternative methodology for quantifying the drift potential (DP) of air blast sprayers, avoiding the difficulties faced in conducting field trials according to the standard protocol (ISO 22866:2005). For this purpose, an ad hoc test bench designed for comparative DP measurements was used. The proposed methodology was evaluated in terms of robustness, repeatability, and coherence by arranging a series of trials at two laboratories. Representative orchard and vineyard air blast sprayers in eight configurations (combinations of two forward speeds, two air fan flow rates, and two nozzle types) were tested. The test bench was placed perpendicular to the spray track to collect the fraction of spray liquid remaining in the air after the spray process and potentially susceptible to drifting out of the treated area. Downwind spray deposition curves were obtained, and a new approach was proposed to calculate an index value estimating DP that allows the differences among the tested configurations to be described. Results indicated that a forward speed of 1.67 m/s allows better discrimination among the configurations tested. The highest DP reduction, over 87.5%, was achieved using the TVI nozzles in combination with a low air fan flow rate in both laboratories; conversely, the highest DP value was obtained with the ATR nozzles in combination with a high air fan flow rate. Although the proposed method shows promising potential to evaluate the drift potential of different sprayer and nozzle types used for bush and tree crops, further research and tests are necessary to improve and validate it.

  6. Method and Process Development of Advanced Atmospheric Plasma Spraying for Thermal Barrier Coatings

    Science.gov (United States)

    Mihm, Sebastian; Duda, Thomas; Gruner, Heiko; Thomas, Georg; Dzur, Birger

    2012-06-01

    Over the last few years, global economic growth has triggered a dramatic increase in the demand for resources, resulting in a steady rise in prices for energy and raw materials. In the gas turbine manufacturing sector, process optimization of cost-intensive production steps offers a heightened potential for savings and forms the basis for securing future competitive advantages in the market. In this context, the atmospheric plasma spraying (APS) process for thermal barrier coatings (TBC) has been optimized. A constraint on the optimization of the APS coating process is the use of the existing coating equipment. Furthermore, the current coating quality and characteristics must not change, so as to avoid new qualification and testing. Using experience in APS and empirically gained data, the process optimization plan included variation of, e.g., the plasma gas composition and flow rate, the electrical power, the arrangement and angle of the powder injectors in relation to the plasma jet, the grain size distribution of the spray powder, and the plasma torch movement procedures, such as spray distance, offset, and iteration. In particular, plasma properties (enthalpy, velocity, and temperature), powder injection conditions (injection point, injection speed, grain size, and distribution), and the coating lamination (coating pattern and spraying distance) are examined. The optimized process and the resulting coating were compared to the current situation using several diagnostic methods. The improved process significantly reduces costs and achieves the requirement of comparable coating quality. Furthermore, a contribution was made towards a better comprehension of the APS of ceramics and the definition of a better method for future process developments.

  7. Advanced quantitative methods in correlating sarcopenic muscle degeneration with lower extremity function biometrics and comorbidities.

    Science.gov (United States)

    Edmunds, Kyle; Gíslason, Magnús; Sigurðsson, Sigurður; Guðnason, Vilmundur; Harris, Tamara; Carraro, Ugo; Gargiulo, Paolo

    2018-01-01

    Sarcopenic muscular degeneration has been consistently identified as an independent risk factor for mortality in aging populations. Recent investigations have realized the quantitative potential of computed tomography (CT) image analysis to describe skeletal muscle volume and composition; however, the optimum approach to assessing these data remains debated. Current literature reports average Hounsfield unit (HU) values and/or segmented soft tissue cross-sectional areas to investigate muscle quality. However, standardized methods for CT analyses and their utility as a comorbidity index remain undefined, and no existing studies compare these methods to the assessment of entire radiodensitometric distributions. The primary aim of this study was to present a comparison of nonlinear trimodal regression analysis (NTRA) parameters of entire radiodensitometric muscle distributions against extant CT metrics and their correlation with lower extremity function (LEF) biometrics (normal/fast gait speed, timed up-and-go, and isometric leg strength) and biochemical and nutritional parameters, such as total solubilized cholesterol (SCHOL) and body mass index (BMI). Data were obtained from 3,162 subjects, aged 66-96 years, from the population-based AGES-Reykjavik Study. 1-D k-means clustering was employed to discretize each biometric and comorbidity dataset into twelve subpopulations, in accordance with Sturges' Formula for Class Selection. Dataset linear regressions were performed against eleven NTRA distribution parameters and standard CT analyses (fat/muscle cross-sectional area and average HU value). Parameters from NTRA and CT standards were analogously assembled by age and sex. Analysis of specific NTRA parameters with standard CT results showed linear correlation coefficients greater than 0.85, but multiple regression analysis of correlative NTRA parameters yielded a correlation coefficient of 0.99. These results relate NTRA parameters to LEF biometrics, SCHOL, and BMI, and particularly highlight the value of assessing entire radiodensitometric distributions.
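    The discretization step described above can be sketched with a plain 1-D k-means (Lloyd's) loop, with the class count given by one common rounding of Sturges' formula. All function names and the sample values below are illustrative, not taken from the study's code.

    ```python
    import math
    import random

    def sturges_classes(n):
        # One common rounding of Sturges' formula, k = 1 + log2(n);
        # for the 3,162 AGES-Reykjavik subjects this gives 12 classes.
        return int(1 + math.log2(n))

    def kmeans_1d(values, k, iters=100, seed=0):
        """Discretize a 1-D dataset into k subpopulations via Lloyd's
        algorithm (illustrative sketch, not the study's implementation)."""
        rng = random.Random(seed)
        centers = sorted(rng.sample(values, k))
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for v in values:
                nearest = min(range(k), key=lambda j: abs(v - centers[j]))
                clusters[nearest].append(v)
            updated = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
            if updated == centers:      # converged
                break
            centers = updated
        return centers

    print(sturges_classes(3162))  # -> 12
    ```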

  8. Advancing New 3D Seismic Interpretation Methods for Exploration and Development of Fractured Tight Gas Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    James Reeves

    2005-01-31

    In a study funded by the U.S. Department of Energy and GeoSpectrum, Inc., new P-wave 3D seismic interpretation methods to characterize fractured gas reservoirs are developed. A data-driven exploratory approach is used to determine empirical relationships for reservoir properties. Fractures are predicted using seismic lineament mapping through a series of horizon and time slices in the reservoir zone. A seismic lineament is a linear feature seen in a slice through the seismic volume that has negligible vertical offset. We interpret that in regions of high seismic lineament density there is a greater likelihood of fractured reservoir. Seismic AVO attributes are developed to map brittle reservoir rock (low clay) and gas content. Brittle rocks are interpreted to be more fractured when seismic lineaments are present. The most important attribute developed in this study is the gas-sensitive phase gradient (a new AVO attribute), as reservoir fractures may provide a plumbing system for both water and gas. Success is obtained when economic gas and oil discoveries are found. In a gas field previously plagued with poor drilling results, four new wells were spotted using the new methodology and recently drilled. The wells have estimated best-12-months production indicators of 2106, 1652, 941, and 227 MCFGPD. The latter well was drilled in a region of swarming seismic lineaments but has poor gas-sensitive phase gradient (AVO) and clay volume attributes. GeoSpectrum advised the unit operators that this location did not appear to have significant Lower Dakota gas before the well was drilled. The other three wells are considered good wells in this part of the basin and among the best wells in the area. These new drilling results have nearly doubled the gas production and the value of the field. The interpretation method is ready for commercialization and for gas exploration and development. The new technology is adaptable to conventional, lower-cost 3D seismic surveys.

  9. Advanced methods for the analysis, design, and optimization of SMA-based aerostructures

    International Nuclear Information System (INIS)

    Hartl, D J; Lagoudas, D C; Calkins, F T

    2011-01-01

    Engineers continue to apply shape memory alloys to aerospace actuation applications due to their high energy density, robust solid-state actuation, and silent and shock-free operation. Past design and development of such actuators relied on experimental trial and error and empirically derived graphical methods. Over the last two decades, however, it has been repeatedly demonstrated that existing SMA constitutive models can capture stabilized SMA transformation behaviors with sufficient accuracy. This work builds upon past successes and suggests a general framework by which predictive tools can be used to assess the responses of many possible design configurations in an automated fashion. By applying methods of design optimization, it is shown that the integrated implementation of appropriate analysis tools can guide engineers and designers to the best design configurations. A general design optimization framework is proposed for the consideration of any SMA component or assembly of such components that applies when the set of design variables includes many members. This is accomplished by relying on commercially available software and utilizing tools already well established in the design optimization community. Such tools are combined with finite element analysis (FEA) packages that consider a multitude of structural effects. The foundation of this work is a three-dimensional thermomechanical constitutive model for SMAs applicable for arbitrarily shaped bodies. A reduced-order implementation also allows computationally efficient analysis of structural components such as wires, rods, beams and shells. The use of multiple optimization schemes, the consideration of assembled components, and the accuracy of the implemented constitutive model in full and reduced-order forms are all demonstrated

  10. NERI PROJECT 99-119. TASK 1. ADVANCED CONTROL TOOLS AND METHODS. FINAL REPORT

    Energy Technology Data Exchange (ETDEWEB)

    March-Leuba, J.A.

    2002-09-09

    Nuclear plants of the 21st century will employ higher levels of automation and fault tolerance to increase availability, reduce accident risk, and lower operating costs. Key developments in control algorithms, fault diagnostics, fault tolerance, and communication in a distributed system are needed to implement the fully automated plant. Equally challenging will be integrating developments in separate information and control fields into a cohesive system, which collectively achieves the overall goals of improved performance, safety, reliability, maintainability, and cost-effectiveness. Under the Nuclear Energy Research Initiative (NERI), the U.S. Department of Energy is sponsoring a project to address some of the technical issues involved in meeting the long-range goal of 21st century reactor control systems. This project, "A New Paradigm for Automated Development of Highly Reliable Control Architectures for Future Nuclear Plants," involves researchers from Oak Ridge National Laboratory, the University of Tennessee, and North Carolina State University. This paper documents a research effort to develop methods for automated generation of control systems that can be traced directly to the design requirements. Our final goal is to allow the designer to specify only high-level requirements and stress factors that the control system must survive (e.g., a list of transients, or a requirement to withstand a single failure). To this end, the "control engine" automatically selects and validates control algorithms and parameters that are optimized to the current state of the plant, and that have been tested under the prescribed stress factors. The control engine then automatically generates the control software from validated algorithms. Examples of stress factors that the control system must "survive" are: transient events (e.g., set-point changes, or expected occurrences such as a load rejection), and postulated

  11. Advanced validation of CFD-FDTD combined method using highly applicable solver for reentry blackout prediction

    International Nuclear Information System (INIS)

    Takahashi, Yusuke

    2016-01-01

    An analysis model of plasma flow and electromagnetic waves around a reentry vehicle for radio frequency blackout prediction during aerodynamic heating was developed in this study. The model was validated based on experimental results from the radio attenuation measurement program. The plasma flow properties, such as electron number density, in the shock layer and wake region were obtained using a newly developed unstructured grid solver that incorporated real gas effect models and could treat thermochemically non-equilibrium flow. To predict the electromagnetic waves in plasma, a frequency-dependent finite-difference time-domain method was used. Moreover, the complicated behaviour of electromagnetic waves in the plasma layer during atmospheric reentry was clarified at several altitudes. The prediction performance of the combined model was evaluated with profiles and peak values of the electron number density in the plasma layer. In addition, to validate the models, the signal losses measured during communication with the reentry vehicle were directly compared with the predicted results. Based on the study, it was suggested that the present analysis model accurately predicts radio frequency blackout and the attenuation of electromagnetic waves by the plasma layer during communication. (paper)

  12. Recent advances in thermoluminescence and photostimulated luminescence detection methods for irradiated foods

    International Nuclear Information System (INIS)

    Sanderson, D.C.W.; Carmichael, L.A.; Naylor, J.D.

    1996-01-01

    Thermoluminescence (TL) and photostimulated luminescence (PSL) are radiation-specific phenomena resulting from energy storage by trapped charge carriers in dielectric materials following irradiation. Releasing such stored energy by thermal or optical stimulation can result in detectable luminescence emission during the relaxation processes which follow. These approaches can be applied to inorganic components present either as inherent parts of foods or as adhering contaminants, and to bio-inorganic systems. The strengths of these techniques lie in their radiation specificity and the wide range of sample types which may be analysed. The Scottish Universities Research and Reactor Centre (SURRC) has been involved in the development and application of luminescence methods since 1986, during which time over 4000 analyses of more than 800 different food samples have been performed for research purposes, or in support of UK food labelling regulations. This paper discusses the present scope of luminescence techniques, identifies areas where recent work has extended the range of applications, and indicates areas where further investigations may be worthwhile. (author)

  13. Advanced Neutron Detection Methods: new Tools for Countering Nuclear Terrorism (412th Brookhaven Lecture)

    International Nuclear Information System (INIS)

    Vanier, Peter

    2006-01-01

    Acts of terrorism have become almost daily occurrences in the international news. Yet one of the most feared types of terrorism - nuclear terrorism - has not yet happened. One important way of preventing nuclear terrorism is to safeguard nuclear materials, and many people worldwide work continuously to achieve that goal. A second, vital defense is being developed: greatly improved methods of detecting material that a nuclear terrorist would need so that timely discovery of the material could become more probable. Special nuclear materials can emit neutrons, either spontaneously or when excited by a source of high-energy gamma rays, such as an electron accelerator. Traditional neutron detectors can sense these neutrons, but not the direction from which the neutrons come, or their energy. The odds against finding smuggled nuclear materials using conventional detectors are great. However, innovative designs of detectors are producing images that show the locations and even the shapes of man-made neutron sources, which stand out against the uniform background produced by cosmic rays. With the new detectors, finding needles in haystacks - or smuggled nuclear materials in a huge container among thousands of others in a busy port - suddenly becomes possible.

  14. Nucleic acid hybridization assays employing dA-tailed capture probes. II. Advanced multiple capture methods

    International Nuclear Information System (INIS)

    Hunsaker, W.R.; Badri, H.; Lombardo, M.; Collins, M.L.

    1989-01-01

    A fourth capture is added to the reversible target capture (RTC) procedure. This results in an improved radioisotopic detection limit of 7.3 x 10^-21 mol of target. In addition, the standard triple capture method is converted into a nonradioactive format with a detection limit of under 1 amol of target. The principal advantage of nonradioactive detection is that the entire assay can be performed in about 1 h. Nucleic acids are released from cells in the presence of the capture probe, which contains a 3'-poly(dA) sequence, and the labeled probe, which contains a detectable nonradioactive moiety such as biotin. After a brief hybridization in solution, the target is captured on oligo(dT) magnetic particles. The target is further purified from sample impurities and excess labeled probe by recapture either once or twice more on fresh magnetic particles. The highly purified target is then concentrated to 200 nl by recapture onto a poly(dT) nitrocellulose filter and rapidly detected with streptavidin-alkaline phosphatase using bromochloroindolyl phosphate and nitroblue tetrazolium. Using this procedure, as little as 0.25 amol of a target plasmid has been detected nonradioactively in crude samples in just 1 h without prior purification of the DNA and RNA. Finally, a new procedure called background capture is introduced to complement the background-reducing power of RTC

  15. On Advanced Control Methods toward Power Capture and Load Mitigation in Wind Turbines

    Institute of Scientific and Technical Information of China (English)

    Yuan Yuan; Jiong Tang

    2017-01-01

    This article provides a survey of recently emerged methods for wind turbine control. Multivariate control approaches to the optimization of power capture and the reduction of loads in components under time-varying turbulent wind fields have been under extensive investigation in recent years. We divide the related research activities into three categories: modeling and dynamics of wind turbines, active control of wind turbines, and passive control of wind turbines. Regarding turbine dynamics, we discuss the physical fundamentals and present the aeroelastic analysis tools. Regarding active control, we review pitch control, torque control, and yaw control strategies encompassing mathematical formulations as well as their applications toward different objectives. Our survey mostly focuses on blade pitch control, which is considered one of the key elements in facilitating load reduction while maintaining power capture performance. Regarding passive control, we review techniques such as tuned mass dampers, smart rotors, and microtabs. Possible future directions are suggested.
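    As an illustration of the blade pitch control the survey emphasizes, the sketch below implements a minimal collective-pitch PI loop that regulates generator speed to its rated value in above-rated winds. The gains, limits and rated speed are hypothetical placeholders; real controllers add gain scheduling, filtering, and coordination with torque control.

    ```python
    class CollectivePitchPI:
        """Minimal collective blade-pitch PI loop for above-rated operation.

        Regulates generator speed to its rated value by pitching the blades.
        Gains and limits here are illustrative, not taken from the survey.
        """

        def __init__(self, rated_speed, kp=0.02, ki=0.008,
                     pitch_min=0.0, pitch_max=90.0):
            self.rated = rated_speed
            self.kp, self.ki = kp, ki
            self.lo, self.hi = pitch_min, pitch_max
            self.integral = 0.0

        def step(self, gen_speed, dt):
            err = gen_speed - self.rated        # positive -> overspeed
            self.integral += err * dt
            cmd = self.kp * err + self.ki * self.integral
            # Saturate the pitch command, with simple anti-windup:
            # back out the integral contribution when saturated.
            if cmd > self.hi:
                cmd = self.hi
                self.integral -= err * dt
            elif cmd < self.lo:
                cmd = self.lo
                self.integral -= err * dt
            return cmd
    ```

    In use, an overspeed (wind gust) drives the pitch command positive to shed aerodynamic load, while an underspeed keeps the blades at fine pitch (the lower limit) to maintain power capture.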

  16. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Science.gov (United States)

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William

    2017-09-01

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration with DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions, while AM only identifies one mode. The application suggests that DREAM is well suited to calibrating complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, residual analysis justifies the assumptions of the error model used in the Bayesian calibration. The result indicates that a heteroscedastic, correlated Gaussian error model is appropriate for the problem, and the consequent likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
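    The difference between AM and DREAM hinges on how proposals are generated. A minimal differential-evolution MCMC sweep (the core idea DREAM builds on) can be sketched as follows; this is an illustrative toy, not the authors' implementation, and the full DREAM algorithm adds subspace sampling, crossover, and outlier-chain handling.

    ```python
    import math
    import random

    def demc_step(chains, log_post, gamma=None, eps=1e-6, rng=random):
        """One differential-evolution MCMC sweep over all chains.

        Each chain proposes a jump along the difference of two randomly
        chosen other chains, so the proposal scale and orientation adapt
        to the posterior automatically.
        """
        n, d = len(chains), len(chains[0])
        if gamma is None:
            gamma = 2.38 / math.sqrt(2 * d)   # standard DE-MC jump scale
        for i in range(n):
            a, b = rng.sample([j for j in range(n) if j != i], 2)
            proposal = [chains[i][k]
                        + gamma * (chains[a][k] - chains[b][k])
                        + rng.gauss(0.0, eps)
                        for k in range(d)]
            # Metropolis accept/reject on the log-posterior.
            if math.log(rng.random()) < log_post(proposal) - log_post(chains[i]):
                chains[i] = proposal
        return chains

    # Toy usage: 8 chains sampling a 1-D standard normal posterior.
    rng = random.Random(42)
    chains = [[rng.uniform(-3.0, 3.0)] for _ in range(8)]
    for _ in range(1000):
        chains = demc_step(chains, lambda th: -th[0] ** 2 / 2.0, rng=rng)
    ```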

  17. Advanced cutting, welding and inspection methods for vacuum vessel assembly and maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Jones, L. E-mail: jonesl@ipp.mgg.de; Alfile, J.-P.; Aubert, Ph.; Punshon, C.; Daenner, W.; Kujanpaeae, V.; Maisonnier, D.; Serre, M.; Schreck, G.; Wykes, M

    2000-11-01

    ITER requires a 316L stainless steel, double-skinned vacuum vessel (VV), each shell being 60 mm thick. EFDA (European Fusion Development Agreement) is investigating methods to be used for performing welding and NDT during VV assembly and also cutting and re-welding for remote sector replacement, including the development of an Intersector Welding Robot (IWR) [Jones et al. This conference]. To reduce the welding time, distortions and residual stresses of conventional welding, previous work concentrated on CO2 laser welding and cutting processes [Jones et al. Proc. Symp. Fusion Technol., Marseilles, 1998]. Nd:YAG laser now provides the focus for welding of the rearside root and for completing the weld for overhead positions with multipass filling. Electron beam (E-beam) welding with local vacuum offers a single pass for most of the weld depth except for overhead positions. Plasma cutting has shown the capability to contain the backside dross, and preliminary work with Nd:YAG laser cutting has shown good results. Automated ultrasonic inspection of assembly welds will be improved by the use of a phased array probe system that can focus the beam for accurate flaw location and sizing. This paper describes the recent results of process investigations in this R and D programme, involving five European sites and forming part of the overall VV/blanket research effort [W. Daenner et al. This conference].

  18. An advanced boundary element method (BEM) implementation for the forward problem of electromagnetic source imaging

    International Nuclear Information System (INIS)

    Akalin-Acar, Zeynep; Gencer, Nevzat G

    2004-01-01

    The forward problem of electromagnetic source imaging has two components: a numerical model to solve the related integral equations and a model of the head geometry. This study is on the boundary element method (BEM) implementation for numerical solutions and realistic head modelling. The use of second-order (quadratic) isoparametric elements and the recursive integration technique increase the accuracy in the solutions. Two new formulations are developed for the calculation of the transfer matrices to obtain the potential and magnetic field patterns using realistic head models. The formulations incorporate the use of the isolated problem approach for increased accuracy in solutions. If a personal computer is used for computations, each transfer matrix is calculated in 2.2 h. After this pre-computation period, solutions for arbitrary source configurations can be obtained in milliseconds for a realistic head model. A hybrid algorithm that uses snakes, morphological operations, region growing and thresholding is used for segmentation. The scalp, skull, grey matter, white matter and eyes are segmented from the multimodal magnetic resonance images and meshes for the corresponding surfaces are created. A mesh generation algorithm is developed for modelling the intersecting tissue compartments, such as eyes. To obtain more accurate results quadratic elements are used in the realistic meshes. The resultant BEM implementation provides more accurate forward problem solutions and more efficient calculations. Thus it can be the firm basis of the future inverse problem solutions

  19. Advanced DNA-Based Point-of-Care Diagnostic Methods for Plant Diseases Detection

    Directory of Open Access Journals (Sweden)

    Han Yih Lau

    2017-12-01

    Diagnostic technologies for the detection of plant pathogens with point-of-care capability and high multiplexing ability are an essential tool in the fight to reduce the large agricultural production losses caused by plant diseases. The main desirable characteristics for such diagnostic assays are high specificity, sensitivity, reproducibility, speed, cost efficiency and high-throughput multiplex detection capability. This article describes and discusses various DNA-based point-of-care diagnostic methods for applications in plant disease detection. Polymerase chain reaction (PCR) is the most common DNA amplification technology used for detecting various plant and animal pathogens. However, subsequent to PCR-based assays, several types of nucleic acid amplification technologies have been developed to achieve higher sensitivity, rapid detection and suitability for field applications, such as loop-mediated isothermal amplification, helicase-dependent amplification, rolling circle amplification, recombinase polymerase amplification, and molecular inversion probes. The principles behind these technologies have been thoroughly discussed in several review papers; herein we emphasize the application of these technologies to detect plant pathogens by outlining the advantages and disadvantages of each technology in detail.

  20. Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods

    Science.gov (United States)

    Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan

    2016-01-01

    The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.
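    The case for distributed testing can be illustrated with a toy lack-of-fit check: a straight-line model fitted to data from a hypothetical mildly quadratic response looks perfect under a clustered plan but leaves large structured residuals under a distributed plan with the same number of runs. This is a sketch of the statistical argument only, not the paper's simulation code.

    ```python
    def fit_line(xs, ys):
        """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
        n = len(xs)
        xbar, ybar = sum(xs) / n, sum(ys) / n
        sxx = sum((x - xbar) ** 2 for x in xs)
        sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
        b = sxy / sxx
        return ybar - b * xbar, b

    def max_abs_residual(xs, ys):
        a, b = fit_line(xs, ys)
        return max(abs(y - (a + b * x)) for x, y in zip(xs, ys))

    def truth(x):
        # Hypothetical response with hidden curvature the line model misses.
        return 1.0 + 0.5 * x + 0.2 * x ** 2

    clustered = [0.0] * 6 + [8.0] * 6                # 12 runs at only 2 settings
    distributed = [8.0 * i / 11 for i in range(12)]  # 12 runs spread out

    # A straight line passes exactly through both clusters, so the clustered
    # plan cannot flag the curvature; the distributed plan leaves large
    # structured residuals that reveal the lack of fit.
    r_clustered = max_abs_residual(clustered, [truth(x) for x in clustered])
    r_distributed = max_abs_residual(distributed, [truth(x) for x in distributed])
    ```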

  1. Advancing internal erosion monitoring using seismic methods in field and laboratory studies

    Science.gov (United States)

    Parekh, Minal L.

    This dissertation presents research involving laboratory and field investigation of passive and active methods for monitoring and assessing earthen embankment infrastructure such as dams and levees. Internal erosion occurs as soil particles in an earthen structure migrate to an exit point under seepage forces. This process is a primary failure mode for dams and levees. Current dam and levee monitoring practices are not able to identify early stages of internal erosion, and often the result is loss of structure utility and costly repairs. This research contributes to innovations for detection and monitoring by studying internal erosion and monitoring through field experiments, laboratory experiments, and social and political framing. The field research in this dissertation included two studies (2009 and 2012) of a full-scale earthen embankment at the IJkdijk in the Netherlands. In both of these tests, internal erosion occurred as evidenced by seepage followed by sand traces and boils, and in 2009, eventual failure. With the benefit of arrays of closely spaced piezometers, pore pressure trends indicated internal erosion near the initiation time. Temporally and spatially dense pore water pressure measurements detected two pore water pressure transitions characteristic to the development of internal erosion, even in piezometers located away from the backward erosion activity. At the first transition, the backward erosion caused anomalous pressure decrease in piezometers, even under constant or increasing upstream water level. At the second transition, measurements stabilized as backward erosion extended further upstream of the piezometers, as shown in the 2009 test. The transitions provide an indication of the temporal development and the spatial extent of backward erosion. The 2012 IJkdijk test also included passive acoustic emissions (AE) monitoring. This study analyzed AE activity over the course of the 7-day test using a grid of geophones installed on the
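    The first pore-pressure transition described above, an anomalous pressure decrease in a piezometer while the upstream water level holds or rises, could be flagged with a sketch like the following; the window length and the synthetic series are illustrative assumptions, not IJkdijk data.

    ```python
    def flag_erosion_onset(upstream, pore, window=3):
        """Return sample indices where pore pressure falls over `window`
        samples while the upstream water level holds or rises -- the first
        transition described for backward-erosion monitoring."""
        flags = []
        for t in range(window, len(pore)):
            up_rising = upstream[t] >= upstream[t - window]
            pp_falling = pore[t] < pore[t - window]
            if up_rising and pp_falling:
                flags.append(t)
        return flags

    # Synthetic series: upstream level rises steadily while pore pressure
    # peaks and then anomalously decreases.
    upstream = [0.1 * i for i in range(10)]
    pore = [1.0, 1.1, 1.2, 1.3, 1.25, 1.2, 1.1, 1.0, 0.9, 0.8]
    print(flag_erosion_onset(upstream, pore))  # -> [6, 7, 8, 9]
    ```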

  2. Sensitivity analysis of exergy destruction in a real combined cycle power plant based on advanced exergy method

    International Nuclear Information System (INIS)

    Boyaghchi, Fateme Ahmadi; Molaie, Hanieh

    2015-01-01

    Highlights: • The advanced exergy destruction components of a real CCPP are calculated. • The effects of TIT and r_c variation on the exergy destruction parts of the cycle are investigated. • TIT and r_c growth increase the improvement potential in most of the components. • TIT and r_c growth decrease the unavoidable part in some components. - Abstract: The advanced exergy analysis extends engineering knowledge beyond the respective conventional methods by improving the design and operation of energy conversion systems. In advanced exergy analysis, the exergy destruction is split into endogenous/exogenous and avoidable/unavoidable parts. In this study, an advanced exergy analysis of a real combined cycle power plant (CCPP) with supplementary firing is performed. The endogenous/exogenous irreversibilities of each component, as well as their combination with the avoidable/unavoidable irreversibilities, are determined. A parametric study is presented discussing the sensitivity of various performance indicators to the turbine inlet temperature (TIT) and compressor pressure ratio (r_c). It is observed that the thermal and exergy efficiencies increase when TIT and r_c rise. Results show that the combustion chamber (CC) concentrates most of the exergy destruction (more than 62%), dominantly in unavoidable endogenous form, which is decreased by 11.89% and 13.12%, while the avoidable endogenous exergy destruction increases, multiplied by factors of 1.3 and 8.6, with increasing TIT and r_c, respectively. In addition, TIT growth strongly increases the endogenous avoidable exergy destruction in the high pressure superheater (HP.SUP), CC and low pressure evaporator (LP.EVAP). It also increases the exogenous avoidable exergy destruction of HP.SUP and the low pressure steam turbine (LP.ST), and leads to a large decrement in the endogenous exergy destruction of the preheater (PRE), by about 98.8%. Furthermore, r_c growth extremely raises the endogenous avoidable exergy destruction of gas
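    The four-way split used in advanced exergy analysis follows directly from the two decompositions E_D = E_D^EN + E_D^EX and E_D = E_D^UN + E_D^AV. The sketch below combines them, using hypothetical combustion-chamber numbers rather than the plant data from the study.

    ```python
    def split_exergy_destruction(e_d, e_d_en, e_d_un, e_d_un_en):
        """Combine the endogenous/exogenous and avoidable/unavoidable
        decompositions of a component's exergy destruction.

        e_d       : total exergy destruction
        e_d_en    : endogenous part (component interacting with ideal others)
        e_d_un    : unavoidable part (best achievable technology)
        e_d_un_en : unavoidable-endogenous part
        """
        e_d_ex = e_d - e_d_en              # exogenous part
        e_d_un_ex = e_d_un - e_d_un_en     # unavoidable-exogenous part
        return {
            "EN,UN": e_d_un_en,            # unavoidable-endogenous
            "EN,AV": e_d_en - e_d_un_en,   # avoidable-endogenous
            "EX,UN": e_d_un_ex,            # unavoidable-exogenous
            "EX,AV": e_d_ex - e_d_un_ex,   # avoidable-exogenous
        }

    # Hypothetical component with 100 kW destroyed, 70 kW endogenous,
    # 60 kW unavoidable, 45 kW unavoidable-endogenous.
    parts = split_exergy_destruction(100.0, 70.0, 60.0, 45.0)
    ```

    The four terms always sum back to the total, which is a useful sanity check when tabulating results per component.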

  3. Advanced Engineering Methods for Assessing Welding Distortion in Aero-Engine Assemblies

    International Nuclear Information System (INIS)

    Jackson, Kathryn; Darlington, Roger

    2011-01-01

    Welding remains an attractive fabrication method for aero-engine assemblies, offering high production rates and reduced total cost, particularly for large complex assemblies. However, distortion generated during the welding process continues to provide a major challenge in terms of the control of geometric tolerances and residual stress. The welding distortion is influenced by the sequence and position of joints, the clamping configuration and the design of the assembly. For large complex assemblies the range of these options may be large. Hence the use of numerical simulation at an early stage of the product development process is valuable to enable a wide range of these factors to be explored with the aim of minimising welding distortions before production commences, and thereby reducing the product development time. In this paper, a new technique for simulation of welding distortions based on a shrinkage analysis is evaluated for an aero-engine assembly. The shrinkage simulations were built and solved using the ESI Group software Weld Planner. The rapid simulation speed enabled a wide range of welding plans to be explored, leading to recommendations for the fabrication process. The sensitivity of the model to mesh size and material properties is reported. The results of the shrinkage analysis were found to be similar to those of a transient analysis generated using ESI Group software SysWeld. The solution times were found to be significantly lower for the shrinkage analysis than the transient analysis. Hence it has been demonstrated that shrinkage analysis is a valuable tool for exploring the fabrication process of a welded assembly at an early stage of the product development process.

  4. Advancing cognitive engineering methods to support user interface design for electronic health records.

    Science.gov (United States)

    Thyvalikakath, Thankam P; Dziabiak, Michael P; Johnson, Raymond; Torres-Urquidy, Miguel Humberto; Acharya, Amit; Yabes, Jonathan; Schleyer, Titus K

    2014-04-01

    Despite many decades of research on the effective development of clinical systems in medicine, the adoption of health information technology to improve patient care continues to be slow, especially in ambulatory settings. This applies to dentistry as well, a primary care discipline with approximately 137,000 practitioners in the United States. A critical reason for slow adoption is the poor usability of clinical systems, which makes it difficult for providers to navigate through the information and obtain an integrated view of patient data. In this study, we documented the cognitive processes and information management strategies used by dentists during a typical patient examination. The results will inform the design of a novel electronic dental record interface. We conducted a cognitive task analysis (CTA) study to observe ten dentists (five general dentists and five general dental faculty members, each with more than two years of clinical experience) examining three simulated patient cases using a think-aloud protocol. Dentists first reviewed the patient's demographics, chief complaint, medical history and dental history to determine the general status of the patient. Subsequently, they proceeded to examine the patient's intraoral status using radiographs, intraoral images, hard tissue and periodontal tissue information. The results also identified dentists' patterns of navigation through the patient's information and additional information needs during a typical clinician-patient encounter. This study reinforced the significance of applying cognitive engineering methods to inform the design of a clinical system. In addition, applying CTA to a scenario closely simulating an actual patient encounter helped with capturing participants' knowledge states and decision-making when diagnosing and treating a patient. The resultant knowledge of dentists' patterns of information retrieval and review will significantly contribute to designing flexible and task

  5. Methods of Advanced Wound Management for Care of Combined Traumatic and Chemical Warfare Injuries

    Science.gov (United States)

    Graham, John S.; Gerlach, Travis W.; Logan, Thomas P.; Bonar, James P.; Fugo, Richard J.; Lee, Robyn B.; Coatsworth, Matthew A.

    2008-01-01

    Objective: Chemical warfare agents are potential threats to military personnel and civilians. The potential for associated traumatic injuries is significant. Damage control surgery could expose medical personnel to agents contaminating the wounds. The objectives of this study were to demonstrate efficacy of surgical decontamination and assess exposure risk to attending personnel. Methods: Weanling pigs were randomly assigned to 2 of 4 debridement tools (scalpel, Bovie® knife, Fugo Blade®, and Versajet™ Hydrosurgery System). Penetrating traumatic wounds were created over the shoulder and thigh and then exposed to liquid sulfur mustard (HD) for 60 minutes. Excisional debridement of the injuries was performed while vapors over each site were collected. Gas chromatography was used to measure HD in samples of collected vapors. Unbound HD was quantified in presurgical wound swabs, excised tissues, and peripheral tissue biopsies following solvent extraction. Results: Excisional debridement produced agent-free wound beds (surgical decontamination). A significant amount of HD vapor was detected above the surgical fields with each tool. Apart from the Versajet™ producing significantly lower levels of HD detected over thigh wounds compared with those treated using the scalpel, there were no differences in the amount of agent detected among the tools. All measured levels significantly exceeded established safety limits. Vesicating levels of unbound HD were extracted from excised tissue. There was no measured lateral spreading of HD beyond the surgical margins. Conclusions: There is significant occupational exposure risk to HD during surgical procedures designed to stabilize agent-contaminated wounds. If appropriate protective measures are taken, surgical decontamination is both effective and safe. PMID:18716652

  6. Consideration of a design optimization method for advanced nuclear power plant thermal-hydraulic components

    International Nuclear Information System (INIS)

    Ridluan, Artit; Tokuhiro, Akira; Manic, Milos; Patterson, Michael; Danchus, William

    2009-01-01

    In order to meet the global energy demand and also mitigate climate change, we anticipate a significant resurgence of nuclear power in the next 50 years. Globally, Generation III plants (ABWR) have been built; Gen III+ plants (EPR, AP1000 and others) are anticipated in the near term. The U.S. DOE and Japan are respectively pursuing the NGNP and MSFR. There is renewed interest in closing the fuel cycle and gradually introducing the fast reactor into the LWR-dominated global fleet. In order to meet Generation IV criteria, i.e. thermal efficiency, inherent safety, proliferation resistance and economic competitiveness, plant and energy conversion system engineering designs increasingly have to meet strict design criteria with reduced margins for reliable safety and uncertainties. Here, we considered a design optimization approach using an anticipated NGNP thermal system component as a case study. A systematic, efficient methodology is needed to reduce time-consuming trial-and-error and computationally intensive analyses. We thus developed a design optimization method linking three elements: benchmarked CFD used as a 'design tool', artificial neural networks (ANN) to accommodate non-linear system behavior and enhance the 'design space', and finally, response surface methodology (RSM) to optimize the design solution with targeted constraints. The paper presents the methodology, including guiding principles, an integration of CFD into design theory and practice, consideration of system non-linearities (such as fluctuating operating conditions), systematic enhancement of the design space via application of ANN, and a stochastic optimization approach (RSM) with targeted constraints. Results from a case study optimizing the printed circuit heat exchanger for the NGNP energy conversion system will be presented. (author)
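
    The RSM step of the methodology above can be sketched in a few lines: sample the design space, fit a quadratic response surface by least squares, and minimize the fitted surface subject to the design bounds. This is a minimal illustration, not the paper's implementation; the objective function below is a hypothetical stand-in for a benchmarked CFD/ANN surrogate of the heat-exchanger response, and the two design variables are unnamed placeholders.

```python
import numpy as np

# Sketch of the RSM step only: fit a quadratic response surface to sampled
# design points and minimise it on a grid. The objective is a hypothetical
# stand-in for a CFD/ANN surrogate response, not the paper's actual model.

rng = np.random.default_rng(0)

def objective(x):
    # hypothetical surrogate response at design point x = (x1, x2)
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.1) ** 2 + 0.05 * x[0] * x[1]

# 1) sample the design space
X = rng.uniform(-1.0, 1.0, size=(40, 2))
y = np.array([objective(x) for x in X])

# 2) fit a full quadratic response surface by least squares:
#    y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# 3) minimise the fitted surface on a grid over the design bounds
g = np.linspace(-1.0, 1.0, 201)
G1, G2 = np.meshgrid(g, g)
surf = (beta[0] + beta[1] * G1 + beta[2] * G2
        + beta[3] * G1 ** 2 + beta[4] * G2 ** 2 + beta[5] * G1 * G2)
i, j = np.unravel_index(np.argmin(surf), surf.shape)
x_opt = (G1[i, j], G2[i, j])   # lands near the true minimum, about (0.3, -0.1)
```

    In practice the surrogate evaluations would come from the benchmarked CFD model (possibly via the ANN), and a constrained optimizer would replace the grid search.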

  7. Advanced magnetic resonance imaging methods for planning and monitoring radiation therapy in patients with high-grade glioma.

    Science.gov (United States)

    Lupo, Janine M; Nelson, Sarah J

    2014-10-01

    This review explores how the integration of advanced imaging methods with high-quality anatomical images significantly improves the characterization, target definition, assessment of response to therapy, and overall management of patients with high-grade glioma. Metrics derived from diffusion-, perfusion-, and susceptibility-weighted magnetic resonance imaging, in conjunction with magnetic resonance spectroscopic imaging, allow us to characterize regions of edema, hypoxia, increased cellularity, and necrosis within heterogeneous tumor and surrounding brain tissue. Quantification of such measures may provide a more reliable initial representation of tumor delineation and response to therapy than changes in the contrast-enhancing or T2 lesion alone, and may have a significant effect on targeting resection, planning radiation, and assessing treatment effectiveness. In the long term, implementation of these imaging methodologies can also aid in the identification of recurrent tumor and its differentiation from treatment-related confounds and facilitate the detection of radiation-induced vascular injury in otherwise normal-appearing brain tissue.

  8. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    International Nuclear Information System (INIS)

    Beaujean, Frederik

    2012-01-01

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB=1 effective field theory. We assume the standard-model set of b → sγ and b → sl⁺l⁻ operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B→K*γ, B→K(*)l⁺l⁻, and Bs→μ⁺μ⁻ decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors, the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well-separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1 s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues: a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit reveals a flipped-sign solution in addition to a standard-model-like solution for the couplings Ci. The
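
    The PMC adaptation loop described above can be sketched as follows. This is a minimal one-dimensional illustration, not the analysis code: a toy bimodal density stands in for the multimodal posterior, the proposal is a Gaussian mixture rather than the Student-t mixtures often preferred in practice, and the Rao-Blackwellised moment update is one common variant.

```python
import numpy as np

# Toy sketch of Population Monte Carlo (PMC): a Gaussian mixture proposal
# is iteratively adapted to a target via importance sampling. The bimodal
# target is a stand-in for the multimodal posterior in the abstract; all
# parameters here are illustrative assumptions.

rng = np.random.default_rng(1)

def log_target(x):
    # unnormalised log density: equal mixture of N(-3, 0.5^2) and N(+3, 0.5^2)
    return np.logaddexp(-0.5 * ((x + 3) / 0.5) ** 2,
                        -0.5 * ((x - 3) / 0.5) ** 2)

K, N = 4, 2000                      # mixture components, samples per iteration
means = rng.uniform(-6, 6, K)
sds = np.full(K, 2.0)
weights = np.full(K, 1.0 / K)

for _ in range(8):
    # draw a population from the current mixture proposal
    comp = rng.choice(K, size=N, p=weights)
    x = rng.normal(means[comp], sds[comp])
    # per-component and total proposal densities at the samples
    dens = (weights * np.exp(-0.5 * ((x[:, None] - means) / sds) ** 2)
            / (sds * np.sqrt(2 * np.pi)))
    lw = log_target(x) - np.log(dens.sum(axis=1))
    w = np.exp(lw - lw.max())
    w /= w.sum()                    # normalised importance weights
    # Rao-Blackwellised update: move each component to its weighted moments
    rho = dens / dens.sum(axis=1, keepdims=True)
    for k in range(K):
        wk = w * rho[:, k]
        tot = wk.sum()
        weights[k] = tot
        if tot > 0:
            means[k] = (wk * x).sum() / tot
            sds[k] = np.sqrt((wk * (x - means[k]) ** 2).sum() / tot) + 1e-3
    weights /= weights.sum()

# importance-sampling estimate of E[x^2] (exactly 9.25 for this target)
est = (w * x ** 2).sum()
```

    The "cogent knowledge at the initial stage" problem the abstract mentions is visible here: a poor initial mixture yields degenerate importance weights, which is what the authors' Markov-chain-plus-clustering initialisation addresses.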

  9. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Beaujean, Frederik

    2012-11-12

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB=1 effective field theory. We assume the standard-model set of b → sγ and b → sl⁺l⁻ operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B→K*γ, B→K(*)l⁺l⁻, and Bs→μ⁺μ⁻ decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors, the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well-separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1 s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues: a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit

  10. Methods of preparation and modification of advanced zero-valent iron nanoparticles, their properties and application in water treatment technologies

    Science.gov (United States)

    Filip, Jan; Kašlík, Josef; Medřík, Ivo; Petala, Eleni; Zbořil, Radek; Slunský, Jan; Černík, Miroslav; Stavělová, Monika

    2014-05-01

    Zero-valent iron nanoparticles are commonly used in modern water treatment technologies. Compared to conventionally used macroscopic iron or iron microparticles, the use of nanoparticles has advantages stemming mainly from their generally large specific surface area (which drives their high reactivity and/or sorption capacity), small dimensions (which allow their migration, e.g. in ground water), and particular physical and chemical properties. Following the applications of zero-valent iron particles in various pilot tests, several critical suggestions arose for improving the nanomaterials used and for developing a new generation of reactive nanomaterials. In the presentation, the methods of zero-valent iron nanoparticle synthesis will be summarized, with special attention paid to the thermally-induced solid-state reaction allowing preparation of zero-valent iron nanoparticles on an industrial scale. Moreover, the method of thermal reduction of iron-oxide precursors enables fine tuning of the critical parameters (mainly particle size and morphology, specific surface area, surface chemistry of nanoparticles, etc.) of the resulting zero-valent iron nanoparticles. The most important trends in the development of advanced nanoparticles will be discussed: (i) surface modification of nanomaterials, (ii) development of nanocomposites and (iii) development of materials for combined reductive-sorption technologies. Laboratory testing of zero-valent iron nanoparticle reactivity and migration will be presented and compared with field observations: the advanced zero-valent iron nanoparticles were used for groundwater treatment at a locality contaminated by chlorinated hydrocarbons (VC, DCE, TCE and PCE), and reacted nanoparticles were extracted from the sediments for their fate assessment. The authors gratefully acknowledge the support by the Technology Agency of the Czech Republic "Competence Centres" (project No. TE01020218) and the EU FP7 (project NANOREM).

  11. Advanced methods of process/quality control in nuclear reactor fuel manufacture. Proceedings of a technical committee meeting

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    Nuclear fuel plays an essential role in ensuring the competitiveness of nuclear energy and its acceptance by the public. The economic and market situation is not favorable at present for nuclear fuel designers and suppliers. The reduction in fuel prices (mainly to compete with fossil fuels) and in the number of fuel assemblies to be delivered to customers (mainly due to burnup increase) has been offset by the rising number of safety and other requirements, e.g. the choice of fuel and structural materials and the qualification of equipment. In this respect, higher burnup and thermal rates, longer fuel cycles and the use of MOX fuels are the real means to improve the economics of the nuclear fuel cycle as a whole. Therefore, utilities and fuel vendors have recently initiated new research and development programmes aimed at improving fuel quality, design and materials to produce robust and reliable fuel for safe and reliable reactor operation under more demanding conditions. In this connection, improvement of fuel quality occupies an important place and this requires continuous effort on the part of fuel researchers, designers and producers. In the early years of commercial fuel fabrication, emphasis was given to advancements in quality control/quality assurance related mainly to the product itself. Now, the emphasis is transferred to improvements in process control and to implementation of overall total quality management (TQM) programmes. In the area of fuel quality control, statistical methods are now widely implemented, replacing 100% inspection. The IAEA, recognizing the importance of obtaining and maintaining high standards in fuel fabrication, has paid particular attention to this subject. In response to the rapid progress in development and implementation of advanced methods of process/quality control in nuclear fuel manufacture and on the recommendation of the International Working Group on Water Reactor Fuel Performance and Technology, the IAEA conducted a

  12. Advanced methods of process/quality control in nuclear reactor fuel manufacture. Proceedings of a technical committee meeting

    International Nuclear Information System (INIS)

    2000-07-01

    Nuclear fuel plays an essential role in ensuring the competitiveness of nuclear energy and its acceptance by the public. The economic and market situation is not favorable at present for nuclear fuel designers and suppliers. The reduction in fuel prices (mainly to compete with fossil fuels) and in the number of fuel assemblies to be delivered to customers (mainly due to burnup increase) has been offset by the rising number of safety and other requirements, e.g. the choice of fuel and structural materials and the qualification of equipment. In this respect, higher burnup and thermal rates, longer fuel cycles and the use of MOX fuels are the real means to improve the economics of the nuclear fuel cycle as a whole. Therefore, utilities and fuel vendors have recently initiated new research and development programmes aimed at improving fuel quality, design and materials to produce robust and reliable fuel for safe and reliable reactor operation under more demanding conditions. In this connection, improvement of fuel quality occupies an important place and this requires continuous effort on the part of fuel researchers, designers and producers. In the early years of commercial fuel fabrication, emphasis was given to advancements in quality control/quality assurance related mainly to the product itself. Now, the emphasis is transferred to improvements in process control and to implementation of overall total quality management (TQM) programmes. In the area of fuel quality control, statistical methods are now widely implemented, replacing 100% inspection. The IAEA, recognizing the importance of obtaining and maintaining high standards in fuel fabrication, has paid particular attention to this subject. In response to the rapid progress in development and implementation of advanced methods of process/quality control in nuclear fuel manufacture and on the recommendation of the International Working Group on Water Reactor Fuel Performance and Technology, the IAEA conducted a

  13. Application of advanced surface and volumetric NDE methods to the detection of cracks in critical regions of turbine blades

    International Nuclear Information System (INIS)

    Porter, J.P.

    1990-01-01

    Advanced NDE inspection techniques capable of detecting small, yet potentially dangerous cracks in turbine blade tenons, blade tie-wire through-holes, trailing edges, and blade root attachment ends have been devised and developed and are now being applied successfully in the field, replacing conventional, less-sensitive methods commonly used for crack detection in these blade elements. Under-shroud lateral cracks in tenons are detected ultrasonically by high-angle refracted pulse-echo shear wave and 0-degree pitch-catch longitudinal wave methods. Trailing-edge blade cracks and surface-connected cracks in root attachment ends are detected by high-frequency eddy current techniques, typically applied remotely using ports in the turbine housing to gain access to the parts under inspection. Cracks emanating from tie-wire holes in blade upper ends are detected by eddy current inspection, which has been found to be a far more effective method than either magnetic particle or ultrasonic testing for this application. Root attachment ends of side entry blades are inspected volumetrically by ultrasonics, using proprietary coupling techniques that allow examination of heretofore uninspectable regions of blade attachment hooks, known regions of crack initiation. Techniques developed for this collection of applications are described, and the results of actual field inspections are presented and discussed

  14. Endoscopic ultrasound for the characterization and staging of rectal cancer. Current state of the method. Technological advances and perspectives.

    Science.gov (United States)

    Gersak, Mariana M; Badea, Radu; Graur, Florin; Hajja, Nadim Al; Furcea, Luminita; Dudea, Sorin M

    2015-06-01

    Endoscopic ultrasound is the most accurate type of examination for the assessment of rectal tumors. Over the years, the method has advanced from gray-scale examination to intravenous contrast media administration and to different types of elastography. The multimodal approach to tumors (transrectal, transvaginal) is adapted to each case. 3D ultrasound is useful for spatial representation and precise measurement of tumor formations, using CT/MR image reconstruction; color elastography is useful for tumor characterization and staging; endoscopic ultrasound using intravenous contrast agents can help study the amount of contrast agent targeted at the level of the tumor formations and contrast wash-in/wash-out time, based on the curves displayed on the device. The transvaginal approach often allows better visualization of the tumor than the transrectal approach. Performing the procedure with the rectal ampulla distended with contrast agent may be seen as an optimization of the examination methodology. All these aspects are complementary to gray-scale endoscopic ultrasound and capable of increasing diagnostic accuracy. This paper aims at reviewing the progress of transrectal and transvaginal ultrasound, generically called endoscopic ultrasound, for rectal tumor diagnosis and staging, with emphasis on the current state of the method and its development trends.

  15. Introduction to special section of the Journal of Family Psychology, advances in mixed methods in family psychology: integrative and applied solutions for family science.

    Science.gov (United States)

    Weisner, Thomas S; Fiese, Barbara H

    2011-12-01

    Mixed methods in family psychology refer to the systematic integration of qualitative and quantitative techniques to represent family processes and settings. Over the past decade, significant advances have been made in study design, analytic strategies, and technological support (such as software) that allow for the integration of quantitative and qualitative methods and for making appropriate inferences from mixed methods. This special section of the Journal of Family Psychology illustrates how mixed methods may be used to advance knowledge in family science through identifying important cultural differences in family structure, beliefs, and practices, and revealing patterns of family relationships to generate new measurement paradigms and inform clinical practice. Guidance is offered to advance mixed methods research in family psychology through sound principles of peer review.

  16. Understanding advanced statistical methods

    CERN Document Server

    Westfall, Peter

    2013-01-01

    Introduction: Probability, Statistics, and Science; Reality, Nature, Science, and Models; Statistical Processes: Nature, Design and Measurement, and Data; Models; Deterministic Models; Variability; Parameters; Purely Probabilistic Statistical Models; Statistical Models with Both Deterministic and Probabilistic Components; Statistical Inference; Good and Bad Models; Uses of Probability Models. Random Variables and Their Probability Distributions: Introduction; Types of Random Variables: Nominal, Ordinal, and Continuous; Discrete Probability Distribution Functions; Continuous Probability Distribution Functions; Some Calculus: Derivatives and Least Squares; More Calculus: Integrals and Cumulative Distribution Functions. Probability Calculation and Simulation: Introduction; Analytic Calculations, Discrete and Continuous Cases; Simulation-Based Approximation; Generating Random Numbers. Identifying Distributions: Introduction; Identifying Distributions from Theory Alone; Using Data: Estimating Distributions via the Histogram; Quantiles: Theoretical and Data-Based Estimate...

  17. Traffic-Related Air Pollution and Childhood Asthma: Recent Advances and Remaining Gaps in the Exposure Assessment Methods.

    Science.gov (United States)

    Khreis, Haneen; Nieuwenhuijsen, Mark J

    2017-03-17

    Background: Current levels of traffic-related air pollution (TRAP) are associated with the development of childhood asthma, although some inconsistencies and heterogeneity remain. An important part of the uncertainty in studies of TRAP-associated asthma originates from uncertainties in the TRAP exposure assessment and assignment methods. In this work, we aim to systematically review the exposure assessment methods used in the epidemiology of TRAP and childhood asthma, highlight recent advances and remaining research gaps, and make suggestions for further research. Methods: We systematically reviewed epidemiological studies published up until 8 September 2016 and available in Embase, Ovid MEDLINE (R), and "Transport database". We included studies which examined the association between children's exposure to TRAP metrics and their risk of "asthma" incidence or lifetime prevalence, from birth to the age of 18 years old. Results: We found 42 studies which examined the associations between TRAP and subsequent childhood asthma incidence or lifetime prevalence, published since 1999. Land-use regression modelling was the most commonly used method and nitrogen dioxide (NO₂) was the most commonly used pollutant in the exposure assessments. Most studies estimated TRAP exposure at the residential address and only a few considered the participants' mobility. TRAP exposure was mostly assessed at the birth year and only a few studies considered different and/or multiple exposure time windows. We recommend further work on, e.g., the use of new exposure metrics such as the composition of particulate matter, oxidative potential and ultra-fine particles; improved modelling, e.g., by combining different exposure assessment models; inclusion of the mobility of the participants; and systematic investigation of different exposure time windows. Conclusions: Although our previous meta-analysis found statistically significant associations for various TRAP exposures and
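
    As an illustration of the land-use regression (LUR) approach that dominates the reviewed studies, the sketch below fits NO₂ measured at monitoring sites against GIS-derived predictors and then assigns exposure at a residential address. All predictor names, coefficients, and data are synthetic assumptions, not values from any reviewed study.

```python
import numpy as np

# Hedged sketch of land-use regression (LUR): NO2 measured at monitoring
# sites is regressed on GIS predictors; the fitted model then assigns
# exposure at a child's residential address. Everything here is synthetic.

rng = np.random.default_rng(2)
n_sites = 60

# synthetic GIS predictors at monitoring sites
traffic = rng.uniform(0, 20000, n_sites)        # vehicles/day on nearest road
road_len = rng.uniform(0, 5000, n_sites)        # m of major road within 500 m
inv_dist = 1.0 / rng.uniform(10, 500, n_sites)  # 1/distance to major road (1/m)

# synthetic "measured" NO2 (ug/m3) with noise
no2 = (12 + 8e-4 * traffic + 2e-3 * road_len + 900 * inv_dist
       + rng.normal(0, 2, n_sites))

# fit the LUR model by ordinary least squares
X = np.column_stack([np.ones(n_sites), traffic, road_len, inv_dist])
beta, *_ = np.linalg.lstsq(X, no2, rcond=None)

# assign exposure at a hypothetical residential address
home = np.array([1.0, 15000.0, 1200.0, 1.0 / 80.0])
exposure = home @ beta   # predicted annual-mean NO2 at the address (ug/m3)
```

    The mobility and time-window gaps discussed above enter exactly here: a single `home` vector fixes exposure to one address and one period, which is what the recommended refinements would relax.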

  18. Advanced RESTART method for the estimation of the probability of failure of highly reliable hybrid dynamic systems

    International Nuclear Information System (INIS)

    Turati, Pietro; Pedroni, Nicola; Zio, Enrico

    2016-01-01

    The efficient estimation of system reliability characteristics is of paramount importance for many engineering applications. Real-world system reliability modeling calls for the capability of treating systems that are: i) dynamic, ii) complex, iii) hybrid and iv) highly reliable. Advanced Monte Carlo (MC) methods offer a way to solve these types of problems, although potentially at a high computational cost. In this paper, the REpetitive Simulation Trials After Reaching Thresholds (RESTART) method is employed, extending it to hybrid systems for the first time (to the authors’ knowledge). The estimation accuracy and precision of RESTART highly depend on the choice of the Importance Function (IF) indicating how close the system is to failure: in this respect, proper IFs are here originally proposed to improve the performance of RESTART for the analysis of hybrid systems. The resulting overall simulation approach is applied to estimate the probability of failure of the control system of a liquid hold-up tank and of a pump-valve subsystem subject to degradation induced by fatigue. The results are compared to those obtained by standard MC simulation and by RESTART with classical IFs available in the literature. The comparison shows the improvement in performance obtained by our approach. - Highlights: • We consider the issue of estimating small failure probabilities in dynamic systems. • We employ the RESTART method to estimate the failure probabilities. • New Importance Functions (IFs) are introduced to increase the method performance. • We adopt two dynamic, hybrid, highly reliable systems as case studies. • A comparison with literature IFs proves the effectiveness of the new IFs.
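
    The splitting idea behind RESTART can be illustrated with fixed-effort multilevel splitting, a close relative: trajectories that cross successive thresholds of the importance function are retained and resampled (split), and the failure probability is estimated as the product of the stage-wise crossing fractions. The random-walk model, thresholds, and parameters below are toy assumptions, not the hold-up tank or pump-valve case studies.

```python
import numpy as np

# Toy sketch of fixed-effort multilevel splitting, the core idea behind
# RESTART. The "system" is a random walk with negative drift; the rare
# event is reaching level 15 within the horizon; the importance function
# is simply the walk's current position. All parameters are assumptions.

rng = np.random.default_rng(5)

def advance(state, level, horizon):
    """Advance the walk until it crosses `level` upward or time runs out."""
    pos, t = state
    while t < horizon:
        pos += rng.normal(-0.1, 1.0)   # drifting Gaussian increments
        t += 1
        if pos >= level:
            return True, (pos, t)
    return False, (pos, t)

levels = [5.0, 10.0, 15.0]   # thresholds of the importance function
n, horizon = 1000, 200
states = [(0.0, 0)] * n      # n trajectories at the initial state
p_hat = 1.0
for level in levels:
    hits = []
    for s in states:
        crossed, s2 = advance(s, level, horizon)
        if crossed:
            hits.append(s2)
    if not hits:
        p_hat = 0.0
        break
    p_hat *= len(hits) / len(states)          # stage-wise crossing fraction
    # split: resample successful entrance states back up to n trajectories
    states = [hits[i] for i in rng.integers(0, len(hits), n)]

# p_hat estimates a rare-event probability on the order of a few percent here
```

    The paper's contribution sits in the choice of `level`-like thresholds: for hybrid systems the importance function must combine continuous process variables with discrete component states, which this scalar toy ignores.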

  19. Development of human performance evaluation methods and systems for human factors validation in an advanced control room

    International Nuclear Information System (INIS)

    Ha, Jun Su

    2008-02-01

    Advanced control room (ACR) human-machine interface (HMI) design of advanced nuclear power plants (NPPs) such as the APR (advanced power reactor)-1400 can be validated through performance-based tests to determine whether it acceptably supports safe operation of the plant. In this paper, plant performance, personnel task performance, situation awareness, workload, teamwork, and anthropometric/physiological factors are considered as factors for the human performance evaluation. For development of measures for each of the factors, measures generally used in various industries and empirically proven to be useful are adopted as main measures, with some modifications. In addition, helpful measures are developed as complementary measures in order to overcome some of the limitations associated with the main measures. The development of the measures is addressed based on the theoretical and empirical background and also on the regulatory guidelines. A computerized system, called HUPESS (human performance evaluation support system), is developed based on the measures developed in this paper. The development of HUPESS is described with respect to the system configuration, the development process, and integrated measurement, evaluation, and analysis. HUPESS supports evaluators (or experimenters) in effectively measuring, analyzing, and evaluating human performance for HMI design validation in ACRs. Hence, HUPESS is expected to be used as an effective tool for human factors validation in the ACR of the Shin Kori 3 and 4 NPPs (APR-1400 type), which are under construction in South Korea. In addition, two measures of attentional-resource effectiveness based on cost-benefit analysis are developed. One of them is the Fixation to Importance Ratio (FIR), which represents the attentional resources spent on an information source compared to the importance of the information source. The other measure is selective attention effectiveness (SAE), which incorporates the FIRs for all information
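
    A minimal sketch of the FIR measure described above, assuming fixation-time shares are compared against importance shares normalised to sum to one; the abstract does not spell out the normalisation, so that detail, like the display-element names, is an assumption.

```python
# Hedged sketch of the Fixation to Importance Ratio (FIR): the share of
# fixation time an operator spends on an information source, divided by
# that source's normalised importance. FIR >> 1 suggests attention spent
# out of proportion to importance. Normalisation and names are assumptions.

def fixation_to_importance_ratio(fix_ms, importance):
    """fix_ms, importance: dicts keyed by information source."""
    total_fix = sum(fix_ms.values())
    total_imp = sum(importance.values())
    return {src: (fix_ms[src] / total_fix) / (importance[src] / total_imp)
            for src in fix_ms}

# hypothetical eye-tracking totals (ms) for three display elements
fix = {"alarm_panel": 4200, "trend_plot": 2800, "procedure_view": 1000}
imp = {"alarm_panel": 0.5, "trend_plot": 0.3, "procedure_view": 0.2}
fir = fixation_to_importance_ratio(fix, imp)
# alarm_panel -> 1.05, trend_plot -> ~1.17, procedure_view -> 0.625
```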

  20. Characterization and detection of Vero cells infected with Herpes Simplex Virus type 1 using Raman spectroscopy and advanced statistical methods.

    Science.gov (United States)

    Salman, A; Shufan, E; Zeiri, L; Huleihel, M

    2014-07-01

    Herpes viruses are involved in a variety of human disorders. Herpes Simplex Virus type 1 (HSV-1) is the most common among the herpes viruses and is primarily involved in human cutaneous disorders. Although the symptoms of infection by this virus are usually minimal, in some cases HSV-1 might cause serious infections in the eyes and the brain leading to blindness and even death. A drug, acyclovir, is available to counter this virus. The drug is most effective when used during the early stages of the infection, which makes early detection and identification of these viral infections highly important for successful treatment. In the present study we evaluated the potential of Raman spectroscopy as a sensitive, rapid, and reliable method for the detection and identification of HSV-1 viral infections in cell cultures. Using Raman spectroscopy followed by advanced statistical methods enabled us, with sensitivity approaching 100%, to differentiate between a control group of Vero cells and another group of Vero cells that had been infected with HSV-1. Cell sites that were "rich in membrane" gave the best results in the differentiation between the two categories. The major changes were observed in the 1195-1726 cm(-1) range of the Raman spectrum. The features in this range are attributed mainly to proteins, lipids, and nucleic acids. Copyright © 2014. Published by Elsevier Inc.
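
    A hedged sketch of the kind of pipeline implied by "Raman spectroscopy followed by advanced statistical methods": principal component analysis to compress the spectra, then a simple classifier. The synthetic spectra below stand in for the Vero/HSV-1 measurements; only the restriction to the 1195-1726 cm⁻¹ window comes from the abstract, and the specific statistical method used by the authors is not stated there.

```python
import numpy as np

# Toy sketch: PCA compression of Raman spectra followed by nearest-centroid
# classification. Synthetic spectra stand in for control vs HSV-1-infected
# Vero cell measurements; band limits follow the abstract, nothing else does.

rng = np.random.default_rng(3)
wavenumbers = np.linspace(600, 1800, 400)

def spectra(n, shift):
    # toy spectra: a protein-like band whose height differs between classes
    base = np.exp(-0.5 * ((wavenumbers - 1450) / 60) ** 2)
    return base * (1.0 + shift) + rng.normal(0, 0.05, (n, wavenumbers.size))

control = spectra(30, 0.0)
infected = spectra(30, 0.25)
X = np.vstack([control, infected])
y = np.array([0] * 30 + [1] * 30)

# restrict to the informative band, mean-centre, PCA via SVD
band = (wavenumbers >= 1195) & (wavenumbers <= 1726)
Xc = X[:, band] - X[:, band].mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T          # first two principal-component scores

# nearest-centroid classification in PC space
c0 = scores[y == 0].mean(axis=0)
c1 = scores[y == 1].mean(axis=0)
pred = (np.linalg.norm(scores - c1, axis=1)
        < np.linalg.norm(scores - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()   # near 1.0 on this well-separated toy data
```

    A real analysis would cross-validate rather than score the training set, but the band selection and dimensionality reduction steps mirror the workflow described above.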

  1. New Analytical Methods for the Surface/ Interface and the Micro-Structures in Advanced Nanocomposite Materials by Synchrotron Radiation

    Directory of Open Access Journals (Sweden)

    K. Nakamae

    2010-12-01

    Analytical methods for the surface/interface structure and micro-structure of advanced nanocomposite materials using synchrotron radiation are introduced. Recent results obtained with energy-tunable, highly collimated brilliant X-rays and in-situ wide-angle/small-angle X-ray diffraction with high accuracy are reviewed. It is shown that small-angle X-ray scattering is one of the best methods to characterize nanoparticle dispersibility and filler aggregate/agglomerate structures, and for in-situ observation of hierarchical structure deformation in filled rubber under cyclic stretch. Grazing-incidence (small- and wide-angle) X-ray scattering is powerful for analyzing the sintering process of metal nanoparticles by in-situ observation, as well as the orientation of polymer molecules and crystalline orientation in the very thin surface layer (ca. 7 nm) of a polymer film, while the interaction and conformation of adsorbed molecules at an interface can be investigated using high-energy X-ray XPS with sufficient probing depth (ca. 9 μm).

  2. Surface renewal: an advanced micrometeorological method for measuring and processing field-scale energy flux density data.

    Science.gov (United States)

    McElrone, Andrew J; Shapland, Thomas M; Calderon, Arturo; Fitzmaurice, Li; Paw U, Kyaw Tha; Snyder, Richard L

    2013-12-12

    Advanced micrometeorological methods have become increasingly important in soil, crop, and environmental sciences. For many scientists without formal training in atmospheric science, these techniques are relatively inaccessible. Surface renewal and other flux measurement methods require an understanding of boundary layer meteorology and extensive training in instrumentation and multiple data management programs. To improve accessibility of these techniques, we describe the underlying theory of surface renewal measurements, demonstrate how to set up a field station for surface renewal with eddy covariance calibration, and utilize our open-source turnkey data logger program to perform flux data acquisition and processing. The new turnkey program returns to the user a simple data table with the corrected fluxes and quality control parameters, and eliminates the need for researchers to shuttle between multiple processing programs to obtain the final flux data. An example of data generated from these measurements demonstrates how crop water use is measured with this technique. The output information is useful to growers for making irrigation decisions in a variety of agricultural ecosystems. These stations are currently deployed in numerous field experiments by researchers in our group and the California Department of Water Resources in the following crops: rice, wine and raisin grape vineyards, alfalfa, almond, walnut, peach, lemon, avocado, and corn.
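
    The eddy covariance calibration step mentioned above reduces, at its core, to the covariance of vertical wind and temperature fluctuations. The sketch below computes sensible heat flux H = ρ·cp·cov(w′, T′) from synthetic 10 Hz data; real processing adds coordinate rotation, despiking, and density corrections that are omitted here, and the data are invented.

```python
import numpy as np

# Hedged sketch of the eddy-covariance flux computation used to calibrate
# surface renewal: H = rho * cp * cov(w', T'). Synthetic 10 Hz series stand
# in for sonic-anemometer data; rho and cp are typical near-surface values.

rng = np.random.default_rng(4)
n = 10 * 60 * 30                             # 30 min of 10 Hz samples
w = rng.normal(0.0, 0.3, n)                  # vertical wind (m/s)
T = 25.0 + 0.5 * w + rng.normal(0, 0.2, n)   # air temperature (deg C),
                                             # correlated with updrafts

rho, cp = 1.2, 1004.0                        # kg/m3, J/(kg K)

# Reynolds decomposition: fluctuations about the averaging-period mean
w_p = w - w.mean()
T_p = T - T.mean()
H = rho * cp * np.mean(w_p * T_p)            # sensible heat flux (W/m2)
```

    Surface renewal replaces the fast `w` measurement with ramp statistics from the temperature trace alone, which is why a one-time eddy covariance calibration of this form is needed.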

  3. Advanced quadrature sets and acceleration and preconditioning techniques for the discrete ordinates method in parallel computing environments

    Science.gov (United States)

    Longoni, Gianluca

    In the nuclear science and engineering field, radiation transport calculations play a key role in the design and optimization of nuclear devices. The linear Boltzmann equation describes the angular, energy and spatial variations of the particle or radiation distribution. The discrete ordinates method (SN) is the most widely used technique for solving the linear Boltzmann equation. However, for realistic problems, the memory and computing time requirements demand the use of supercomputers. This research is devoted to the development of new formulations for the SN method, especially for highly angular dependent problems, in parallel environments. The present research work addresses two main issues affecting the accuracy and performance of SN transport theory methods: quadrature sets and acceleration techniques. New advanced quadrature techniques which allow for large numbers of angles with a capability for local angular refinement have been developed. These techniques have been integrated into the 3-D SN PENTRAN (Parallel Environment Neutral-particle TRANsport) code and applied to highly angular dependent problems, such as CT-scan devices, which are widely used to obtain detailed 3-D images for industrial/medical applications. In addition, the accurate simulation of core physics and shielding problems with strong heterogeneities and transport effects requires the numerical solution of the transport equation. In general, the convergence rate of the solution methods for the transport equation is reduced for large problems with optically thick regions and scattering ratios approaching unity. To remedy this situation, new acceleration algorithms based on the Even-Parity Simplified SN (EP-SSN) method have been developed. A new stand-alone code system, PENSSn (Parallel Environment Neutral-particle Simplified SN), has been developed based on the EP-SSN method. The code is designed for parallel computing environments with spatial, angular and hybrid (spatial/angular) domain

  4. PREFACE: Joint IPPP Durham/Cockcroft Institute/ICFA Workshop on Advanced QED methods for Future Accelerators

    Science.gov (United States)

    Bailey, I. R.; Barber, D. P.; Chattopadhyay, S.; Hartin, A.; Heinzl, T.; Hesselbach, S.; Moortgat-Pick, G. A.

    2009-11-01

    The joint IPPP Durham/Cockcroft Institute/ICFA workshop on advanced QED methods for future accelerators took place at the Cockcroft Institute in early March 2009. The motivation for the workshop was the need for a detailed consideration of the physics processes associated with beam-beam effects at the interaction points of future high-energy electron-positron colliders. There is a broad consensus within the particle physics community that the next international facility for experimental high-energy physics research beyond the Large Hadron Collider at CERN should be a high-luminosity electron-positron collider working at the TeV energy scale. One important feature of such a collider will be its ability to deliver polarised beams to the interaction point and to provide accurate measurements of the polarisation state during physics collisions. The physics collisions take place in very dense charge bunches in the presence of extremely strong electromagnetic fields, with field strengths of the order of the Schwinger critical field strength of 4.4×10^13 Gauss. These intense fields lead to depolarisation processes which need to be thoroughly understood in order to reduce uncertainty in the polarisation state at collision. To that end, this workshop reviewed the formalisms for describing radiative processes and the methods of calculation in the future strong-field environments. These calculations are based on the Furry picture of organising the interaction term of the Lagrangian. The means of deriving the transition probability of the most important of the beam-beam processes - beamstrahlung - was reviewed. The workshop was honoured by the presentations of V N Baier, one of the founders of the 'operator method', one means of performing these calculations. Other theoretical methods of performing calculations in the Furry picture, namely those due to A I Nikishov, V I Ritus et al, were reviewed and intense field quantum processes in fields of different form - namely those

  5. Advanced computational biology methods identify molecular switches for malignancy in an EGF mouse model of liver cancer.

    Directory of Open Access Journals (Sweden)

    Philip Stegmaier

    Full Text Available The molecular causes by which the epidermal growth factor receptor tyrosine kinase induces malignant transformation are largely unknown. To better understand EGF's transforming capacity, whole genome scans were applied to a transgenic mouse model of liver cancer and subjected to advanced methods of computational analysis to construct de novo gene regulatory networks, based on a combination of sequence analysis and entrained graph-topological algorithms. Here we identified transcription factors, processes, key nodes and molecules that connect as yet unknown interacting partners at the level of protein-DNA interaction. Many of these could be confirmed by electrophoretic mobility shift assays at recognition sites of gene-specific promoters and by western blotting of nuclear proteins. A novel cellular regulatory circuitry could therefore be proposed that connects cell cycle regulated genes with components of the EGF signaling pathway. Promoter analysis of differentially expressed genes suggested that the majority of regulated transcription factors display specificity to either the pre-tumor or the tumor state. A subsequent search for signal transduction key nodes upstream of the identified transcription factors and their targets suggested that the insulin-like growth factor pathway renders the tumor cells independent of EGF receptor activity. Notably, expression of IGF2, in addition to many components of this pathway, was highly upregulated in tumors. Together, we propose a switch in autocrine signaling to foster tumor growth that was initially triggered by EGF, and demonstrate the knowledge gained from promoter analysis combined with upstream key node identification.

  6. Numerical Investigation of Cross Flow Phenomena in a Tight-Lattice Rod Bundle Using Advanced Interface Tracking Method

    Science.gov (United States)

    Zhang, Weizhong; Yoshida, Hiroyuki; Ose, Yasuo; Ohnuki, Akira; Akimoto, Hajime; Hotta, Akitoshi; Fujimura, Ken

    In relation to the design of an innovative FLexible-fuel-cycle Water Reactor (FLWR), investigation of thermal-hydraulic performance in tight-lattice rod bundles of the FLWR is being carried out at the Japan Atomic Energy Agency (JAEA). The FLWR core adopts a tight triangular lattice arrangement with about 1 mm gap clearance between adjacent fuel rods. In view of the importance of accurate prediction of cross flow between subchannels in the evaluation of boiling transition (BT) in the FLWR core, this study presents a statistical evaluation of numerical simulation results obtained with a detailed two-phase flow simulation code, TPFIT, which employs an advanced interface tracking method. In order to clarify the mechanisms of cross flow in such tight-lattice rod bundles, TPFIT is applied to simulate water-steam two-phase flow in two modeled subchannels. Attention is focused on the instantaneous fluctuation characteristics of cross flow. By calculating correlation coefficients between differential pressure and gas/liquid mixing coefficients, time scales of cross flow are evaluated, and the effects of mixing section length, flow pattern and gap spacing on the correlation coefficients are investigated. Differences in mechanism between gas and liquid cross flows are pointed out.
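
    The time-scale evaluation via correlation coefficients described above can be sketched as a lagged-correlation analysis. The sketch below is illustrative only: the signal construction, variable names (dp, mix) and the 25-sample delay are hypothetical stand-ins, not TPFIT output.

    ```python
    import numpy as np

    def lagged_correlation(x, y, max_lag):
        """Pearson correlation of y shifted by each lag against x.

        Returns the lag (in samples) with the largest |r|, a simple
        stand-in for the time-scale evaluation in the abstract.
        """
        best_lag, best_r = 0, 0.0
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = x[:len(x) - lag], y[lag:]
            else:
                a, b = x[-lag:], y[:lag]
            r = np.corrcoef(a, b)[0, 1]
            if abs(r) > abs(best_r):
                best_lag, best_r = lag, r
        return best_lag, best_r

    rng = np.random.default_rng(2)
    # Smoothed noise as a stand-in signal; dp leads mix by 25 samples.
    s = np.convolve(rng.standard_normal(2200), np.ones(40) / 40, "same")
    dp = s[25:2025] + 0.02 * rng.standard_normal(2000)   # "differential pressure"
    mix = s[:2000] + 0.02 * rng.standard_normal(2000)    # "mixing coefficient"

    lag, r = lagged_correlation(dp, mix, max_lag=60)
    print(lag, round(r, 3))   # the recovered lag estimates the cross-flow time scale
    ```

    The lag at which |r| peaks recovers the imposed delay, which is the kind of time-scale information the correlation analysis extracts.
    
    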

  7. Discrete event simulation methods applied to advanced importance measures of repairable components in multistate network flow systems

    International Nuclear Information System (INIS)

    Huseby, Arne B.; Natvig, Bent

    2013-01-01

    Discrete event models are frequently used in simulation studies to model and analyze pure jump processes. A discrete event model can be viewed as a system consisting of a collection of stochastic processes, where the states of the individual processes change as a result of various kinds of events occurring at random points in time. We always assume that each event affects only one of the processes. Between these events the states of the processes are considered to be constant. In the present paper we use discrete event simulation in order to analyze a multistate network flow system of repairable components. In order to study how the different components contribute to the system, it is necessary to describe the often complicated interaction between component processes and processes at the system level. While analytical considerations may throw some light on this, a simulation study often allows the analyst to explore more details. By producing stable curve estimates for the development of the various processes, one gets much better insight into how such systems develop over time. These methods are particularly useful in the study of advanced importance measures of repairable components. Such measures can be very complicated, and thus impossible to calculate analytically. By using discrete event simulations, however, this can be done in a very natural and intuitive way. In particular, significant differences between the Barlow–Proschan measure and the Natvig measure in multistate network flow systems can be explored.
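
    The alternating failure/repair process of a single component can be sketched as a minimal discrete event simulation. This is a toy illustration under an assumed exponential-holding-time model, not the paper's multistate network flow system; the function name and parameters are hypothetical.

    ```python
    import random

    def simulate_component(mttf, mttr, horizon, seed=None):
        """Estimate availability of one repairable component by discrete
        event simulation: up (failure) and down (repair) intervals
        alternate as exponentially distributed holding times."""
        rng = random.Random(seed)
        t, uptime, up = 0.0, 0.0, True
        while t < horizon:
            dt = rng.expovariate(1.0 / (mttf if up else mttr))
            dt = min(dt, horizon - t)   # truncate the final interval
            if up:
                uptime += dt
            t += dt
            up = not up                 # each event flips the component state
        return uptime / horizon

    # Long-run availability should approach MTTF / (MTTF + MTTR) = 0.9
    print(simulate_component(mttf=90.0, mttr=10.0, horizon=1e6, seed=1))
    ```

    Running many such component processes side by side, and recording system-level states between events, is the kind of curve estimation the paper uses to compare importance measures.
    
    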

  8. An overview of advanced reduction processes for bromate removal from drinking water: Reducing agents, activation methods, applications and mechanisms.

    Science.gov (United States)

    Xiao, Qian; Yu, Shuili; Li, Lei; Wang, Ting; Liao, Xinlei; Ye, Yubing

    2017-02-15

    Bromate (BrO3−) is a possible human carcinogen regulated at a strict standard of 10 μg/L in drinking water. Various techniques to eliminate BrO3− usually fall into three main categories: reducing bromide (Br−) prior to formation of BrO3−, minimizing BrO3− formation during the ozonation process, and removing BrO3− from post-ozonation waters. However, the first two approaches exhibit low degradation efficiency and high treatment cost. The third approach has obvious advantages, such as high reduction efficiency, more stable performance and easier combination with UV disinfection, and has therefore been widely implemented in water treatment. Recently, advanced reduction processes (ARPs), i.e. the photocatalytic reduction of BrO3−, have attracted much attention due to improved performance. To increase the feasibility of photocatalytic systems, this work focuses on new technological developments, followed by a summary of reducing agents, activation methods, operational parameters, and applications. The reaction mechanisms of two typical processes, UV/sulfite homogeneous photocatalysis and UV/titanium dioxide heterogeneous photocatalysis, are further summarized. Future research needs for ARPs to reach full-scale potential in drinking water treatment are suggested accordingly. Copyright © 2016. Published by Elsevier B.V.

  9. Investigating Microbe-Mineral Interactions: Recent Advances in X-Ray and Electron Microscopy and Redox-Sensitive Methods

    Science.gov (United States)

    Miot, Jennyfer; Benzerara, Karim; Kappler, Andreas

    2014-05-01

    Microbe-mineral interactions occur in diverse modern environments, from the deep sea and subsurface rocks to soils and surface aquatic environments. They may have played a central role in the geochemical cycling of major (e.g., C, Fe, Ca, Mn, S, P) and trace (e.g., Ni, Mo, As, Cr) elements over Earth's history. Such interactions include electron transfer at the microbe-mineral interface that left traces in the rock record. Geomicrobiology consists of studying interactions at these organic-mineral interfaces in modern samples and looking for traces of past microbe-mineral interactions recorded in ancient rocks. Specific tools are required to probe these interfaces and to understand the mechanisms of interaction between microbes and minerals from the scale of the biofilm down to the nanometer scale. In this review, we focus on recent advances in electron microscopy, in particular cryoelectron microscopy, and on a panel of electrochemical and synchrotron-based methods that have recently provided new understanding and imaging of the microbe-mineral interface, ultimately opening new fields to be explored.

  10. The finite volume method in computational fluid dynamics an advanced introduction with OpenFOAM and Matlab

    CERN Document Server

    Moukalled, F; Darwish, M

    2016-01-01

    This textbook explores both the theoretical foundation of the Finite Volume Method (FVM) and its applications in Computational Fluid Dynamics (CFD). Readers will discover a thorough explanation of the FVM numerics and algorithms used for the simulation of incompressible and compressible fluid flows, along with a detailed examination of the components needed for the development of a collocated unstructured pressure-based CFD solver. Two particular CFD codes are explored. The first is uFVM, a three-dimensional unstructured pressure-based finite volume academic CFD code, implemented within Matlab. The second is OpenFOAM®, an open source framework used in the development of a range of CFD programs for the simulation of industrial scale flow problems. With over 220 figures, numerous examples and more than one hundred exercises on FVM numerics, programming, and applications, this textbook is suitable for use in an introductory course on the FVM, in an advanced course on numerics, and as a reference for CFD programm...

  11. Assessing health impacts in complex eco-epidemiological settings in the humid tropics: Advancing tools and methods

    International Nuclear Information System (INIS)

    Winkler, Mirko S.; Divall, Mark J.; Krieger, Gary R.; Balge, Marci Z.; Singer, Burton H.; Utzinger, Juerg

    2010-01-01

    In the developing world, large-scale projects in the extractive industry and natural resources sectors are often controversial and associated with long-term adverse health consequences to local communities. In many industrialised countries, health impact assessment (HIA) has been institutionalized for the mitigation of anticipated negative health effects while enhancing the benefits of projects, programmes and policies. However, in developing country settings, relatively few HIAs have been performed. Hence, more HIAs with a focus on low- and middle-income countries are needed to advance and refine tools and methods for impact assessment and subsequent mitigation measures. We present a promising HIA approach, developed within the frame of a large gold-mining project in the Democratic Republic of the Congo. The articulation of environmental health areas, the spatial delineation of potentially affected communities and the use of a diversity of sources to obtain quality baseline health data are utilized for risk profiling. We demonstrate how these tools and data are fed into a risk analysis matrix, which facilitates ranking of potential health impacts for subsequent prioritization of mitigation strategies. The outcomes encapsulate a multitude of environmental and health determinants in a systematic manner, and will assist decision-makers in the development of mitigation measures that minimize potential adverse health effects and enhance positive ones.

  12. Exploring the relationship between the engineering and physical sciences and the health and life sciences by advanced bibliometric methods.

    Directory of Open Access Journals (Sweden)

    Ludo Waltman

    Full Text Available We investigate the extent to which advances in the health and life sciences (HLS are dependent on research in the engineering and physical sciences (EPS, particularly physics, chemistry, mathematics, and engineering. The analysis combines two different bibliometric approaches. The first approach to analyze the 'EPS-HLS interface' is based on term map visualizations of HLS research fields. We consider 16 clinical fields and five life science fields. On the basis of expert judgment, EPS research in these fields is studied by identifying EPS-related terms in the term maps. In the second approach, a large-scale citation-based network analysis is applied to publications from all fields of science. We work with about 22,000 clusters of publications, each representing a topic in the scientific literature. Citation relations are used to identify topics at the EPS-HLS interface. The two approaches complement each other. The advantages of working with textual data compensate for the limitations of working with citation relations and the other way around. An important advantage of working with textual data is in the in-depth qualitative insights it provides. Working with citation relations, on the other hand, yields many relevant quantitative statistics. We find that EPS research contributes to HLS developments mainly in the following five ways: new materials and their properties; chemical methods for analysis and molecular synthesis; imaging of parts of the body as well as of biomaterial surfaces; medical engineering mainly related to imaging, radiation therapy, signal processing technology, and other medical instrumentation; mathematical and statistical methods for data analysis. In our analysis, about 10% of all EPS and HLS publications are classified as being at the EPS-HLS interface. This percentage has remained more or less constant during the past decade.
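
    The citation-based approach above classifies publication clusters as lying at the EPS-HLS interface. A minimal sketch of one plausible criterion, flagging clusters whose EPS share is neither negligible nor dominant, is shown below; the thresholds, data and function name are hypothetical, not the paper's actual interface definition.

    ```python
    from collections import defaultdict

    def interface_clusters(pubs, lo=0.2, hi=0.8):
        """Flag topic clusters whose share of EPS publications lies
        between lo and hi. `pubs` maps a publication id to a
        (cluster, field) pair, with field 'EPS' or 'HLS'."""
        counts = defaultdict(lambda: [0, 0])        # cluster -> [eps, total]
        for cluster, field in pubs.values():
            counts[cluster][0] += field == "EPS"
            counts[cluster][1] += 1
        return sorted(c for c, (eps, tot) in counts.items()
                      if lo <= eps / tot <= hi)

    # Toy data: a mixed imaging cluster, a pure-HLS and a pure-EPS cluster.
    pubs = {
        1: ("imaging", "EPS"), 2: ("imaging", "HLS"), 3: ("imaging", "HLS"),
        4: ("oncology", "HLS"), 5: ("oncology", "HLS"),
        6: ("optics", "EPS"), 7: ("optics", "EPS"),
    }
    print(interface_clusters(pubs))   # → ['imaging']
    ```

    Only the mixed cluster is flagged; pure-EPS and pure-HLS clusters fall outside the interface band.
    
    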

  13. Field experience with advanced methods of on-line monitoring of water chemistry and corrosion degradation in nuclear power stations

    International Nuclear Information System (INIS)

    Stellwag, B.; Aaltonen, P.; Hickling, J.

    1997-01-01

    Advanced methods for on-line, in-situ water chemistry and corrosion monitoring in nuclear power stations have been developed during the past decade. The terms "on-line" and "in-situ" characterize approaches involving continuous measurement of relevant parameters in high temperature water, preferably directly in the systems and components and not in removed samples at room temperature. This paper describes the field experience to date with such methods in terms of three examples: (1) On-line chemistry monitoring of the primary coolant during shutdown of a Type WWER-440 PWR. (2) Redox and corrosion potential measurements in final feedwater preheaters and steam generators of two large KWU PWRs over several cycles of plant operation. (3) Real-time, in-situ corrosion surveillance inside the calandria vault of a CANDU reactor. The way in which water chemistry sensors and corrosion monitoring sensors complement each other is outlined: on-line, in-situ measurement of pH, conductivity and redox potential gives information about the possible corrosivity of the environment. Electrochemical noise techniques display signals of corrosion activity under the actual environmental conditions. A common experience gained from separate use of these different types of sensors has been that new and additional information about plants and their actual process conditions is obtained. Moreover, they reveal the intimate relationship between the operational situation and its consequences for the quality of the working fluid and the corrosion behaviour of the plant materials. On this basis, the efficiency of the existing chemistry sampling and control system can be checked and corrosion degradation can be minimized. Furthermore, activity buildup in the primary circuit can be studied. Further significant advantages can be expected from an integration of these various types of sensors into a common water chemistry and corrosion surveillance system. For confirmation, a complete set of sensors

  14. Advances in Alkenone Paleotemperature Proxies: Analytical Methods, Novel Structures and Haptophyte Species, Biosynthesis, New indices and Ecological Aspects

    Science.gov (United States)

    Huang, Y.; Longo, W. M.; Zheng, Y.; Richter, N.; Dillon, J. T.; Theroux, S.; D'Andrea, W. J.; Toney, J. L.; Wang, L.; Amaral-Zettler, L. A.

    2017-12-01

    Alkenones are mature, well-established paleo-sea surface temperature proxies that have been widely applied for more than three decades. However, recent advances across a broad range of alkenone-related topics at Brown University are inviting new paleoclimate and paleo-environmental applications for these classic biomarkers. In this presentation, I will summarize our progress in the following areas: (1) Discovery of a freshwater alkenone-producing haptophyte species and structural elucidation of novel alkenone structures unique to the species, performing in-situ temperature calibrations, and classifying alkenone-producing haptophytes into three groups based on molecular ecological approaches (with the new species belonging to Group I Isochrysidales); (2) A global survey of Group I haptophyte distributions and environmental conditions favoring the presence of this alga, as well as examples of using Group I alkenones for paleotemperature reconstructions; (3) New gas chromatographic columns that allow unprecedented resolution of alkenones and alkenoates and associated structural isomers, and development of a new suite of paleotemperature and paleoenvironmental proxies; (4) A new liquid chromatographic separation technique that allows efficient cleanup of alkenones and alkenoates (without the need for saponification) for subsequent coelution-free gas chromatographic analysis; (5) Novel structural features revealed by new analytical methods that now allow a comprehensive re-assessment of taxonomic features of various haptophyte species, with principal component analysis capable of fully resolving species biomarker distributions; (6) Development of UK37 double prime (UK37'') for Group II haptophytes (e.g., those occurring in saline lakes and estuaries), that differs from the traditional unsaturation indices used for SST reconstructions; (7) New assessment of how mixed inputs from different alkenone groups may affect SST reconstructions in marginal ocean environments and

  15. Predictive Method for Correct Identification of Archaeological Charred Grape Seeds: Support for Advances in Knowledge of Grape Domestication Process

    Science.gov (United States)

    Ucchesu, Mariano; Orrù, Martino; Grillo, Oscar; Venora, Gianfranco; Paglietti, Giacomo; Ardu, Andrea; Bacchetta, Gianluigi

    2016-01-01

    The identification of archaeological charred grape seeds is a difficult task due to the alteration of seed morphology during charring. In archaeobotanical studies, the correct discrimination between Vitis vinifera subsp. sylvestris and Vitis vinifera subsp. vinifera grape seeds is very important for understanding the history and origin of the domesticated grapevine. In this work, different carbonisation experiments were carried out using a hearth to reproduce the burning conditions that occurred in archaeological contexts. In addition, several carbonisation trials on modern wild and cultivated grape seeds were performed using a muffle furnace. For comparison with archaeological materials, modern charred grape seed samples were obtained at seven different carbonisation temperatures ranging between 180 and 340°C, for 120 min. By analysing grape seed size and shape with computer vision techniques and applying the stepwise linear discriminant analysis (LDA) method, discrimination of wild from cultivated charred grape seeds was possible. An overall correct classification of 93.3% was achieved. Applying the same statistical procedure to compare modern charred seeds with archaeological grape seeds found in Sardinia and dating back to the Early Bronze Age (2017–1751 2σ cal. BC) allowed 75.0% of the cases to be identified as wild grape. The proposed method proved to be a useful and effective procedure for identifying, with high accuracy, charred grape seeds found in archaeological sites. Moreover, it may be considered valid support for advances in the knowledge and comprehension of viticulture adoption and the grape domestication process. The same methodology may also be successful when applied to other plant remains, and provide important information about the history of domesticated plants. PMID:26901361
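
    The discrimination step above is classic two-class linear discriminant analysis on morphometric features. The sketch below uses synthetic stand-in features (length and width in mm, with assumed means and spreads), not the study's measurements, and a plain Fisher discriminant rather than the stepwise variant.

    ```python
    import numpy as np

    def fisher_direction(X0, X1):
        """Two-class Fisher discriminant direction w = Sw^-1 (m1 - m0)."""
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
              + np.cov(X1, rowvar=False) * (len(X1) - 1))
        return np.linalg.solve(Sw, m1 - m0)

    rng = np.random.default_rng(0)
    # Hypothetical morphometrics: wild seeds drawn smaller than cultivated.
    wild = rng.normal([4.5, 3.0], 0.3, size=(100, 2))
    cult = rng.normal([5.5, 3.2], 0.3, size=(100, 2))

    w = fisher_direction(wild, cult)
    threshold = ((wild @ w).mean() + (cult @ w).mean()) / 2
    scores = np.concatenate([wild @ w, cult @ w])
    truth = np.array([False] * 100 + [True] * 100)   # True = cultivated
    accuracy = ((scores > threshold) == truth).mean()
    print(f"correct classification: {accuracy:.1%}")
    ```

    Projecting each seed's features onto w and thresholding at the midpoint of the class means yields a correct-classification rate in the same spirit as the study's 93.3% figure, though on made-up data.
    
    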

  16. Assessment of Crack Detection in Heavy-Walled Cast Stainless Steel Piping Welds Using Advanced Low-Frequency Ultrasonic Methods

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Michael T.; Crawford, Susan L.; Cumblidge, Stephen E.; Denslow, Kayte M.; Diaz, Aaron A.; Doctor, Steven R.

    2007-03-01

    Studies conducted at the Pacific Northwest National Laboratory in Richland, Washington, have focused on assessing the effectiveness and reliability of novel approaches to nondestructive examination (NDE) for inspecting coarse-grained, cast stainless steel reactor components. The primary objective of this work is to provide information to the U.S. Nuclear Regulatory Commission on the effectiveness and reliability of advanced NDE methods as related to the inservice inspection of safety-related components in pressurized water reactors (PWRs). This report provides progress, recent developments, and results from an assessment of low frequency ultrasonic testing (UT) for detection of inside surface-breaking cracks in cast stainless steel reactor piping weldments as applied from the outside surface of the components. Vintage centrifugally cast stainless steel piping segments were examined to assess the capability of low-frequency UT to adequately penetrate challenging microstructures and determine acoustic propagation limitations or conditions that may interfere with reliable flaw detection. In addition, welded specimens containing mechanical and thermal fatigue cracks were examined. The specimens were fabricated using vintage centrifugally cast and statically cast stainless steel materials, which are typical of configurations installed in PWR primary coolant circuits. Ultrasonic studies on the vintage centrifugally cast stainless steel piping segments were conducted with a 400-kHz synthetic aperture focusing technique and phased array technology applied at 500 kHz, 750 kHz, and 1.0 MHz. Flaw detection and characterization on the welded specimens was performed with the phased array method operating at the frequencies stated above. This report documents the methodologies used and provides results from laboratory studies to assess baseline material noise, crack detection, and length-sizing capability for low-frequency UT in cast stainless steel piping.

  17. Field experience with advanced methods of on-line monitoring of water chemistry and corrosion degradation in nuclear power stations

    Energy Technology Data Exchange (ETDEWEB)

    Stellwag, B [Siemens AG Unternehmensbereich KWU, Erlangen (Germany); Aaltonen, P [Technical Research Centre of Finland, Espoo (Finland); Hickling, J [CML GmbH, Erlangen (Germany)

    1997-02-01

    Advanced methods for on-line, in-situ water chemistry and corrosion monitoring in nuclear power stations have been developed during the past decade. The terms "on-line" and "in-situ" characterize approaches involving continuous measurement of relevant parameters in high temperature water, preferably directly in the systems and components and not in removed samples at room temperature. This paper describes the field experience to date with such methods in terms of three examples: (1) On-line chemistry monitoring of the primary coolant during shutdown of a Type WWER-440 PWR. (2) Redox and corrosion potential measurements in final feedwater preheaters and steam generators of two large KWU PWRs over several cycles of plant operation. (3) Real-time, in-situ corrosion surveillance inside the calandria vault of a CANDU reactor. The way in which water chemistry sensors and corrosion monitoring sensors complement each other is outlined: on-line, in-situ measurement of pH, conductivity and redox potential gives information about the possible corrosivity of the environment. Electrochemical noise techniques display signals of corrosion activity under the actual environmental conditions. A common experience gained from separate use of these different types of sensors has been that new and additional information about plants and their actual process conditions is obtained. Moreover, they reveal the intimate relationship between the operational situation and its consequences for the quality of the working fluid and the corrosion behaviour of the plant materials. On this basis, the efficiency of the existing chemistry sampling and control system can be checked and corrosion degradation can be minimized. Furthermore, activity buildup in the primary circuit can be studied. Further significant advantages can be expected from an integration of these various types of sensors into a common water chemistry and corrosion surveillance system. (Abstract Truncated)

  18. Exploring the relationship between the engineering and physical sciences and the health and life sciences by advanced bibliometric methods.

    Science.gov (United States)

    Waltman, Ludo; van Raan, Anthony F J; Smart, Sue

    2014-01-01

    We investigate the extent to which advances in the health and life sciences (HLS) are dependent on research in the engineering and physical sciences (EPS), particularly physics, chemistry, mathematics, and engineering. The analysis combines two different bibliometric approaches. The first approach to analyze the 'EPS-HLS interface' is based on term map visualizations of HLS research fields. We consider 16 clinical fields and five life science fields. On the basis of expert judgment, EPS research in these fields is studied by identifying EPS-related terms in the term maps. In the second approach, a large-scale citation-based network analysis is applied to publications from all fields of science. We work with about 22,000 clusters of publications, each representing a topic in the scientific literature. Citation relations are used to identify topics at the EPS-HLS interface. The two approaches complement each other. The advantages of working with textual data compensate for the limitations of working with citation relations and the other way around. An important advantage of working with textual data is in the in-depth qualitative insights it provides. Working with citation relations, on the other hand, yields many relevant quantitative statistics. We find that EPS research contributes to HLS developments mainly in the following five ways: new materials and their properties; chemical methods for analysis and molecular synthesis; imaging of parts of the body as well as of biomaterial surfaces; medical engineering mainly related to imaging, radiation therapy, signal processing technology, and other medical instrumentation; mathematical and statistical methods for data analysis. In our analysis, about 10% of all EPS and HLS publications are classified as being at the EPS-HLS interface. This percentage has remained more or less constant during the past decade.

  19. Contribution of developing advanced engineering methods in interdisciplinary studying the piston rings from 1.6 spark ignited Ford engine at Technical University of Cluj-Napoca

    Science.gov (United States)

    Cherecheş, Ioan-Aurel; Borzan, Adela-Ioana; Băldean, Doru-Laurean

    2017-10-01

    Study of the construction and wear processes of piston rings and other significant components of internal combustion engines regularly leads to creative and useful optimization ideas, in both the design and manufacturing phases. The main objective of the present paper is to carry out interdisciplinary research using advanced methods for the evaluation of the piston rings of a common road vehicle, the Ford Focus FYDD. The specific objectives are a theoretical study of advanced analysis methods for piston-ring evaluation and applied research conducted at the Technical University of Cluj-Napoca on a motor vehicle undergoing repair.

  20. Cost-Benefit Analysis for the Advanced Near Net Shape Technology (ANNST) Method for Fabricating Stiffened Cylinders

    Science.gov (United States)

    Ivanco, Marie L.; Domack, Marcia S.; Stoner, Mary Cecilia; Hehir, Austin R.

    2016-01-01

    Low Technology Readiness Levels (TRLs) and high levels of uncertainty make it challenging to develop cost estimates of new technologies in the R&D phase. It is nevertheless essential for NASA to understand the costs and benefits associated with novel concepts in order to prioritize research investments and evaluate the potential for technology transfer and commercialization. This paper proposes a framework for performing a cost-benefit analysis of a technology in the R&D phase. The framework was developed and used to assess the Advanced Near Net Shape Technology (ANNST) manufacturing process for fabricating integrally stiffened cylinders. The ANNST method was compared with the conventional multi-piece metallic construction and composite processes for fabricating integrally stiffened cylinders. Following the definition of a case study for a cryogenic tank cylinder of specified geometry, data were gathered through interviews with Subject Matter Experts (SMEs), with particular focus placed on production costs and process complexity. These data served as the basis for process flowcharts and timelines, mass estimates, and rough order-of-magnitude cost and schedule estimates. The scalability of the results was subsequently investigated to understand their variability with tank size. Lastly, once costs and benefits were identified, the Analytic Hierarchy Process (AHP) was used to assess the relative value of the achieved benefits for potential stakeholders. These preliminary, rough order-of-magnitude results predict a 46 to 58 percent reduction in production costs and a 7 percent reduction in weight over the conventional metallic manufacturing technique used in this study for comparison. Compared with the composite manufacturing technique, the results predict cost savings of 35 to 58 percent; however, the ANNST concept was heavier. In this study, the investment in equipment required for the ANNST method was predicted to be recouped after the production of ten cryogenic tank barrels.
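    The AHP step mentioned in the abstract above can be sketched in a few lines: candidate benefits are compared pairwise on Saaty's 1-9 scale, and their relative weights are taken from the principal eigenvector of the comparison matrix. The matrix and criteria below are purely hypothetical illustrations; the paper's actual stakeholder judgments are not reproduced here.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three benefits
# (cost reduction, weight reduction, schedule), on Saaty's 1-9 scale.
# A[i, j] states how much more important benefit i is than benefit j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal eigenvector of A gives the relative weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI / RI, with Saaty's random index RI = 0.58 for n = 3.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
print(w, cr)  # a consistency ratio below 0.1 is conventionally acceptable
```

    In a full AHP study the same eigenvector computation is repeated for each level of the criteria hierarchy and the weights are composed down the tree.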

  1. Interactive 3D imaging technologies: application in advanced methods of jaw bone reconstruction using stem cells/pre-osteoblasts in oral surgery.

    Science.gov (United States)

    Wojtowicz, Andrzej; Jodko, Monika; Perek, Jan; Popowski, Wojciech

    2014-09-01

    Cone beam computed tomography has brought about a revolution in maxillofacial imaging, facilitating the transition of diagnosis from 2D to 3D and expanding the role of imaging from diagnosis to actual treatment planning. Many varieties of cone beam computed tomography-related software are available, from basic DICOM viewers to very advanced planning modules, such as InVivo Anatomage and SimPlant (Materialise Dental). Using these programs, scans can be processed into a high-quality three-dimensional simulation that enables planning of the overall treatment. In this article, methods of visualization are demonstrated and compared using the example of two cases of reconstruction of advanced jaw bone defects by tissue engineering. Advanced imaging methods allow one to plan a minimally invasive treatment, including assessment of the bone defect's shape and localization, planning of the surgical approach, and individual graft preparation.

  2. Grid-Free LES 3D Vortex Method for the Simulation of Turbulent Flows Over Advanced Lifting Surfaces, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Turbulent flows associated with advanced aerodynamic designs represent a considerable challenge for accurate prediction. For example, the flow past low-speed wings...

  3. ADvanced IMage Algebra (ADIMA): a novel method for depicting multiple sclerosis lesion heterogeneity, as demonstrated by quantitative MRI.

    Science.gov (United States)

    Yiannakas, Marios C; Tozer, Daniel J; Schmierer, Klaus; Chard, Declan T; Anderson, Valerie M; Altmann, Daniel R; Miller, David H; Wheeler-Kingshott, Claudia A M

    2013-05-01

    There are modest correlations between multiple sclerosis (MS) disability and white matter lesion (WML) volumes, as measured by T2-weighted (T2w) magnetic resonance imaging (MRI) scans (T2-WML). This may partly reflect pathological heterogeneity in WMLs, which is not apparent on T2w scans. To determine if ADvanced IMage Algebra (ADIMA), a novel MRI post-processing method, can reveal WML heterogeneity from proton-density weighted (PDw) and T2w images. We obtained conventional PDw and T2w images from 10 patients with relapsing-remitting MS (RRMS) and ADIMA images were calculated from these. We classified all WML into bright (ADIMA-b) and dark (ADIMA-d) sub-regions, which were segmented. We obtained conventional T2-WML and T1-WML volumes for comparison, as well as the following quantitative magnetic resonance parameters: magnetisation transfer ratio (MTR), T1 and T2. Also, we assessed the reproducibility of the segmentation for ADIMA-b, ADIMA-d and T2-WML. Our study's ADIMA-derived volumes correlated with conventional lesion volumes (p < 0.05). ADIMA-b exhibited higher T1 and T2, and lower MTR than the T2-WML (p < 0.001). Despite the similarity in T1 values between ADIMA-b and T1-WML, these regions were only partly overlapping with each other. ADIMA-d exhibited quantitative characteristics similar to T2-WML; however, they were only partly overlapping. Mean intra- and inter-observer coefficients of variation for ADIMA-b, ADIMA-d and T2-WML volumes were all < 6 % and < 10 %, respectively. ADIMA enabled the simple classification of WML into two groups having different quantitative magnetic resonance properties, which can be reproducibly distinguished.

  4. Reactor physics methods, models, and applications used to support the conceptual design of the Advanced Neutron Source

    International Nuclear Information System (INIS)

    Gehin, J.C.; Worley, B.A.; Renier, J.P.; Wemple, C.A.; Jahshan, S.N.; Ryskamp, J.M.

    1995-08-01

    This report summarizes the neutronics analysis performed during 1991 and 1992 in support of characterization of the conceptual design of the Advanced Neutron Source (ANS). The methods used in the analysis, parametric studies, and key results supporting the design and safety evaluations of the conceptual design are presented. The analysis approach used during the conceptual design phase followed the same approach used in early ANS evaluations: (1) a strong reliance on Monte Carlo theory for beginning-of-cycle reactor performance calculations and (2) a reliance on few-group diffusion theory for reactor fuel cycle analysis and for evaluation of reactor performance at specific time steps over the fuel cycle. The Monte Carlo analysis was carried out using the MCNP continuous-energy code, and the few-group diffusion theory calculations were performed using the VENTURE and PDQ code systems. The MCNP code was used primarily for its capability to model the reflector components in realistic geometries as well as the inherent circumvention of cross-section processing requirements and use of energy-collapsed cross sections. The MCNP code was used for evaluations of reflector component reactivity effects and of heat loads in these components. The code was also used as a benchmark comparison against the diffusion-theory estimates of key reactor parameters such as region fluxes, control rod worths, reactivity coefficients, and material worths. The VENTURE and PDQ codes were used to provide independent evaluations of burnup effects, power distributions, and small perturbation worths. The performance and safety calculations performed over the subject time period are summarized, and key results are provided. The key results include flux and power distributions over the fuel cycle, silicon production rates, fuel burnup rates, component reactivities, control rod worths, component heat loads, shutdown reactivity margins, reactivity coefficients, and isotope production rates.

  5. Enhancing resolution and contrast in second-harmonic generation microscopy using an advanced maximum likelihood estimation restoration method

    Science.gov (United States)

    Sivaguru, Mayandi; Kabir, Mohammad M.; Gartia, Manas Ranjan; Biggs, David S. C.; Sivaguru, Barghav S.; Sivaguru, Vignesh A.; Berent, Zachary T.; Wagoner Johnson, Amy J.; Fried, Glenn A.; Liu, Gang Logan; Sadayappan, Sakthivel; Toussaint, Kimani C.

    2017-02-01

    Second-harmonic generation (SHG) microscopy is a label-free imaging technique to study collagenous materials in extracellular matrix environment with high resolution and contrast. However, like many other microscopy techniques, the actual spatial resolution achievable by SHG microscopy is reduced by out-of-focus blur and optical aberrations that degrade particularly the amplitude of the detectable higher spatial frequencies. Because SHG is a two-photon scattering process, it is challenging to define a point spread function (PSF) for the SHG imaging modality. As a result, in comparison with other two-photon imaging systems such as two-photon fluorescence, it is difficult to apply any PSF-engineering techniques to enhance the experimental spatial resolution closer to the diffraction limit. Here, we present a method to improve the spatial resolution in SHG microscopy using an advanced maximum likelihood estimation (AdvMLE) algorithm to recover the otherwise degraded higher spatial frequencies in an SHG image. Through adaptation and iteration, the AdvMLE algorithm calculates an improved PSF for an SHG image and enhances the spatial resolution by decreasing the full-width-at-half-maximum (FWHM) by 20%. Similar results are consistently observed for samples with varying SHG sources, such as gold nanoparticles and collagen in porcine feet tendons. By obtaining an experimental transverse spatial resolution of 400 nm, we show that the AdvMLE algorithm brings the practical spatial resolution closer to the theoretical diffraction limit. Our approach is suitable for adaptation in micro-nano CT and MRI imaging, which has the potential to impact diagnosis and treatment of human diseases.
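    The kind of iterative restoration described above can be illustrated with the classic Richardson-Lucy update, the standard maximum-likelihood deconvolution scheme for Poisson-noise imaging on which such algorithms build. This 1D sketch assumes a known Gaussian PSF and noiseless data; the AdvMLE algorithm itself additionally adapts the PSF between iterations, which is not reproduced here.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Classic Richardson-Lucy MLE deconvolution (1D, illustrative only)."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # guard against /0
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: two close point sources blurred by a Gaussian PSF.
x = np.arange(64, dtype=float)
truth = np.zeros(64)
truth[[28, 36]] = 1.0
psf = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
psf /= psf.sum()
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

    Each iteration re-blurs the current estimate, compares it with the data, and back-projects the ratio through the mirrored PSF; the multiplicative update preserves positivity and sharpens the two peaks.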

  6. A comparative study of two digestion methods employed for the determination of boron in ferroboron used as an advanced shielding material

    International Nuclear Information System (INIS)

    Kamble, Granthali S.; Manisha, V.; Venkatesh, K.

    2015-01-01

    Shielding of the nuclear reactor core is an important requirement of fast reactors. An important objective of future Fast Breeder Reactors (FBRs) is to reduce the volume of shields. A large number of materials have been considered for reducing the neutron flux to acceptable levels. A shield material that brings down the energy of neutrons by elastic and inelastic scattering along with absorption will be more effective. Ferroboron is identified as one of the advanced shielding materials considered for use in the future FBRs planned to be constructed in India. Ferroboron is an economical and indigenously available material which qualifies as a promising shield material through literature survey and scoping calculations. Experiments have been conducted in the KAMINI reactor to understand the effectiveness of ferroboron as an in-core shield material for future FBRs. The ferroboron used in these experiments contained 11.8% and 15% boron. Precise determination of the boron content in these ferroboron samples is very important for determining its effectiveness as a shield material. In this work a comparative study was carried out to determine the boron content in ferroboron samples. In the first method the sample was treated with incremental amounts of nitric acid under reflux (to prevent vigorous reaction and volatilisation of boron). The solution was gradually heated and then filtered through Whatman filter paper no. 41. The undissolved ferroboron residue collected on the filter paper was transferred to a platinum crucible, mixed with sodium carbonate, and ashed. The crucible was placed over a burner for 1 h to fuse the contents. The fused mass was leached in dilute hydrochloric acid, added to the nitric acid filtrate, and made up to a pre-determined volume.

  7. Advanced Instrumentation and Control Methods for Small and Medium Reactors with IRIS Demonstration. Final Report. Volume 1

    International Nuclear Information System (INIS)

    Hines, J. Wesley; Upadhyaya, Belle R.; Doster, J. Michael; Edwards, Robert M.; Lewis, Kenneth D.; Turinsky, Paul; Coble, Jamie

    2011-01-01

    Development and deployment of small-scale nuclear power reactors and their maintenance, monitoring, and control are part of the mission under the Small Modular Reactor (SMR) program. The objectives of this NERI-consortium research project are to investigate, develop, and validate advanced methods for sensing, controlling, monitoring, diagnosis, and prognosis of these reactors, and to demonstrate the methods with application to one of the proposed integral pressurized water reactors (IPWR). For this project, the IPWR design by Westinghouse, the International Reactor Secure and Innovative (IRIS), has been used to demonstrate the techniques developed under this project. The research focuses on three topical areas with the following objectives. Objective 1 - Develop and apply simulation capabilities and sensitivity/uncertainty analysis methods to address sensor deployment analysis and small grid stability issues. Objective 2 - Develop and test an autonomous and fault-tolerant control architecture and apply to the IRIS system and an experimental flow control loop, with extensions to multiple reactor modules, nuclear desalination, and optimal sensor placement strategy. Objective 3 - Develop and test an integrated monitoring, diagnosis, and prognosis system for SMRs using the IRIS as a test platform, and integrate process and equipment monitoring (PEM) and process and equipment prognostics (PEP) toolboxes. The research tasks are focused on meeting the unique needs of reactors that may be deployed to remote locations or to developing countries with limited support infrastructure. These applications will require smaller, robust reactor designs with advanced technologies for sensors, instrumentation, and control. An excellent overview of SMRs is described in an article by Ingersoll (2009). The article refers to these as deliberately small reactors. Most of these have modular characteristics, with multiple units deployed at the same plant site. Additionally, the topics focus

  8. Motivations, aims and communication around advance directives: a mixed-methods study into the perspective of their owners and the influence of a current illness

    NARCIS (Netherlands)

    van Wijmen, M.P.S.; Pasman, H.R.W.; Widdershoven, G.A.M.; Onwuteaka-Philipsen, B.D.

    2014-01-01

    Objective: What are motivations of owners of an advance directive (AD) to draft an AD, what do they aim for with their AD and do they communicate about their AD? Methods: Written questionnaires were sent to a cohort of people owning different types of ADs (n= 5768). A purposive sample of people

  9. "It's the Method, Stupid." Interrelations between Methodological and Theoretical Advances: The Example of Comparing Higher Education Systems Internationally

    Science.gov (United States)

    Hoelscher, Michael

    2017-01-01

    This article argues that strong interrelations between methodological and theoretical advances exist. Progress in, especially comparative, methods may have important impacts on theory evaluation. By using the example of the "Varieties of Capitalism" approach and an international comparison of higher education systems, it can be shown…

  10. Comparison of best estimate methods for judging design margins of advanced water-cooled reactors. Proceedings of an IAEA technical committee meeting. Working material

    International Nuclear Information System (INIS)

    1994-01-01

    The objectives of the Technical Committee Meeting on the Significance of Design and Operational Margins for Advanced Water Cooled Reactor Systems were: to provide an international forum for the presentation and discussion of recent results on best estimate methods for judging the design margins of such reactors; to identify and describe the technical features of best estimate methods for predicting margins; and to provide input for a status report comparing best estimate methods for assessing margins in different countries and organisations. Participants from thirteen countries presented fifteen papers describing their methods, the state of the art, and their experiences. Each paper is represented here by a separate abstract.

  11. Comparison of best estimate methods for judging design margins of advanced water-cooled reactors. Proceedings of an IAEA technical committee meeting. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-31

    The objectives of the Technical Committee Meeting on the Significance of Design and Operational Margins for Advanced Water Cooled Reactor Systems were: to provide an international forum for the presentation and discussion of recent results on best estimate methods for judging the design margins of such reactors; to identify and describe the technical features of best estimate methods for predicting margins; and to provide input for a status report comparing best estimate methods for assessing margins in different countries and organisations. Participants from thirteen countries presented fifteen papers describing their methods, the state of the art, and their experiences. Each paper is represented here by a separate abstract. Refs, figs, tabs.

  12. Exploring the relationship between the engineering and physical sciences and the health and life sciences by advanced bibliometric methods

    NARCIS (Netherlands)

    Waltman, L.R.; Van, Raan A.F.J.; Smart, S.

    2014-01-01

    We investigate the extent to which advances in the health and life sciences (HLS) are dependent on research in the engineering and physical sciences (EPS), particularly physics, chemistry, mathematics, and engineering. The analysis combines two different bibliometric approaches. The first approach

  13. Effect of tillage on water advance and distribution under surge and continuous furrow irrigation methods for cotton in Egypt

    NARCIS (Netherlands)

    Ismail, S.M.

    2006-01-01

    A field experiment was carried out to assess the effect of tillage on water advance and water distribution in the root zone area (0.5 m) under continuous and surge flow irrigation in a cotton field. The experiment was conducted at the Agriculture Experimental Station, Assiut University, Assiut,

  14. Advancing Clouds Lifecycle Representation in Numerical Models Using Innovative Analysis Methods that Bridge ARM Observations and Models Over a Breadth of Scales

    Energy Technology Data Exchange (ETDEWEB)

    Kollias, Pavlos [McGill Univ., Montreal, QC (Canada)]

    2016-09-06

    This is the final report for DE-SC0007096, Advancing Clouds Lifecycle Representation in Numerical Models Using Innovative Analysis Methods that Bridge ARM Observations and Models Over a Breadth of Scales (PI: Pavlos Kollias). The report outlines the main findings of the research conducted under this award in the area of cloud research, from the cloud scale (10-100 m) to the mesoscale (20-50 km).

  15. Advanced computational methods for the assessment of reactor core behaviour during reactivity initiated accidents. Final report; Fortschrittliche Rechenmethoden zum Kernverhalten bei Reaktivitaetsstoerfaellen. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Pautz, A.; Perin, Y.; Pasichnyk, I.; Velkov, K.; Zwermann, W.; Seubert, A.; Klein, M.; Gallner, L.; Krzycacz-Hausmann, B.

    2012-05-15

    The document at hand serves as the final report for the reactor safety research project RS1183 ''Advanced Computational Methods for the Assessment of Reactor Core Behavior During Reactivity-Initiated Accidents''. The work performed in the framework of this project was dedicated to the development, validation and application of advanced computational methods for the simulation of transients and accidents of nuclear installations. These simulation tools describe in particular the behavior of the reactor core (with respect to neutronics, thermal-hydraulics and thermal mechanics) at a very high level of detail. The overall goal of this project was the deployment of a modern nuclear computational chain which provides, besides advanced 3D tools for coupled neutronics/ thermal-hydraulics full core calculations, also appropriate tools for the generation of multi-group cross sections and Monte Carlo models for the verification of the individual calculational steps. This computational chain shall primarily be deployed for light water reactors (LWR), but should beyond that also be applicable for innovative reactor concepts. Thus, validation on computational benchmarks and critical experiments was of paramount importance. Finally, appropriate methods for uncertainty and sensitivity analysis were to be integrated into the computational framework, in order to assess and quantify the uncertainties due to insufficient knowledge of data, as well as due to methodological aspects.

  16. An advanced revised universal slope method for low cycle fatigue evaluation of elbow piping subjected to in-plane cyclic bending displacement

    International Nuclear Information System (INIS)

    Urabe, Yoshio

    2015-01-01

    In order to rationalize the low cycle fatigue evaluation of elbow piping subjected to in-plane cyclic bending displacement, an advanced revised universal slope method is proposed. In the proposed method, the coefficient of the first term of the fatigue life equation, which resembles Manson's equation, is expressed by parameters of the multi-axial degree, the tensile strength and the fracture strength. The coefficient of the second term is expressed by the multi-axial degree, the fracture ductility and the minimum fracture ductility under the maximum multi-axial degree. Here, the equivalent strain range is used for the fatigue life estimation. Previously obtained pipe elbow test data were reanalyzed using the proposed method. As a result, the experimentally obtained fatigue lives agreed well with the fatigue lives predicted by the proposed method. Application of the proposed method is also discussed. (author)
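    A universal-slope relation of the kind referred to above expresses the strain range as the sum of an elastic and a plastic power-law term in the number of cycles, and the fatigue life is obtained by inverting that relation numerically. The coefficients below are illustrative placeholders only; in the paper they are functions of the multi-axial degree and the strength and ductility parameters, which are not reproduced here.

```python
def fatigue_life(strain_range, A=0.007, a=-0.12, B=0.66, b=-0.6):
    """Solve a Manson-type universal-slope relation for cycles to failure N:

        strain_range = A * N**a + B * N**b

    A, a, B, b are hypothetical placeholder coefficients. The right-hand
    side decreases monotonically with N, so bisection on log10(N) works.
    """
    f = lambda logn: A * 10**(a * logn) + B * 10**(b * logn) - strain_range
    lo, hi = 0.0, 9.0          # search between 1 and 1e9 cycles
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:          # predicted strain still too high -> life is longer
            lo = mid
        else:
            hi = mid
    return 10 ** (0.5 * (lo + hi))
```

    A smaller applied strain range yields a longer predicted life, as the two power-law terms require.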

  17. A Takagi-Sugeno fuzzy power-distribution method for a prototypical advanced reactor considering pump degradation

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Yue [Institute of Nuclear and New Energy Technology, Tsinghua University, Collaborative Innovation Center of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Beijing (China); Coble, Jamie [Dept. of Nuclear Engineering, University of Tennessee, Knoxville (United States)

    2017-08-15

    Advanced reactor designs often feature longer operating cycles between refueling and new concepts of operation beyond traditional baseload electricity production. Owing to this increased complexity, traditional proportional–integral control may not be sufficient across all potential operating regimes. The prototypical advanced reactor (PAR) design features two independent reactor modules, each connected to a single dedicated steam generator that feeds a common balance of plant for electricity generation and process heat applications. In the current research, the PAR is expected to operate in a load-following manner to produce electricity to meet grid demand over a 24-hour period. Over the operational lifetime of the PAR system, primary and intermediate sodium pumps are expected to degrade in performance. The independent operation of the two reactor modules in the PAR may allow the system to continue operating under degraded pump performance by shifting the power production between reactor modules in order to meet overall load demands. This paper proposes a Takagi–Sugeno (T–S) fuzzy logic-based power distribution system. Two T–S fuzzy power distribution controllers have been designed and tested. Simulation shows that the devised T–S fuzzy controllers provide improved performance over traditional controls during daily load-following operation under different levels of pump degradation.

  18. A Takagi–Sugeno fuzzy power-distribution method for a prototypical advanced reactor considering pump degradation

    Directory of Open Access Journals (Sweden)

    Yue Yuan

    2017-08-01

    Advanced reactor designs often feature longer operating cycles between refueling and new concepts of operation beyond traditional baseload electricity production. Owing to this increased complexity, traditional proportional–integral control may not be sufficient across all potential operating regimes. The prototypical advanced reactor (PAR) design features two independent reactor modules, each connected to a single dedicated steam generator that feeds a common balance of plant for electricity generation and process heat applications. In the current research, the PAR is expected to operate in a load-following manner to produce electricity to meet grid demand over a 24-hour period. Over the operational lifetime of the PAR system, primary and intermediate sodium pumps are expected to degrade in performance. The independent operation of the two reactor modules in the PAR may allow the system to continue operating under degraded pump performance by shifting the power production between reactor modules in order to meet overall load demands. This paper proposes a Takagi–Sugeno (T–S) fuzzy logic-based power distribution system. Two T–S fuzzy power distribution controllers have been designed and tested. Simulation shows that the devised T–S fuzzy controllers provide improved performance over traditional controls during daily load-following operation under different levels of pump degradation.
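    The core of a T-S fuzzy controller like the one described above is a rule base whose linear consequents are blended by membership-weighted averaging. The rule base, membership functions, and gains below are purely hypothetical stand-ins for the paper's tuned controllers; they only illustrate the mechanism of shifting load away from a degraded module.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def ts_power_split(degradation):
    """Illustrative T-S blend: fraction of total load assigned to module 1,
    given module 1's pump degradation in [0, 1]. Not the paper's controller."""
    # Rule antecedents: degradation is LOW / MEDIUM / HIGH.
    mu = np.array([
        tri(degradation, -0.5, 0.0, 0.5),   # LOW
        tri(degradation, 0.0, 0.5, 1.0),    # MEDIUM
        tri(degradation, 0.5, 1.0, 1.5),    # HIGH
    ])
    # Linear consequents f_i(x) = p_i * x + q_i with hypothetical gains.
    f = np.array([
        0.5 - 0.1 * degradation,   # LOW: near-even split
        0.5 - 0.3 * degradation,   # MEDIUM: shed load moderately
        0.6 - 0.5 * degradation,   # HIGH: shift strongly to module 2
    ])
    # T-S defuzzification: firing-strength-weighted average of consequents.
    return float(np.dot(mu, f) / mu.sum())
```

    With no degradation the split is even (0.5); as degradation grows, the weighted blend smoothly hands load over to the healthy module rather than switching abruptly between rules.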

  19. Advancing methods for research on household water insecurity: Studying entitlements and capabilities, socio-cultural dynamics, and political processes, institutions and governance.

    Science.gov (United States)

    Wutich, Amber; Budds, Jessica; Eichelberger, Laura; Geere, Jo; Harris, Leila; Horney, Jennifer; Jepson, Wendy; Norman, Emma; O'Reilly, Kathleen; Pearson, Amber; Shah, Sameer; Shinn, Jamie; Simpson, Karen; Staddon, Chad; Stoler, Justin; Teodoro, Manuel P; Young, Sera

    2017-11-01

    Household water insecurity has serious implications for the health, livelihoods and wellbeing of people around the world. Existing methods to assess the state of household water insecurity focus largely on water quality, quantity or adequacy, source or reliability, and affordability. These methods have significant advantages in terms of their simplicity and comparability, but are widely recognized to oversimplify and underestimate the global burden of household water insecurity. In contrast, a broader definition of household water insecurity should include entitlements and human capabilities, sociocultural dynamics, and political institutions and processes. This paper proposes a mix of qualitative and quantitative methods that can be widely adopted across cultural, geographic, and demographic contexts to assess hard-to-measure dimensions of household water insecurity. In doing so, it critically evaluates existing methods for assessing household water insecurity and suggests ways in which methodological innovations advance a broader definition of household water insecurity.

  20. A Fast Numerical Method for the Calculation of the Equilibrium Isotopic Composition of a Transmutation System in an Advanced Fuel Cycle

    Directory of Open Access Journals (Sweden)

    F. Álvarez-Velarde

    2012-01-01

    A fast numerical method for calculating, in a zero-dimensional approach, the equilibrium isotopic composition of an iteratively used transmutation system in an advanced fuel cycle, based on the Banach fixed point theorem, is described in this paper. The method divides the fuel cycle into successive stages: fuel fabrication, storage, irradiation inside the transmutation system, cooling, reprocessing, and incorporation of the external material into the new fresh fuel. The change of the fuel isotopic composition, represented by an isotope vector, is described in a matrix formulation. The resulting matrix equations are solved using direct methods with arbitrary precision arithmetic. The method has been successfully applied to a double-strata fuel cycle with light water reactors and accelerator-driven subcritical systems. After comparison with the results of the EVOLCODE 2.0 burn-up code, the observed differences are about a few percent in the mass estimations of the main actinides.
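    The fixed-point formulation described above can be sketched as follows: if one pass through the cycle maps the isotope vector x to Tx + s, with T a contraction (spectral radius below one) and s the external feed, the Banach theorem guarantees that simple iteration converges to the unique equilibrium composition. The 3x3 matrix and feed vector here are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative 3-isotope cycle: one pass through fabrication, irradiation,
# cooling, and reprocessing is lumped into a single transition matrix T
# (spectral radius < 1, so the cycle map is a contraction), plus a vector s
# of external feed material added to each fresh fuel load.
T = np.array([
    [0.60, 0.05, 0.00],
    [0.20, 0.55, 0.10],
    [0.05, 0.15, 0.50],
])
s = np.array([1.0, 0.2, 0.1])   # hypothetical feed, arbitrary units

# Banach fixed-point iteration: x <- T x + s converges to the
# equilibrium isotope vector x* satisfying x* = T x* + s.
x = np.zeros(3)
for _ in range(500):
    x_new = T @ x + s
    if np.max(np.abs(x_new - x)) < 1e-12:
        x = x_new
        break
    x = x_new

# The fixed point agrees with the direct solution of (I - T) x* = s.
x_direct = np.linalg.solve(np.eye(3) - T, s)
```

    The paper's method works the same way at the level of full burn-up matrices, using direct solves with arbitrary-precision arithmetic at each stage.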

  1. Experimental tests and qualification of analytical methods to address thermohydraulic phenomena in advanced water cooled reactors. Proceedings of a technical committee meeting

    International Nuclear Information System (INIS)

    2000-05-01

    Worldwide there is considerable experience in nuclear power technology, especially in water cooled reactor technology. Of the operating plants, in September 1998, 346 were light water reactors (LWRs) totalling 306 GW(e) and 29 were heavy water reactors (HWRs) totalling 15 GW(e). The accumulated experience and lessons learned from these plants are being incorporated into new advanced reactor designs. Utility requirements documents have been formulated to guide these design activities by incorporating this experience, and results from research and development programmes, with the aim of reducing costs and licensing uncertainties by establishing the technical bases for the new designs. Common goals for advanced designs are high availability, user-friendly features, competitive economics and compliance with internationally recognized safety objectives. Large water cooled reactors with power outputs of 1300 MW(e) and above, which possess inherent safety characteristics (e.g. negative Doppler moderator temperature coefficients, and negative moderator void coefficient) and incorporate proven, active engineered systems to accomplish safety functions are being developed. Other designs with power outputs from, for example, 220 MW(e) up to about 1300 MW(e) which also possess inherent safety characteristics and which place more emphasis on utilization of passive safety systems are being developed. Passive systems are based on natural forces and phenomena such as natural convection and gravity, making safety functions less dependent on active systems and components like pumps and diesel generators. In some cases, further experimental tests for the thermohydraulic conditions of interest in advanced designs can provide improved understanding of the phenomena. Further, analytical methods to predict reactor thermohydraulic behaviour can be qualified for use by comparison with the experimental results. These activities should ultimately result in more economical designs. The

  2. Recent Advances in the Analysis of Macromolecular Interactions Using the Matrix-Free Method of Sedimentation in the Analytical Ultracentrifuge

    Directory of Open Access Journals (Sweden)

    Stephen E. Harding

    2015-03-01

    Sedimentation in the analytical ultracentrifuge is a matrix-free solution technique, with no immobilisation, columns, or membranes required, and can be used to study self-association and complex or “hetero”-interactions, stoichiometry, reversibility and interaction strength of a wide variety of macromolecular types and across a very large dynamic range (dissociation constants from 10⁻¹² M to 10⁻¹ M). We extend an earlier review specifically highlighting advances in sedimentation velocity and sedimentation equilibrium in the analytical ultracentrifuge applied to protein interactions and mucoadhesion, and review recent applications in protein self-association (tetanus toxoid, agrin), protein-like carbohydrate association (aminocelluloses), carbohydrate-protein interactions (polysaccharide-gliadin), nucleic acid-protein interactions (G-duplexes), nucleic acid-carbohydrate interactions (DNA-chitosan) and, finally, carbohydrate-carbohydrate interactions (xanthan-chitosan and a ternary polysaccharide complex).

  3. The Status and Promise of Advanced M&V: An Overview of “M&V 2.0” Methods, Tools, and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Franconi, Ellen [Rocky Mountain Inst., Boulder, CO (United States); Gee, Matt [Univ. of Chicago, IL (United States); Goldberg, Miriam [DNV GL, Oslo (Norway); Granderson, Jessica [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Guiterman, Tim [EnergySavvy, Seattle, WA (United States); Li, Michael [U.S. Department of Energy (DOE), Baltimore, MD (United States); Smith, Brian Arthur [Pacific Gas and Electric, San Francisco, CA (United States)

    2017-04-11

    Advanced measurement and verification (M&V) of energy efficiency savings, often referred to as M&V 2.0 or advanced M&V, is currently an object of much industry attention. Thus far, however, there has been a lack of clarity about which techniques M&V 2.0 includes, how those techniques differ from traditional approaches, what the key considerations are for their use, and what value propositions M&V 2.0 presents to different stakeholders. The objective of this paper is to provide background information and frame key discussion points related to advanced M&V. The paper identifies the benefits, methods, and requirements of advanced M&V and outlines key technical issues for applying these methods. It presents an overview of the distinguishing elements of M&V 2.0 tools and of how the industry is addressing needs for tool testing, consistency, and standardization, and it identifies opportunities for collaboration. In this paper, we consider two key features of M&V 2.0: (1) automated analytics that can provide ongoing, near-real-time savings estimates, and (2) increased data granularity in terms of frequency, volume, or end-use detail. Greater data granularity for large numbers of customers, such as that derived from comprehensive implementation of advanced metering infrastructure (AMI) systems, leads to very large data volumes, which drives interest in automated processing systems. It is worth noting, however, that automated processing can provide value even when applied to less granular data, such as monthly consumption data series. Likewise, more granular data, such as interval or end-use data, deliver value with or without automated processing, provided the processing is manageable. But it is the combination of greater data detail with automated processing that offers the greatest opportunity for value. M&V methods that capture load shapes, combined with automated processing, can determine savings in near-real time to provide stakeholders with more timely and
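
The pairing of interval data with automated baseline analytics described above can be illustrated with a minimal sketch. The temperature-driven linear baseline, the variable names, and all data below are assumptions for illustration only, not the paper's method:

```python
import numpy as np

# Hypothetical "M&V 2.0"-style automated savings estimate: fit a simple
# outdoor-temperature baseline model on pre-retrofit interval data, project
# it onto the post-retrofit period, and take the avoided energy.
rng = np.random.default_rng(0)

# Pre-retrofit period: hourly temperature (degF) and metered load (kWh)
temp_pre = rng.uniform(40, 90, 500)
load_pre = 10 + 0.3 * temp_pre + rng.normal(0, 0.5, 500)

# Baseline model: ordinary least squares on [1, temperature]
X = np.column_stack([np.ones_like(temp_pre), temp_pre])
coef, *_ = np.linalg.lstsq(X, load_pre, rcond=None)

# Post-retrofit period: same weather pattern, load reduced by a measure
temp_post = rng.uniform(40, 90, 500)
load_post = 8 + 0.3 * temp_post + rng.normal(0, 0.5, 500)

baseline_post = coef[0] + coef[1] * temp_post
avoided_kwh = np.sum(baseline_post - load_post)  # estimated savings
print(f"estimated savings: {avoided_kwh:.0f} kWh")
```

In a production M&V 2.0 pipeline this fit-and-project step would run automatically as new meter reads arrive, which is what enables the near-real-time savings estimates the paper discusses.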

  4. Methods for Controlling Surface Forces to Increase the Reliability of Advance Ceramics and their Composites Via Colloidal Processing

    National Research Council Canada - National Science Library

    Lange, F. F

    1995-01-01

    ...-range repulsive potential produced by salt additions to dispersed, aqueous slurries. Tech. Rept. #13 shows that the bodies reformed via the VibraForming Method previously developed under this program...

  5. Evaluation of core physics analysis methods for conversion of the INL advanced test reactor to low-enrichment fuel

    International Nuclear Information System (INIS)

    DeHart, M. D.; Chang, G. S.

    2012-01-01

    Computational neutronics studies to support the possible conversion of the ATR to LEU are underway. Simultaneously, INL is engaged in a physics methods upgrade project to put into place modern computational neutronics tools for future support of ATR fuel cycle and experiment analysis. A number of experimental measurements have been performed in the ATRC in support of the methods upgrade project, and are being used to validate the new core physics methods. The current computational neutronics work is focused on performance of scoping calculations for the ATR core loaded with a candidate LEU fuel design. This will serve as independent confirmation of analyses that have been performed previously, and will evaluate some of the new computational methods for analysis of a candidate LEU fuel for ATR. (authors)

  6. WE-EF-303-04: An Advanced Image Processing Method to Improve the Spatial Resolution of Proton Radiographies

    International Nuclear Information System (INIS)

    Rinaldi, I; Parodi, K; Krah, N

    2015-01-01

    Purpose: We present an optimization method to improve the spatial resolution and the water equivalent thickness accuracy of proton radiographies. Methods: The method is designed for imaging systems measuring only the residual range of protons, without relying on tracker detectors to determine the beam trajectory before and after the target. Specifically, the method was used for an imaging set-up consisting of a stack of 61 parallel-plate ionization chambers (PPIC) working as a range telescope. The method uses a decomposition approach of the residual range signal measured by the PPIC and constructs subimages with small-size pixels, geometrically rearranged and appropriately averaged, which are merged into a final single radiography. The method was tested using Monte Carlo simulated and experimental proton radiographies of a PMMA step phantom and an anthropomorphic head phantom. Results: For the step phantom, the effective spatial resolution was found to be 4 and 3 times higher than the nominal resolution for the simulated and experimental radiographies, respectively. For the head phantom, a gamma index was calculated to quantify the conformity of the simulated proton radiographies with a digitally reconstructed X-ray radiography convolved with a Gaussian kernel equal to the proton beam spot size. For DTA=2.5 mm and RD=2.5%, the passing ratio was 100%/85% for the optimized/non-optimized case, respectively. An extension of the method allows reducing the dose given to the patient during radiography acquisition. We show that despite a 25-fold dose reduction (leading to a dose of 0.016 mGy for the current imaging set-up), the image quality of the optimized radiographies remains fairly unaffected for both the simulated and experimental results. Conclusion: The optimization method leads to a significant increase of the spatial resolution, allowing recovery of image details that are unresolved in non-optimized radiographies. These results represent a major step towards clinical
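
The gamma-index conformity test quoted above (distance-to-agreement plus relative-difference criteria) can be sketched in one dimension. The profiles and the global normalization below are invented; only the gamma formula itself is the standard one:

```python
import numpy as np

# Minimal 1D gamma-index comparison between a reference profile and an
# evaluated profile, with DTA and RD criteria mirroring the 2.5 mm / 2.5%
# values quoted in the abstract. All profile data are synthetic.

def gamma_index_1d(x, ref, evalv, dta=2.5, rd=0.025):
    """Gamma at each evaluation point: min over reference points of
    sqrt((distance/DTA)^2 + (difference/(RD*max))^2); pass if gamma <= 1."""
    norm = rd * ref.max()  # global normalization
    gammas = np.empty(len(evalv), dtype=float)
    for i, (xi, di) in enumerate(zip(x, evalv)):
        dist2 = ((x - xi) / dta) ** 2
        diff2 = ((ref - di) / norm) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + diff2))
    return gammas

x = np.linspace(0, 50, 101)                   # positions in mm
reference = np.exp(-((x - 25) / 10) ** 2)     # reference profile
evaluated = np.exp(-((x - 25.5) / 10) ** 2)   # slightly shifted profile

g = gamma_index_1d(x, reference, evaluated)
passing_ratio = np.mean(g <= 1.0)
print(f"gamma passing ratio: {passing_ratio:.1%}")
```

A 0.5 mm shift is well inside the 2.5 mm DTA tolerance, so the passing ratio here is 100%; shifting the evaluated profile further would start to fail points, which is how the 100% vs. 85% comparison in the abstract discriminates the two reconstructions.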

  7. Broadband Studies of Seismic Sources at Regional and Teleseismic Distances Using Advanced Time Series Analysis Methods. Volume 1.

    Science.gov (United States)

    1991-03-21

    discussion of spectral factorability and motivations for broadband analysis, the report is subdivided into four main sections. In Section 1.0, we...estimates. The motivation for developing our multi-channel deconvolution method was to gain information about seismic sources, most notably, nuclear...with complex constraints for estimating the rupture history. Such methods (applied mostly to data sets that also include strong motion data), were

  8. Neutronics methods, models, and applications at the Idaho National Engineering Laboratory for the advanced neutron source reactor three-element core design

    International Nuclear Information System (INIS)

    Wemple, C.A.; Schnitzler, B.G.; Ryskamp, J.M.

    1995-08-01

    A summary of the methods and models used to perform neutronics analyses on the Advanced Neutron Source reactor three-element core design is presented. The applications of the neutral-particle Monte Carlo code MCNP are detailed, as well as the expansion of MCNP's traditionally static role to fuel cycle depletion calculations. Results to date of these applications are also presented. A summary of the calculations not yet performed is also given, to provide a "to-do" list if the project is resurrected

  9. Advancing Dose-Response Assessment Methods for Environmental Regulatory Impact Analysis: A Bayesian Belief Network Approach Applied to Inorganic Arsenic.

    Science.gov (United States)

    Zabinski, Joseph W; Garcia-Vargas, Gonzalo; Rubio-Andrade, Marisela; Fry, Rebecca C; Gibson, Jacqueline MacDonald

    2016-05-10

    Dose-response functions used in regulatory risk assessment are based on studies of whole organisms and fail to incorporate genetic and metabolomic data. Bayesian belief networks (BBNs) could provide a powerful framework for incorporating such data, but no prior research has examined this possibility. To address this gap, we develop a BBN-based model predicting birthweight at gestational age from arsenic exposure via drinking water and maternal metabolic indicators using a cohort of 200 pregnant women from an arsenic-endemic region of Mexico. We compare BBN predictions to those of prevailing slope-factor and reference-dose approaches. The BBN outperforms prevailing approaches in balancing false-positive and false-negative rates. Whereas the slope-factor approach had 2% sensitivity and 99% specificity and the reference-dose approach had 100% sensitivity and 0% specificity, the BBN's sensitivity and specificity were 71% and 30%, respectively. BBNs offer a promising opportunity to advance health risk assessment by incorporating modern genetic and metabolomic data.
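
The sensitivity/specificity comparison used above to rank the three approaches reduces to simple confusion-matrix bookkeeping. The counts below are hypothetical; only the formulas are standard:

```python
# Sensitivity = TP/(TP+FN): fraction of true adverse outcomes flagged.
# Specificity = TN/(TN+FP): fraction of healthy outcomes correctly cleared.
# Confusion counts here are invented for illustration, not the study's data.

def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a model flagging low-birthweight outcomes
sens, spec = sensitivity_specificity(tp=25, fn=10, tn=45, fp=120)
print(f"sensitivity={sens:.0%} specificity={spec:.0%}")
```

The abstract's point is the trade-off: a reference-dose rule that flags everyone scores 100% sensitivity but 0% specificity, while a slope-factor rule that flags almost no one does the reverse; a calibrated model can sit usefully between the two extremes.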

  10. Large Scale Screening of Low Cost Ferritic Steel Designs For Advanced Ultra Supercritical Boiler Using First Principles Methods

    Energy Technology Data Exchange (ETDEWEB)

    Ouyang, Lizhi [Tennessee State Univ. Nashville, TN (United States)

    2016-11-29

    Advanced Ultra Supercritical Boiler (AUSC) technology requires materials that can operate in a corrosive environment at temperatures and pressures as high as 760°C (1400°F) and 5000 psi, respectively, while maintaining good ductility at low temperature. We developed automated simulation software tools to enable fast, large-scale screening studies of candidate designs. While direct evaluation of creep rupture strength and ductility is currently not feasible, properties such as energy, elastic constants, surface energy, interface energy, and stacking-fault energy can be used to assess relative ductility and creep strength. We implemented software to automate the complex calculations and minimize human input in the tedious screening studies, which involve model structure generation, settings for first-principles calculations, result analysis, and reporting. The software developed in the project, together with the library of computed mechanical properties of phases found in ferritic steels (many of them complex solid solutions estimated for the first time), will certainly help the development of low-cost ferritic steels for AUSC.

  11. Advances in the discontinuous Galerkin method: Hybrid schemes and applications to the reactive infiltration instability in an upwelling compacting mantle

    Science.gov (United States)

    Schiemenz, Alan R.

    High-order methods are emerging in the scientific computing community as superior alternatives to the classical finite difference, finite volume, and continuous finite element methods. The discontinuous Galerkin (DG) method in particular combines many of the positive features of all of these methods. This thesis presents two projects involving the DG method. First, a Hybrid scheme is presented, which implements DG in areas where the solution is considered smooth, while dropping the order of the scheme elsewhere and implementing a finite volume scheme with high-order, non-oscillatory solution reconstructions suitable for unstructured meshes. Two such reconstructions from the ENO class are considered in the Hybrid. Successful numerical results are presented for nonlinear systems of conservation laws in one dimension. Second, the high-order discontinuous Galerkin and Fourier spectral methods are applied to an application modeling three-phase fluid flow through a porous medium, undergoing solid-fluid reaction due to the reactive infiltration instability (RII). This model incorporates a solid upwelling term and an equation to track the abundance of the reacting mineral orthopyroxene (opx). After validating the numerical discretization, results are given that provide new insight into the formation of melt channels in the Earth's mantle. Mantle heterogeneities are observed to be one catalyst for the development of melt channels, and the dissolution of opx produces interesting bifurcations in the melt channels. An alternative formulation is considered where the mass transfer rate relative to velocity is taken to be infinitely large. In this setting, the stiffest terms are removed, greatly reducing the cost of time integration.
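
The low-order building block that such a DG/FV hybrid falls back to in non-smooth regions can be sketched as a first-order finite-volume upwind scheme for 1D linear advection (the ENO reconstructions of the thesis raise this scheme's order; they are omitted here). The problem setup below is illustrative, not from the thesis:

```python
import numpy as np

# First-order finite-volume upwind update for u_t + a u_x = 0 on a
# periodic 1D grid: u_i^{n+1} = u_i - dt/dx (F_{i+1/2} - F_{i-1/2}),
# with the upwind flux F_{i+1/2} = a u_i for a > 0.
nx, L, a = 200, 1.0, 1.0          # cells, domain length, advection speed
dx = L / nx
dt = 0.5 * dx / a                 # CFL number 0.5
x = (np.arange(nx) + 0.5) * dx    # cell centers

u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)  # square pulse
u0_mass = u.sum() * dx

t, T = 0.0, 0.25
while t < T - 1e-12:
    flux_left = a * np.roll(u, 1)          # F_{i-1/2} (periodic wrap)
    u = u - dt / dx * (a * u - flux_left)  # conservative update
    t += dt

# The conservative form preserves total mass exactly (up to roundoff),
# and upwinding keeps the solution monotone (no new extrema).
mass_drift = abs(u.sum() * dx - u0_mass)
print(f"mass drift: {mass_drift:.2e}, max(u) = {u.max():.3f}")
```

The scheme is non-oscillatory but diffusive, which is precisely why the hybrid reserves it for discontinuities and uses DG or ENO reconstructions where the solution is smooth.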

  12. An Experience of Forest Inventory by Photo Interpretation Method Based on Advanced Firmware and Digital Aerial Photographs of New Generation

    Directory of Open Access Journals (Sweden)

    V. I. Arkhipov

    2014-10-01

    The main stages of the developed technology for forest inventory by the interpretation method, named «From survey – to project», using modern aerial survey data and special software and hardware, are discussed in the paper. The need to develop a high-end forest inventory technology stems from the increasing demands of the state, business, and civil society for current and correct information about forests. The tasks of the research were: integrating software and hardware into a single technology, testing it on a real object, and developing recommendations for its introduction into production and for forming a system of training specialists in forest interpretation. Positive results of experimental work on measurement-based and analytical forest interpretation in stereo mode, based on photogrammetric software, were obtained by specialists from Russia, Croatia, Belarus, and Sweden. The technology «From survey – to project» uses the following instruments: the photogrammetric complex Vision Map A3, the digital photogrammetric system Photomod, the program «ESAUL», GIS ArcGIS, and special hardware for stereo visualization. Results of testing this technology are shown for a model territory. Comparison of forest inventory results obtained by the interpretation method with control inventory results obtained by the enumeration method demonstrated that the errors in determining the main forest inventory characteristics do not exceed the norms. The advantages of practical use of the technology are shown. It is noted that forest inventory by the interpretation method is a complex psychophysiological process requiring highly qualified specialists with special training. The need for a system of training forest inventory specialists in the interpretation method is indicated, and the curricula and training manuals designed and prepared for the interpretation method in forestry are listed.

  13. AgriSense-STARS: Advancing Methods of Agricultural Monitoring for Food Security in Smallholder Regions - the Case for Tanzania

    Science.gov (United States)

    Dempewolf, J.; Becker-Reshef, I.; Nakalembe, C. L.; Tumbo, S.; Maurice, S.; Mbilinyi, B.; Ntikha, O.; Hansen, M.; Justice, C. J.; Adusei, B.; Kongo, V.

    2015-12-01

    In-season monitoring of crop conditions provides critical information for agricultural policy and decision making, and most importantly for food security planning and management. Nationwide agricultural monitoring in countries dominated by smallholder farming systems generally relies on extensive networks of field data collectors. In Tanzania, extension agents make up this network and report on conditions across the country, approaching a "near-census". Data are collected on paper, which is resource- and time-intensive as well as prone to errors. Data quality is ambiguous, and there is a general lack of clear and functional feedback loops between farmers, extension agents, analysts, and decision makers. Moreover, the data are not spatially explicit, limiting their usefulness for analysis and the quality of policy outcomes. Despite significant advances in remote sensing and information and communication technologies (ICT) for monitoring agriculture, the full potential of these new tools is yet to be realized in Tanzania. Their use is constrained by the lack of resources, skills, and infrastructure to access and process these data. The use of ICT for data collection, processing, and analysis is equally limited. The AgriSense-STARS project is developing and testing a system for national-scale in-season monitoring of smallholder agriculture using a combination of three main tools: 1) GLAM-East Africa, an automated MODIS satellite image processing system; 2) field data collection using GeoODK and unmanned aerial vehicles (UAVs); and 3) the Tanzania Crop Monitor, a collaborative online portal for data management and reporting. These tools are developed and applied in Tanzania through the National Food Security Division of the Ministry of Agriculture, Food Security and Cooperatives (MAFC) within a statistically representative sampling framework (area frame) that ensures data quality, representativeness, and resource efficiency.

  14. Validation of multigroup neutron cross sections and calculational methods for the advanced neutron source against the FOEHN critical experiments measurements

    International Nuclear Information System (INIS)

    Smith, L.A.; Gallmeier, F.X.; Gehin, J.C.

    1995-05-01

    The FOEHN critical experiment was analyzed to validate the use of multigroup cross sections and Oak Ridge National Laboratory neutronics computer codes in the design of the Advanced Neutron Source. The ANSL-V 99-group master cross section library was used for all the calculations. Three different critical configurations were evaluated using the multigroup KENO Monte Carlo transport code, the multigroup DORT discrete ordinates transport code, and the multigroup diffusion theory code VENTURE. The simple configuration consists of only the fuel and control elements with the heavy water reflector. The intermediate configuration includes boron endplates at the upper and lower edges of the fuel element. The complex configuration includes both the boron endplates and components in the reflector. Cross sections were processed using modules from the AMPX system. Both 99-group and 20-group cross sections were created and used in two-dimensional models of the FOEHN experiment. KENO calculations were performed using both 99-group and 20-group cross sections. The DORT and VENTURE calculations were performed using 20-group cross sections. Because the simple and intermediate configurations are azimuthally symmetric, these configurations can be explicitly modeled in R-Z geometry. Since the reflector components cannot be modeled explicitly using the current versions of these codes, three reflector component homogenization schemes were developed and evaluated for the complex configuration. Power density distributions were calculated with KENO using 99-group cross sections and with DORT and VENTURE using 20-group cross sections. The average differences between the measured values and the values calculated with the different computer codes range from 2.45 to 5.74%. The maximum differences between the measured and calculated thermal flux values for the simple and intermediate configurations are ∼ 13%, while the average differences are < 8%

  15. A simple method for the production of large volume 3D macroporous hydrogels for advanced biotechnological, medical and environmental applications

    Science.gov (United States)

    Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.

    2016-02-01

    The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of small size and volume cryogel material. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications.

  16. Qualitative Research Methods to Advance Research on Health Inequities among Previously Incarcerated Women Living with HIV in Alabama

    Science.gov (United States)

    Sprague, Courtenay; Scanlon, Michael L.; Pantalone, David W.

    2017-01-01

    Justice-involved HIV-positive women have poor health outcomes that constitute health inequities. Researchers have yet to embrace the range of qualitative methods to elucidate how psychosocial histories are connected to pathways of vulnerability to HIV and incarceration for this key population. We used life course narratives and…

  17. Advanced analytical method of nereistoxin using mixed-mode cationic exchange solid-phase extraction and GC/MS.

    Science.gov (United States)

    Park, Yujin; Choe, Sanggil; Lee, Heesang; Jo, Jiyeong; Park, Yonghoon; Kim, Eunmi; Pyo, Jaesung; Jung, Jee H

    2015-07-01

    Nereistoxin (NTX) originates from the marine annelid worm Lumbriconereis heteropoda, and its analogue pesticides, including cartap, bensultap, thiocyclam, and thiobensultap, have been commonly used in agriculture because of their low toxicity to crops and high insecticidal activity. However, NTX has been reported to be neurotoxic in humans and animals: by blocking the nicotinic acetylcholine receptor it causes significant neuromuscular toxicity, which can result in respiratory failure. We developed a new method to determine NTX in biological fluids. The method involves mixed-mode cationic exchange solid-phase extraction followed by gas chromatography/mass spectrometry for final identification and quantitative analysis. The limit of detection and the recovery were substantially better than those of other methods using liquid-liquid extraction or headspace solid-phase microextraction. Good recoveries (97±14%) in blood samples were obtained, and calibration curves over the range 0.05-20 mg/L had R² values greater than 0.99. The developed method was applied to a fatal case of cartap intoxication of a 74-year-old woman who ingested cartap hydrochloride in a suicide. Cartap and NTX were detected in postmortem specimens, and the cause of death was ruled to be nereistoxin intoxication. The concentrations of NTX were 2.58 mg/L, 3.36 mg/L, and 1479.7 mg/L in heart blood, femoral blood, and stomach liquid content, respectively. The heart blood/femoral blood ratio of NTX was 0.76.
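
The calibration-curve criterion quoted above (R² > 0.99 over 0.05-20 mg/L) amounts to an ordinary least-squares fit of detector response against standard concentration. The detector responses below are synthetic; only the concentration range follows the abstract:

```python
import numpy as np

# Linear calibration: fit peak-area response vs. concentration for a set
# of standards, check R^2, then quantify an unknown from its area.
# Response values are invented; the 0.05-20 mg/L range is the abstract's.
conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 20.0])     # mg/L standards
area = 1000 * conc + np.array([2, -3, 5, -4, 10, -8, 6])    # synthetic areas

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.4f}")

# Back-calculate an unknown sample's concentration from its measured area
unknown_area = 2580.0
est = (unknown_area - intercept) / slope
print(f"estimated concentration: {est:.2f} mg/L")
```

A fit failing the R² > 0.99 check would signal detector nonlinearity or bad standards before any case samples are quantified.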

  18. Recent advances in the spectral green's function method for monoenergetic slab-geometry fixed-source adjoint transport problems in S{sub N} formulation

    Energy Technology Data Exchange (ETDEWEB)

    Curbelo, Jesus P.; Alves Filho, Hermes; Barros, Ricardo C., E-mail: jperez@iprj.uerj.br, E-mail: halves@iprj.uerj.br, E-mail: rcbarros@pq.cnpq.br [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Instituto Politecnico. Programa de Pos-Graduacao em Modelagem Computacional; Hernandez, Carlos R.G., E-mail: cgh@instec.cu [Instituto Superior de Tecnologias y Ciencias Aplicadas (InSTEC), La Habana (Cuba)

    2015-07-01

    The spectral Green's function (SGF) method is a numerical method that is free of spatial truncation errors for slab-geometry fixed-source discrete ordinates (S{sub N}) adjoint problems. The method is based on the standard spatially discretized adjoint S{sub N} balance equations and a nonstandard adjoint auxiliary equation expressing the node-average adjoint angular flux, in each discretization node, as a weighted combination of the node-edge outgoing adjoint fluxes. The auxiliary equation contains parameters which act as Green's functions for the cell-average adjoint angular flux. These parameters are determined by means of a spectral analysis which yields the local general solution of the S{sub N} equations within each node of the discretization grid. In this work a number of advances in the SGF adjoint
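
The sweep-plus-auxiliary-equation structure that the SGF parameters plug into can be sketched with the standard diamond-difference auxiliary relation instead (the SGF method replaces the 1/2-1/2 edge weighting below with spectrally derived weights that eliminate spatial truncation error, and the work above treats the adjoint equations; this is a forward, non-adjoint sketch). All data, cross sections, quadrature, and geometry are invented for illustration:

```python
import numpy as np

# 1D slab, one-group, fixed-source S_2 transport by source iteration.
# Balance per cell: mu (psi_out - psi_in)/dx + sigma_t psi_avg = src,
# closed with the diamond-difference auxiliary equation
#   psi_avg = (psi_in + psi_out) / 2.
nx, L = 50, 10.0
dx = L / nx
sigma_t, sigma_s, q = 1.0, 0.5, 1.0            # macroscopic data, source
mus = np.array([-1 / np.sqrt(3), 1 / np.sqrt(3)])  # S2 directions
wts = np.array([1.0, 1.0])                      # quadrature weights (sum 2)

phi = np.zeros(nx)                              # scalar flux
for _ in range(200):                            # source iteration
    phi_new = np.zeros(nx)
    src = 0.5 * (sigma_s * phi + q)             # isotropic emission density
    for mu, wt in zip(mus, wts):
        psi_in = 0.0                            # vacuum boundaries
        cells = range(nx) if mu > 0 else range(nx - 1, -1, -1)
        for i in cells:
            a = abs(mu) / dx
            # solve balance + auxiliary equation for the outgoing edge flux
            psi_out = ((a - sigma_t / 2) * psi_in + src[i]) / (a + sigma_t / 2)
            psi_avg = 0.5 * (psi_in + psi_out)  # the auxiliary relation
            phi_new[i] += wt * psi_avg
            psi_in = psi_out
    if np.max(np.abs(phi_new - phi)) < 1e-8:
        phi = phi_new
        break
    phi = phi_new

# Deep inside this 10-mfp slab the flux approaches the infinite-medium
# value q / (sigma_t - sigma_s) = 2.
print(f"midplane scalar flux: {phi[nx // 2]:.4f}")
```

In the SGF method the averaging weights are not the fixed 1/2-1/2 of diamond difference but are computed per node from the local analytic (spectral) solution of the S_N equations, which is what makes the scheme free of spatial truncation error on the node-edge and node-average fluxes.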