WorldWideScience

Sample records for methods including applied

  1. Environmental externalities: Applying the concept to Asian coal-based power generation. [Includes external environmental and societal costs and methods of evaluating them]

    Energy Technology Data Exchange (ETDEWEB)

    Szpunar, C.B.; Gillette, J.L.

    1993-03-01

    This report examines the concept of environmental externality. It discusses the factors that a number of state utilities consider when applying the environmental externality concept to energy resource planning: atmospheric transformations, the relationship of point-source emissions to ambient air quality, dose-response relationships, applicable cause-and-effect principles, and risk and valuation research. It describes a methodology developed by Argonne National Laboratory for general use in resource planning, in combination with traditional methods that consider the cost of electricity production. Finally, it shows how the methodology can be applied in Indonesia, Thailand, and Taiwan to potential coal-fired power plant projects that will make use of clean coal technologies.

  2. Evaluation on ultrasonic examination methods applied to Ni-base alloy weld including cracks due to stress corrosion cracking found in BWR reactor internal

    International Nuclear Information System (INIS)

    Aoki, Takayuki; Kobayashi, Hiroyuki; Higuchi, Shinichi; Shimizu, Sadato

    2005-01-01

    A Ni-base alloy weld containing cracks due to stress corrosion cracking, found in 1999 in the reactor internals of Tsuruga unit 1, the oldest BWR in Japan, was examined by three types of UT method. After this examination, the depth of each crack was confirmed by alternately grinding away a small amount of material and performing PT examination until the crack disappeared. The depth measured by the UT methods was then compared with the depth obtained by this excavation, and the performance of the UT methods was thereby verified. As a result, a combination of the three types of UT method was found to meet the acceptance criteria given by ASME Sec. XI Appendix VIII, Performance Demonstration for Ultrasonic Examination Systems, Supplement 6. In this paper, the results of the UT examination described above and their evaluation are discussed. (author)

  3. Applied Bayesian hierarchical methods

    National Research Council Canada - National Science Library

    Congdon, P

    2010-01-01

    ... 1.2 Posterior Inference from Bayes Formula. 1.3 Markov Chain Monte Carlo Sampling in Relation to Monte Carlo Methods: Obtaining Posterior...

  4. Methods of applied mathematics

    CERN Document Server

    Hildebrand, Francis B

    1992-01-01

    This invaluable book offers engineers and physicists working knowledge of a number of mathematical facts and techniques not commonly treated in courses in advanced calculus, but nevertheless extremely useful when applied to typical problems in many different fields. It deals principally with linear algebraic equations, quadratic and Hermitian forms, operations with vectors and matrices, the calculus of variations, and the formulations and theory of linear integral equations. Annotated problems and exercises accompany each chapter.

  5. Applied Bayesian hierarchical methods

    National Research Council Canada - National Science Library

    Congdon, P

    2010-01-01

    .... It also incorporates BayesX code, which is particularly useful in nonlinear regression. To demonstrate MCMC sampling from first principles, the author includes worked examples using the R package...
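
    The abstract above mentions demonstrating MCMC sampling from first principles. As a rough illustration of what such a first-principles demonstration involves (in Python rather than the book's R/BayesX examples; the data and posterior below are invented for this sketch), a minimal random-walk Metropolis sampler might look like this:

```python
import math
import random

def metropolis(log_post, x0, n_samples=5000, step=1.0, seed=42):
    """Minimal random-walk Metropolis sampler (illustrative sketch only)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_post(x_new)
        # Accept with probability min(1, exp(lp_new - lp))
        if math.log(1.0 - rng.random()) < lp_new - lp:
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Invented example data: posterior for a Normal(mu, 1) mean under a flat prior
data = [1.8, 2.1, 2.3, 1.9]
log_post = lambda mu: -0.5 * sum((d - mu) ** 2 for d in data)
samples = metropolis(log_post, x0=0.0)
```

    The acceptance test compares the log-posterior of the proposal with that of the current state; with a flat prior, the chain should settle near the sample mean of the data.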

  6. Applied Formal Methods for Elections

    DEFF Research Database (Denmark)

    Wang, Jian

    development time, or second dynamically, i.e. monitoring while an implementation is used during an election, or after the election is over, for forensic analysis. This thesis contains two chapters on this subject: the chapter Analyzing Implementations of Election Technologies describes a technique...... process. The chapter Measuring Voter Lines describes an automated data collection method for measuring voters' waiting time, and discusses statistical models designed to provide an understanding of the voter behavior in polling stations....

  7. Applied Formal Methods for Elections

    DEFF Research Database (Denmark)

    Wang, Jian

    Information technology is changing the way elections are organized. Technology renders the electoral process more efficient, but things can also go wrong: voting software is complex, consisting of thousands of lines of code, which makes it error-prone. Technical problems may cause delays...... bounded model-checking and satisfiability modulo theories (SMT) solvers can be used to check these criteria. Voter Experience: Technology profoundly affects the voter experience. These effects need to be measured and the data should be used to make decisions regarding the implementation of the electoral...... at polling stations, or even delay the announcement of the final result. This thesis describes a set of methods to be used, for example, by system developers, administrators, or decision makers to examine election technologies, social choice algorithms and voter experience. Technology: Verifiability refers...

  8. [Montessori method applied to dementia - literature review].

    Science.gov (United States)

    Brandão, Daniela Filipa Soares; Martín, José Ignacio

    2012-06-01

    The Montessori method was initially applied to children, but it has since also been applied to people with dementia. The purpose of this study is to systematically review the research on the effectiveness of this method, using the Medical Literature Analysis and Retrieval System Online (Medline) with the keywords dementia and Montessori method. We selected 10 studies, in which there were significant improvements in participation and constructive engagement, and reductions in negative affect and passive engagement. Nevertheless, systematic reviews of this non-pharmacological intervention in dementia rate the method as weak in terms of effectiveness. This apparent discrepancy can be explained either because the Montessori method may in fact have only a small influence on dimensions such as behavioral problems, or because there is no research on this method with high levels of control, such as the presence of several control groups or a double-blind design.

  9. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...
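
    The abstract refers to geostatistical analysis of field-model residuals. The basic tool of such an analysis is the empirical semivariogram, which measures how strongly residuals at two locations differ as a function of their separation. A minimal one-dimensional sketch (not the authors' actual code, and ignoring the spherical geometry of satellite data) could be:

```python
import numpy as np

def empirical_semivariogram(x, z, lags, tol):
    """Empirical semivariogram gamma(h): half the mean squared difference of
    residuals z over all point pairs whose separation is within tol of lag h."""
    d = np.abs(x[:, None] - x[None, :])          # pairwise separations
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2    # half squared differences
    gamma = []
    for h in lags:
        mask = (d > h - tol) & (d <= h + tol)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)
```

    For a perfectly correlated linear trend (z proportional to position), gamma grows quadratically with the lag; for uncorrelated Gaussian residuals it is flat at the residual variance, which is one way such an analysis distinguishes unmodelled signal from measurement noise.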

  10. Applied Mathematical Methods in Theoretical Physics

    Science.gov (United States)

    Masujima, Michio

    2005-04-01

    All there is to know about functional analysis, integral equations and calculus of variations in a single volume. This advanced textbook is divided into two parts: The first on integral equations and the second on the calculus of variations. It begins with a short introduction to functional analysis, including a short review of complex analysis, before continuing with a systematic discussion of different types of equations, such as Volterra integral equations, singular integral equations of Cauchy type, integral equations of the Fredholm type, with a special emphasis on Wiener-Hopf integral equations and Wiener-Hopf sum equations. After a few remarks on the historical development, the second part starts with an introduction to the calculus of variations and the relationship between integral equations and applications of the calculus of variations. It further covers applications of the calculus of variations developed in the second half of the 20th century in the fields of quantum mechanics, quantum statistical mechanics and quantum field theory. Throughout the book, the author presents over 150 problems and exercises -- many from such branches of physics as quantum mechanics, quantum statistical mechanics, and quantum field theory -- together with outlines of the solutions in each case. Detailed solutions are given, supplementing the materials discussed in the main text, allowing problems to be solved making direct use of the method illustrated. The original references are given for difficult problems. The result is complete coverage of the mathematical tools and techniques used by physicists and applied mathematicians. Intended for senior undergraduates and first-year graduates in science and engineering, this is equally useful as a reference and self-study guide.

  11. Applied mathematical methods in nuclear thermal hydraulics

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1983-01-01

    Applied mathematical methods are used extensively in modeling of nuclear reactor thermal-hydraulic behavior. This application has required significant extension to the state-of-the-art. The problems encountered in modeling of two-phase fluid transients and the development of associated numerical solution methods are reviewed and quantified using results from a numerical study of an analogous linear system of differential equations. In particular, some possible approaches for formulating a well-posed numerical problem for an ill-posed differential model are investigated and discussed. The need for closer attention to numerical fidelity is indicated.

  12. Entropy viscosity method applied to Euler equations

    International Nuclear Information System (INIS)

    Delchini, M. O.; Ragusa, J. C.; Berry, R. A.

    2013-01-01

    The entropy viscosity method [4] has been successfully applied to hyperbolic systems of equations such as the Burgers equation and the Euler equations. The method consists in adding dissipative terms to the governing equations, where a viscosity coefficient modulates the amount of dissipation. The entropy viscosity method has been applied to the 1-D Euler equations with variable area using a continuous finite element discretization in the MOOSE framework, and our results show that it can efficiently smooth out oscillations and accurately resolve shocks. Two equations of state are considered: the ideal gas and stiffened gas equations of state. Results are provided for a second-order time-implicit scheme (BDF2). Some typical Riemann problems are run with the entropy viscosity method to demonstrate some of its features. Then, a 1-D convergent-divergent nozzle is considered with open boundary conditions. The correct steady state is reached for the liquid and gas phases with a time-implicit scheme. The entropy viscosity method behaves correctly in every problem run. For each test problem, results are shown for both equations of state considered here. (authors)
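
    The abstract describes adding dissipative terms whose magnitude is modulated by a viscosity coefficient. A common form of that coefficient, sketched here for the 1-D Burgers equation with entropy eta = u^2/2 and entropy flux q = u^3/3 (the tuning constants c_e and c_max are illustrative, not the values used in the paper), is proportional to the local entropy residual and capped by a first-order viscosity:

```python
import numpy as np

def entropy_viscosity(u_old, u_new, dx, dt, c_e=1.0, c_max=0.5):
    """Entropy-viscosity coefficient for 1-D Burgers, with entropy
    eta = u^2/2 and entropy flux q = u^3/3 (illustrative constants)."""
    eta_old, eta_new = 0.5 * u_old**2, 0.5 * u_new**2
    q = u_new**3 / 3.0
    # Entropy residual R = d(eta)/dt + d(q)/dx (central difference inside)
    residual = (eta_new - eta_old) / dt
    residual[1:-1] += (q[2:] - q[:-2]) / (2.0 * dx)
    norm = np.max(np.abs(eta_new - eta_new.mean())) + 1e-14
    nu_entropy = c_e * dx**2 * np.abs(residual) / norm
    nu_first_order = c_max * dx * np.abs(u_new)  # first-order cap
    return np.minimum(nu_entropy, nu_first_order)
```

    Where the solution is smooth, the entropy residual is small and little dissipation is added; across a shock the residual is large and the coefficient saturates at the first-order bound.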

  13. Analytical methods applied to water pollution

    International Nuclear Information System (INIS)

    Baudin, G.

    1977-01-01

    A comparison of different methods applied to water analysis is given. The discussion is limited to the problems presented by inorganic elements accessible to nuclear activation analysis. The following methods were compared: activation analysis with gamma-ray spectrometry, atomic absorption spectrometry, fluorimetry, emission spectrometry, colorimetry or spectrophotometry, X-ray fluorescence, mass spectrometry, voltammetry, polarography or other electrochemical methods, and activation analysis with beta measurements. Drinking water, irrigation water, sea water, industrial wastes and very pure water are the subjects of the investigations. The comparative evaluation is made on the basis of storage of samples, in situ analysis, treatment and concentration, specificity and interference, monoelement or multielement analysis, analysis time and accuracy. The significance of the neutron activation analysis is shown. (T.G.)

  14. Quality assurance and applied statistics. Method 3

    International Nuclear Information System (INIS)

    1992-01-01

    This German Industry Standards paperback contains the international standards of the ISO 9000 series (or, as the case may be, the European standards of the EN 29000 series) on quality assurance, including the already completed supplementary guidelines with ISO 9000 and ISO 9004 section numbers, which have been adopted as German Industry Standards and are observed and applied worldwide to a great extent. It also includes the German Industry Standards ISO 10011 parts 1, 2 and 3, concerning the auditing of quality-assurance systems, and ISO 10012 part 1, concerning quality-assurance requirements (confirmation system) for measuring devices. English and French versions of the standards are also included. They are applicable independent of the user's line of industry and thus constitute basic standards. (orig.) [de]

  15. Catalyst support structure, catalyst including the structure, reactor including a catalyst, and methods of forming same

    Science.gov (United States)

    Van Norman, Staci A.; Aston, Victoria J.; Weimer, Alan W.

    2017-05-09

    Structures, catalysts, and reactors suitable for use for a variety of applications, including gas-to-liquid and coal-to-liquid processes and methods of forming the structures, catalysts, and reactors are disclosed. The catalyst material can be deposited onto an inner wall of a microtubular reactor and/or onto porous tungsten support structures using atomic layer deposition techniques.

  16. Microfluidic devices and methods including porous polymer monoliths

    Science.gov (United States)

    Hatch, Anson V; Sommer, Gregory J; Singh, Anup K; Wang, Ying-Chih; Abhyankar, Vinay V

    2014-04-22

    Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous polymer monolith may include surfaces terminated with iniferter species. Capture molecules may then be grafted to the monolith pores.

  17. Methods of producing adsorption media including a metal oxide

    Science.gov (United States)

    Mann, Nicholas R; Tranter, Troy J

    2014-03-04

    Methods of producing a metal oxide are disclosed. The method comprises dissolving a metal salt in a reaction solvent to form a metal salt/reaction solvent solution. The metal salt is converted to a metal oxide and a caustic solution is added to the metal oxide/reaction solvent solution to adjust the pH of the metal oxide/reaction solvent solution to less than approximately 7.0. The metal oxide is precipitated and recovered. A method of producing adsorption media including the metal oxide is also disclosed, as is a precursor of an active component including particles of a metal oxide.

  18. Uncertainty-driven nuclear data evaluation including thermal (n,α) applied to 59Ni

    Science.gov (United States)

    Helgesson, P.; Sjöstrand, H.; Rochman, D.

    2017-11-01

    This paper presents a novel approach to the evaluation of nuclear data (ND), combining experimental data for thermal cross sections with resonance parameters and nuclear reaction modeling. The method involves sampling of various uncertain parameters, in particular uncertain components in experimental setups, and provides extensive covariance information, including consistent cross-channel correlations over the whole energy spectrum. The method is developed for, and applied to, 59Ni, but may be used as a whole, or in part, for other nuclides. 59Ni is particularly interesting since a substantial amount of 59Ni is produced in thermal nuclear reactors by neutron capture in 58Ni and since it has a non-threshold (n,α) cross section. Therefore, 59Ni gives a very important contribution to the helium production in stainless steel in a thermal reactor. However, current evaluated ND libraries contain old information for 59Ni, without any uncertainty information. The work includes a study of thermal cross section experiments and a novel combination of this experimental information, giving the full multivariate distribution of the thermal cross sections. In particular, the thermal (n,α) cross section is found to be 12.7 ± 0.7 b. This is consistent with, yet different from, currently established values. Further, the distribution of thermal cross sections is combined with reported resonance parameters, and with TENDL-2015 data, to provide full random ENDF files; all of this is done in a novel way, keeping uncertainties and correlations in mind. The random files are also condensed into one single ENDF file with covariance information, which is now part of a beta version of JEFF 3.3. Finally, the random ENDF files have been processed and used in an MCNP model to study the helium production in stainless steel. The increase in the (n,α) rate due to 59Ni compared to fresh stainless steel is found to be a factor of 5.2 at a certain time in the reactor vessel, with a relative
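
    The evaluation described above hinges on sampling uncertain parameters and propagating them into a multivariate distribution of thermal cross sections. The mechanics of such sampling can be sketched as follows; note that the mean vector, uncertainties and correlation matrix below are invented placeholders, not the paper's results, except that 12.7 b is the (n,α) value quoted in the abstract:

```python
import numpy as np

# Placeholder multivariate distribution of thermal cross sections, in barns.
# Only the 12.7 b (n,alpha) value comes from the abstract; the other means,
# all uncertainties and the correlations are invented for illustration.
mean = np.array([80.0, 12.7, 2.0])           # (n,gamma), (n,alpha), (n,p)
std = np.array([4.0, 0.7, 0.2])
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
cov = corr * np.outer(std, std)              # covariance from corr and stddevs

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean, cov, size=10_000)
```

    In the paper's scheme, each such random draw would seed one random ENDF file, and the ensemble of files carries the covariance information forward into the MCNP calculations.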

  19. Electronic-projecting Moire method applying CBR-technology

    Science.gov (United States)

    Kuzyakov, O. N.; Lapteva, U. V.; Andreeva, M. A.

    2018-01-01

    An electronic-projecting method based on the Moire effect for examining surface topology is suggested. Conditions for forming Moire fringes and the dependence of the fringe parameters on the reference parameters of the object and virtual grids are analyzed. The control system structure and decision-making subsystem are elaborated. The subsystem is implemented using CBR (case-based reasoning) technology, based on applying a case base. The approach analyzes and forms a decision for each separate local area, with subsequent formation of a common topology map.

  20. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    ... implementations of the DLM are however not very versatile in terms of geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...

  1. Initiation devices, initiation systems including initiation devices and related methods

    Energy Technology Data Exchange (ETDEWEB)

    Daniels, Michael A.; Condit, Reston A.; Rasmussen, Nikki; Wallace, Ronald S.

    2018-04-10

    Initiation devices may include at least one substrate, an initiation element positioned on a first side of the at least one substrate, and a spark gap electrically coupled to the initiation element and positioned on a second side of the at least one substrate. Initiation devices may include a plurality of substrates where at least one substrate of the plurality of substrates is electrically connected to at least one adjacent substrate of the plurality of substrates with at least one via extending through the at least one substrate. Initiation systems may include such initiation devices. Methods of igniting energetic materials include passing a current through a spark gap formed on at least one substrate of the initiation device, passing the current through at least one via formed through the at least one substrate, and passing the current through an explosive bridge wire of the initiation device.

  2. Applying the Socratic Method to Physics Education

    Science.gov (United States)

    Corcoran, Ed

    2005-04-01

    We have restructured University Physics I and II in accordance with methods that PER has shown to be effective, including a more interactive discussion- and activity-based curriculum based on the premise that developing understanding requires an interactive process in which students have the opportunity to talk through and think through ideas with both other students and the teacher. Studies have shown that in classes implementing this approach to teaching, as compared to classes using a traditional approach, students have significantly higher gains on the Force Concept Inventory (FCI). This has been true in UPI. However, UPI FCI results seem to suggest that there is a significant conceptual hole in students' understanding of Newton's Second Law. Two labs in UPI which teach Newton's Second Law will be redesigned to have students, as a group, talk through, think through, and answer conceptual questions asked by the TA. The results will be measured by comparing FCI results to those from previous semesters, coupled with interviews. The results will be analyzed, and we will attempt to understand why gains were or were not made.

  3. Applying scrum methods to ITS projects.

    Science.gov (United States)

    2017-08-01

    The introduction of new technology generally brings new challenges and new methods to help with deployments. Agile methodologies have been introduced in the information technology industry to potentially speed up development. The Federal Highway Admi...

  4. Applying Fuzzy Possibilistic Methods on Critical Objects

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz

    2016-01-01

    Providing a flexible environment to process data objects is a desirable goal of machine learning algorithms. In fuzzy and possibilistic methods, the relevance of data objects is evaluated and a membership degree is assigned. However, some critical objects have the potential to affect...... the performance of the clustering algorithms if they remain in a specific cluster or are moved into another. In this paper we analyze and compare how critical objects affect the behaviour of fuzzy possibilistic methods in several data sets. The comparison is based on the accuracy and ability of learning...... methods to provide a proper searching space for data objects. The membership functions used by each method when dealing with critical objects are also evaluated. Our results show that relaxing the conditions of participation for data objects, in as many partitions as they can, is beneficial....
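
    For readers unfamiliar with the membership degrees discussed above, the standard fuzzy c-means membership update (a textbook formula, not necessarily the exact variant studied in the paper) assigns each object a degree in every cluster based on its relative distances to the cluster centers:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy c-means membership degrees u[i, k] of object i in cluster k
    (textbook update rule; the fuzzifier m controls the softness)."""
    # Pairwise distances from every object to every cluster center
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.fmax(d, 1e-12)                         # guard against zero distance
    ratio = d[:, :, None] / d[:, None, :]         # ratio[i, k, j] = d_ik / d_ij
    u = 1.0 / np.sum(ratio ** (2.0 / (m - 1.0)), axis=2)
    return u  # each row sums to 1
```

    Objects whose largest membership is only slightly above 1/c sit between clusters; these are the kind of "critical objects" whose reassignment can change the behaviour of the algorithm.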

  5. Lavine method applied to three body problems

    International Nuclear Information System (INIS)

    Mourre, Eric.

    1975-09-01

    The methods presently proposed for the three-body problem in quantum mechanics, which use the Faddeev approach to prove asymptotic completeness, run into new singularities when the two-particle interaction potentials v_α(x_α) decay less rapidly than |x_α|^(-2), and also when one tries to solve the problem in a representation space whose dimension per particle is lower than three. A method is given that allows the mathematical approach to be extended to the three-body problem in spite of these singularities. Applications are given. [fr]

  6. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  7. Applying Mixed Methods Techniques in Strategic Planning

    Science.gov (United States)

    Voorhees, Richard A.

    2008-01-01

    In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed methods, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…

  8. [The diagnostic methods applied in mycology].

    Science.gov (United States)

    Kurnatowska, Alicja; Kurnatowski, Piotr

    2008-01-01

    Systemic fungal invasions are recognized with increasing frequency and constitute a primary cause of morbidity and mortality, especially in immunocompromised patients. Early diagnosis improves prognosis but remains a problem, because on the one hand there is a lack of sensitive tests to aid in the diagnosis of systemic mycoses, and on the other hand patients present only unspecific signs and symptoms, thus delaying early diagnosis. The diagnosis depends upon a combination of clinical observation and laboratory investigation. Successful laboratory diagnosis of fungal infection depends in major part on the collection of appropriate clinical specimens for investigation and on the selection of appropriate microbiological test procedures. These problems (collection of specimens, direct techniques, staining methods, cultures on different media, and non-culture-based methods) are presented in this article.

  9. Monte Carlo method applied to medical physics

    International Nuclear Information System (INIS)

    Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.

    2000-01-01

    The main application of the Monte Carlo method to medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other applications: optimisation of the neutron field for Boron Neutron Capture Therapy and optimisation of a beam-tube filter for several purposes. The long computation time of Monte Carlo calculations - the main barrier to their intensive use - is being overcome by faster and cheaper computers. (author)
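
    The essence of the Monte Carlo method mentioned above is the stochastic simulation of individual particle histories. A deliberately simplified sketch (primary photons in a homogeneous slab, no scattering or energy deposition, so far from a real dose calculation) shows the core idea of sampling free paths:

```python
import math
import random

def transmitted_fraction(mu, thickness, n_photons=100_000, seed=1):
    """Fraction of primary photons crossing a homogeneous slab without
    interacting; mu is the linear attenuation coefficient (1/cm).
    Analytic answer: exp(-mu * thickness)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        # Sample the photon's free path from an exponential distribution
        path = -math.log(1.0 - rng.random()) / mu
        if path > thickness:
            transmitted += 1
    return transmitted / n_photons
```

    The estimate converges to the analytic Beer-Lambert value, but halving the statistical uncertainty requires four times as many histories, which is why computation time is the limiting factor the abstract mentions.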

  10. Proteomics methods applied to malaria: Plasmodium falciparum

    International Nuclear Information System (INIS)

    Cuesta Astroz, Yesid; Segura Latorre, Cesar

    2012-01-01

    Malaria is a parasitic disease that has a high impact on public health in developing countries. The sequencing of the Plasmodium falciparum genome and the development of proteomics have enabled a breakthrough in understanding the biology of the parasite. Proteomics has allowed the parasite's protein expression to be characterized qualitatively and quantitatively, and has provided information on protein expression under conditions of stress induced by antimalarials. Given the complexity of the parasite's life cycle, which takes place in the vertebrate host and the mosquito vector, it has proven difficult to characterize protein expression during each stage of the infection process in order to determine the proteome that mediates several metabolic, physiological and energetic processes. Two-dimensional electrophoresis, liquid chromatography and mass spectrometry have been useful for assessing the effects of antimalarials on parasite protein expression and for characterizing the proteomic profile of different P. falciparum stages and organelles. The purpose of this review is to present the state-of-the-art tools and advances in proteomics applied to the study of malaria, and to present the different experimental strategies used to study the parasite's proteome, showing the advantages and disadvantages of each one.

  11. METHOD OF APPLYING NICKEL COATINGS ON URANIUM

    Science.gov (United States)

    Gray, A.G.

    1959-07-14

    A method is presented for protectively coating uranium which comprises etching the uranium in an aqueous etching solution containing chloride ions, electroplating a coating of nickel on the etched uranium and heating the nickel plated uranium by immersion thereof in a molten bath composed of a material selected from the group consisting of sodium chloride, potassium chloride, lithium chloride, and mixtures thereof, maintained at a temperature of between 700 and 800 deg C, for a time sufficient to alloy the nickel and uranium and form an integral protective coating of corrosion-resistant uranium-nickel alloy.

  12. Versatile Formal Methods Applied to Quantum Information.

    Energy Technology Data Exchange (ETDEWEB)

    Witzel, Wayne [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Rudinger, Kenneth Michael [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sarovar, Mohan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    Using a novel formal methods approach, we have generated computer-verified proofs of major theorems pertinent to the quantum phase estimation algorithm. This was accomplished using our Prove-It software package in Python. While many formal methods tools are available, their practical utility is limited. Translating a problem of interest into these systems and working through the steps of a proof is an art form that requires much expertise. One must surrender to the preferences and restrictions of the tool regarding how mathematical notions are expressed and what deductions are allowed. Automation is a major driver that forces restrictions. Our focus, on the other hand, is to produce a tool that allows users to confirm proofs that are essentially known already. This goal is valuable in itself. We demonstrate the viability of our approach, which allows the user great flexibility in expressing statements and composing derivations. There were no major obstacles in following a textbook proof of the quantum phase estimation algorithm. There were tedious details of algebraic manipulations that we needed to implement (and a few that we did not have time to enter into our system) and some basic components that we needed to rethink, but there were no serious roadblocks. In the process, we made a number of convenient additions to our Prove-It package that will make certain algebraic manipulations easier to perform in the future. In fact, our intent is for our system to build upon itself in this manner.

  13. Optimization methods applied to hybrid vehicle design

    Science.gov (United States)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported on in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
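
    The study above selects three design parameters so that a cost measure is optimized subject to power-demand constraints. The overall structure of such a study can be sketched with a standard constrained optimizer; the smooth cost model and the 60 kW peak-power constraint below are made up to stand in for the vehicle simulation:

```python
from scipy.optimize import minimize

def life_cycle_cost(x):
    """Made-up smooth stand-in for the vehicle simulation: x is
    (battery weight [kg], heat engine rating [kW], power split [0..1])."""
    battery_kg, engine_kw, power_split = x
    acquisition = 0.02 * battery_kg + 0.5 * engine_kw
    fuel = 100.0 / (1.0 + 0.01 * battery_kg) * (1.0 - 0.3 * power_split)
    return acquisition + fuel

# Hypothetical requirement: the two sources together must meet a 60 kW peak
constraints = [{"type": "ineq",
                "fun": lambda x: 0.1 * x[0] + x[1] - 60.0}]
bounds = [(50.0, 500.0), (10.0, 100.0), (0.0, 1.0)]

result = minimize(life_cycle_cost, x0=[200.0, 50.0, 0.5],
                  bounds=bounds, constraints=constraints, method="SLSQP")
```

    Consistent with the paper's fourth and fifth conclusions, the cost and constraint here are smooth functions of the design variables, and the power constraint keeps battery weight and engine rating large enough to meet demand.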

  14. Advanced exergy-based analyses applied to a system including LNG regasification and electricity generation

    Energy Technology Data Exchange (ETDEWEB)

    Morosuk, Tatiana; Tsatsaronis, George; Boyano, Alicia; Gantiva, Camilo [Technische Univ. Berlin (Germany)

    2012-07-01

    Liquefied natural gas (LNG) will contribute more in the future than in the past to the overall energy supply in the world. The paper discusses the application of advanced exergy-based analyses to a recently developed LNG-based cogeneration system. These analyses include advanced exergetic, advanced exergoeconomic, and advanced exergoenvironmental analyses in which thermodynamic inefficiencies (exergy destruction), costs, and environmental impacts have been split into avoidable and unavoidable parts. With the aid of these analyses, the potentials for improving the thermodynamic efficiency and for reducing the overall cost and the overall environmental impact are revealed. The objectives of this paper are to demonstrate (a) the potential for generating electricity while regasifying LNG and (b) some of the capabilities associated with advanced exergy-based methods. The most important subsystems and components are identified, and suggestions for improving them are made. (orig.)

  15. Scanning probe methods applied to molecular electronics

    Energy Technology Data Exchange (ETDEWEB)

    Pavlicek, Niko

    2013-08-01

    Scanning probe methods on insulating films offer a rich toolbox for studying the electronic, structural, and spin properties of individual molecules. This work discusses three issues in the field of molecular and organic electronics. An STM head to be operated in high magnetic fields has been designed and built. The STM head is very compact and rigid, relying on a robust coarse-approach mechanism. This will facilitate investigations of the spin properties of individual molecules in the future. Combined STM/AFM studies revealed a reversible molecular switch based on two stable configurations of DBTH molecules on ultrathin NaCl films. AFM experiments visualize the molecular structure in both states. Our experiments allowed us to unambiguously determine the pathway of the switch. Finally, tunneling into and out of the frontier molecular orbitals of pentacene molecules has been investigated on different insulating films. These experiments show that the local symmetry of the initial and final electron wave functions is decisive for the ratio between elastic and vibration-assisted tunneling. The results can be generalized to electron transport in organic materials.

  16. Development of calculation method for one-dimensional kinetic analysis in fission reactors, including feedback effects

    International Nuclear Information System (INIS)

    Paixao, S.B.; Marzo, M.A.S.; Alvim, A.C.M.

    1986-01-01

    The calculation method used in the WIGLE code is studied. Since no detailed account of this method was available, it has been expounded minutely here. The method has been applied to the solution of the one-dimensional, two-group diffusion equations in slab geometry for axial analysis, including non-boiling heat transfer and accounting for feedback. A steady-state program (CITER-1D), written in FORTRAN 4, has been implemented, providing excellent results that confirm the quality of the work developed. (Author) [pt

  17. 75 FR 3251 - Applied Materials, Inc., Including On-Site Leased Workers From Adecco Employment Services...

    Science.gov (United States)

    2010-01-20

    ... NSTAR, Austin, Texas. The notice was published in the Federal Register on November 17, 2009 (74 FR 59253... Resources, SQA Services and NSTAR, Austin, TX; Amended Certification Regarding Eligibility To Apply for..., Proactive Business Solution, Inc., Technical Resources, SQA Services, and NSTAR, Austin, Texas, who became...

  18. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
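    As an illustration of the classical-methods baseline the abstract mentions (not the report's decision-theoretic method), the sketch below selects a polynomial model order by the Akaike information criterion; the data and candidate model class are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
# Hypothetical "system": a cubic response observed with noise.
y = 1.0 - 2.0 * x + 2.0 * x**3 + rng.normal(0, 0.05, x.size)

def aic(order):
    # Fit one candidate model and score it: n*log(RSS/n) + 2k trades
    # goodness of fit against the number of parameters k.
    coeffs = np.polyfit(x, y, order)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = order + 1
    return x.size * np.log(rss / x.size) + 2 * k

# The class of candidate models: polynomials of order 1 through 6.
best_order = min(range(1, 7), key=aic)
print("selected polynomial order:", best_order)
```

Note that this score uses only the data; as the abstract argues, it carries no notion of model use, which is exactly the limitation the decision-theoretic approach addresses.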

  19. Reflections on Mixing Methods in Applied Linguistics Research

    Science.gov (United States)

    Hashemi, Mohammad R.

    2012-01-01

    This commentary advocates the use of mixed methods research--that is the integration of qualitative and quantitative methods in a single study--in applied linguistics. Based on preliminary findings from a research project in progress, some reflections on the current practice of mixing methods as a new trend in applied linguistics are put forward.…

  20. A FILTRATION METHOD AND APPARATUS INCLUDING A ROLLER WITH PORES

    DEFF Research Database (Denmark)

    2008-01-01

    The present invention offers a method for separating dry matter from a medium. A separation chamber is at least partly defined by a plurality of rollers (2,7) and is capable of being pressure regulated. At least one of the rollers is a pore roller (7) having a surface with pores allowing permeabi...

  1. Applying homotopy analysis method for solving differential-difference equation

    International Nuclear Information System (INIS)

    Wang Zhen; Zou Li; Zhang Hongqing

    2007-01-01

    In this Letter, we apply the homotopy analysis method to solve differential-difference equations. A simple but typical example is used to illustrate the validity and the great potential of the generalized homotopy analysis method for solving differential-difference equations. Comparisons are made between the results of the proposed method and exact solutions. The results show that the homotopy analysis method is an attractive method for solving differential-difference equations.
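    For reference, the core construction of the homotopy analysis method is the zeroth-order deformation equation, written here in the standard generic notation (this is a sketch of the method in general, not the Letter's specific equations):

```latex
% L is an auxiliary linear operator, N the nonlinear operator of the
% equation to be solved, u_0 an initial guess, hbar the convergence-control
% parameter, and q in [0,1] the embedding parameter.
(1 - q)\,\mathcal{L}\bigl[\phi(t;q) - u_0(t)\bigr]
   = q\,\hbar\,\mathcal{N}\bigl[\phi(t;q)\bigr],
\qquad
\phi(t;q) = u_0(t) + \sum_{m=1}^{\infty} u_m(t)\,q^m .
```

Setting $q=1$ recovers the solution series $u(t)=u_0(t)+\sum_{m\ge 1} u_m(t)$; in the differential-difference setting the operator $\mathcal{N}$ couples neighbouring lattice sites.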

  2. Composite material including nanocrystals and methods of making

    Science.gov (United States)

    Bawendi, Moungi G.; Sundar, Vikram C.

    2010-04-06

    Temperature-sensing compositions can include an inorganic material, such as a semiconductor nanocrystal. The nanocrystal can be a dependable and accurate indicator of temperature. The intensity of emission of the nanocrystal varies with temperature and can be highly sensitive to surface temperature. The nanocrystals can be processed with a binder to form a matrix, which can be varied by altering the chemical nature of the surface of the nanocrystal. A nanocrystal with a compatibilizing outer layer can be incorporated into a coating formulation and retain its temperature sensitive emissive properties.

  3. Methods for forming complex oxidation reaction products including superconducting articles

    International Nuclear Information System (INIS)

    Rapp, R.A.; Urquhart, A.W.; Nagelberg, A.S.; Newkirk, M.S.

    1992-01-01

    This patent describes a method for producing a superconducting complex oxidation reaction product of two or more metals in an oxidized state. It comprises positioning at least one parent metal source comprising one of the metals adjacent to a permeable mass comprising at least one metal-containing compound capable of reaction to form the complex oxidation reaction product in step below, the metal component of the at least one metal-containing compound comprising at least a second of the two or more metals, and orienting the parent metal source and the permeable mass relative to each other so that formation of the complex oxidation reaction product will occur in a direction towards and into the permeable mass; and heating the parent metal source in the presence of an oxidant to a temperature region above its melting point to form a body of molten parent metal to permit infiltration and reaction of the molten parent metal into the permeable mass and with the oxidant and the at least one metal-containing compound to form the complex oxidation reaction product, and progressively drawing the molten parent metal source through the complex oxidation reaction product towards the oxidant and towards and into the adjacent permeable mass so that fresh complex oxidation reaction product continues to form within the permeable mass; and recovering the resulting complex oxidation reaction product

  4. Membrane for distillation including nanostructures, methods of making membranes, and methods of desalination and separation

    KAUST Repository

    Lai, Zhiping; Huang, Kuo-Wei; Chen, Wei

    2016-01-01

    In accordance with the purpose(s) of the present disclosure, as embodied and broadly described herein, embodiments of the present disclosure provide membranes, methods of making the membrane, systems including the membrane, methods of separation, methods of desalination, and the like.

  5. Membrane for distillation including nanostructures, methods of making membranes, and methods of desalination and separation

    KAUST Repository

    Lai, Zhiping

    2016-01-21

    In accordance with the purpose(s) of the present disclosure, as embodied and broadly described herein, embodiments of the present disclosure provide membranes, methods of making the membrane, systems including the membrane, methods of separation, methods of desalination, and the like.

  6. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  7. Multiple shooting applied to robust reservoir control optimization including output constraints on coherent risk measures

    DEFF Research Database (Denmark)

    Codas, Andrés; Hanssen, Kristian G.; Foss, Bjarne

    2017-01-01

    The production life of oil reservoirs starts under significant uncertainty regarding the actual economical return of the recovery process due to the lack of oil field data. Consequently, investors and operators make management decisions based on a limited and uncertain description of the reservoir. ... In this work, we propose a new formulation for robust optimization of reservoir well controls. It is inspired by the multiple shooting (MS) method which permits a broad range of parallelization opportunities and output constraint handling. This formulation exploits coherent risk measures, a concept...

  8. Printing method and printer used for applying this method

    NARCIS (Netherlands)

    2006-01-01

    The invention pertains to a method for transferring ink to a receiving material using an inkjet printer having an ink chamber (10) with a nozzle (8) and an electromechanical transducer (16) in cooperative connection with the ink chamber, comprising actuating the transducer to generate a pressure

  9. SDRE control strategy applied to a nonlinear robotic including drive motor

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Jeferson J. de, E-mail: jefersonjl82@gmail.com; Tusset, Angelo M., E-mail: tusset@utfpr.edu.br; Janzen, Frederic C., E-mail: fcjanzen@utfpr.edu.br; Piccirillo, Vinicius, E-mail: piccirillo@utfpr.edu.br; Nascimento, Claudinor B., E-mail: claudinor@utfpr.edu.br [UTFPR-PONTA GROSSA, PR (Brazil)]; Balthazar, José M., E-mail: jmbaltha@rc.unesp.br [UNESP-BAURU, SP (Brazil)]; Brasil, Reyolando M. L. R. da Fonseca, E-mail: reyolando.brasil@ufabc.edu.br [UFABC-SANTO ANDRE, SP (Brazil)]

    2014-12-10

    A robotic control design considering all the inherent nonlinearities of the robot-engine configuration is developed. The interactions between the robot and the joint motor drive mechanism are considered. The proposed control combines two strategies: a feedforward control to keep the system at the desired coordinate, and a feedback control to drive the system to that coordinate. The feedback control is obtained using the State-Dependent Riccati Equation (SDRE). For link positioning two cases are considered. Case I: only the motor voltage is used for position control; Case II: both the motor voltage and the torque between the links are used. Simulation results, including parametric uncertainties in the control, show the feasibility of the proposed control for the considered system.
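    A minimal sketch of one SDRE feedback step, applied to a single pendulum-like link rather than the paper's full robot-motor model (all parameters are illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# x1 = joint angle, x2 = angular rate. The nonlinearity sin(x1) is
# factored into a state-dependent matrix A(x) via sin(x1)/x1, the usual
# SDC parametrization; parameters below are illustrative.
g_over_l, damping = 9.81, 0.5

def sdre_gain(x, Q=np.eye(2), R=np.array([[1.0]])):
    x1 = x[0] if abs(x[0]) > 1e-9 else 1e-9   # avoid 0/0 in sin(x1)/x1
    A = np.array([[0.0, 1.0],
                  [-g_over_l * np.sin(x1) / x1, -damping]])
    B = np.array([[0.0], [1.0]])
    P = solve_continuous_are(A, B, Q, R)       # Riccati solve at this state
    return np.linalg.inv(R) @ B.T @ P          # feedback gain K(x)

# One control step: u = -K(x) x, with K(x) re-evaluated along the trajectory.
x = np.array([0.8, 0.0])
u = -sdre_gain(x) @ x
print("control input:", u)
```

The point of the method is visible in `sdre_gain`: the linear-quadratic machinery is reused, but the Riccati equation is re-solved at every state, so the gain tracks the nonlinearity.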

  10. Dislocation concepts applied to fatigue properties of austenitic stainless steels including time-dependent modes

    Energy Technology Data Exchange (ETDEWEB)

    Tavassoli, A.A.

    1986-10-01

    Dislocation substructures formed in austenitic stainless steel 304L and 316L, fatigued at 673 K, 823 K and 873 K under total imposed strain ranges of 0.7 to 2.25%, and their correlation with mechanical properties have been investigated. In addition substructures formed at lower strain ranges have been examined using foils prepared from parts of the specimens with larger cross-sections. Investigation has also been extended to include the effect of intermittent hold-times up to 1.8 x 10/sup 4/s and sequential creep-fatigue and fatigue-creep. The experimental results obtained are analysed and their implications for current dislocation concepts and mechanical properties are discussed.

  11. Hybrid electrokinetic method applied to mix contaminated soil

    Energy Technology Data Exchange (ETDEWEB)

    Mansour, H.; Maria, E. [Dept. of Building Civil and Environmental Engineering, Concordia Univ., Montreal (Canada)

    2001-07-01

    Several industrial and municipal areas in North America are contaminated with heavy metals and petroleum products. This mixed contamination presents a particularly difficult remediation task when it occurs in clayey soil. The objective of this research was to find a method to clean up mixed contaminated clayey soils. A multifunctional hybrid electrokinetic method was investigated. Clayey soil was contaminated with lead and nickel (heavy metals) at a level of 1000 ppm and with phenanthrene (a PAH) at 600 ppm. An electrokinetic surfactant supply system was applied for the mobilization, transport, and removal of phenanthrene. A chelating agent (EDTA) was also supplied electrokinetically to mobilize the heavy metals. The studies were performed on 8 lab-scale electrokinetic cells. The mixed contaminated clayey soil was subjected to a DC total voltage gradient of 0.3 V/cm. The supplied liquids (surfactant and EDTA) were introduced over different periods of time (22 days, 42 days) in order to maximize the removal of contaminants. The pH, electrical parameters, volume supplied, and volume discharged were monitored continuously during each experiment. At the end of these tests the soil and catholyte were subjected to physico-chemical analysis. The paper discusses the results of the experiments, including the optimal energy use, the removal efficiency of phenanthrene, and the transport and removal of heavy metals. The results of this study can be applied to in-situ hybrid electrokinetic technology to remediate clayey sites contaminated with petroleum products mixed with heavy metals (e.g. manufactured gas plant sites). (orig.)

  12. Discrimination symbol applying method for sintered nuclear fuel product

    International Nuclear Information System (INIS)

    Ishizaki, Jin

    1998-01-01

    The present invention provides a method for applying discrimination information, such as the enrichment degree, to the end face of a sintered nuclear fuel product. Namely, discrimination symbols carrying powder information are applied with a sintering aid to the end face of a member formed by molding nuclear fuel powders under pressure. Then, the molded product is sintered. The sintering aid comprises aluminum oxide, a mixture of aluminum oxide and silicon dioxide, aluminum hydride, or aluminum stearate, alone or in admixture. As a means of applying the sintering aid, the discrimination symbols are drawn with isostearic acid on the end face of the molded product and the sintering aid is sprayed onto them, or the sintering aid is applied directly, or the sintering aid is suspended in isostearic acid and the suspension is applied with a brush. As a result, visible discrimination information can easily be applied to the sintered member. (N.H.)

  13. Building "Applied Linguistic Historiography": Rationale, Scope, and Methods

    Science.gov (United States)

    Smith, Richard

    2016-01-01

    In this article I argue for the establishment of "Applied Linguistic Historiography" (ALH), that is, a new domain of enquiry within applied linguistics involving a rigorous, scholarly, and self-reflexive approach to historical research. Considering issues of rationale, scope, and methods in turn, I provide reasons why ALH is needed and…

  14. Analytic methods in applied probability in memory of Fridrikh Karpelevich

    CERN Document Server

    Suhov, Yu M

    2002-01-01

    This volume is dedicated to F. I. Karpelevich, an outstanding Russian mathematician who made important contributions to applied probability theory. The book contains original papers focusing on several areas of applied probability and its uses in modern industrial processes, telecommunications, computing, mathematical economics, and finance. It opens with a review of Karpelevich's contributions to applied probability theory and includes a bibliography of his works. Other articles discuss queueing network theory, in particular, in heavy traffic approximation (fluid models). The book is suitable

  15. Applying Mixed Methods Research at the Synthesis Level: An Overview

    Science.gov (United States)

    Heyvaert, Mieke; Maes, Bea; Onghena, Patrick

    2011-01-01

    Historically, qualitative and quantitative approaches have been applied relatively separately in synthesizing qualitative and quantitative evidence, respectively, in several research domains. However, mixed methods approaches are becoming increasingly popular nowadays, and practices of combining qualitative and quantitative research components at…

  16. Quantitative EEG Applying the Statistical Recognition Pattern Method

    DEFF Research Database (Denmark)

    Engedal, Knut; Snaedal, Jon; Hoegh, Peter

    2015-01-01

    BACKGROUND/AIM: The aim of this study was to examine the discriminatory power of quantitative EEG (qEEG) applying the statistical pattern recognition (SPR) method to separate Alzheimer's disease (AD) patients from elderly individuals without dementia and from other dementia patients. METHODS...

  17. Applying systems ergonomics methods in sport: A systematic review.

    Science.gov (United States)

    Hulme, Adam; Thompson, Jason; Plant, Katherine L; Read, Gemma J M; Mclean, Scott; Clacy, Amanda; Salmon, Paul M

    2018-04-16

    As sports systems become increasingly more complex, competitive, and technology-centric, there is a greater need for systems ergonomics methods to consider the performance, health, and safety of athletes in context with the wider settings in which they operate. Therefore, the purpose of this systematic review was to identify and critically evaluate studies which have applied a systems ergonomics research approach in the context of sports performance and injury management. Five databases (PubMed, Scopus, ScienceDirect, Web of Science, and SPORTDiscus) were searched for the dates 01 January 1990 to 01 August 2017, inclusive, for original peer-reviewed journal articles and conference papers. Reported analyses were underpinned by a recognised systems ergonomics method, and study aims were related to the optimisation of sports performance (e.g. communication, playing style, technique, tactics, or equipment), and/or the management of sports injury (i.e. identification, prevention, or treatment). A total of seven articles were identified. Two articles were focussed on understanding and optimising sports performance, whereas five examined sports injury management. The methods used were the Event Analysis of Systemic Teamwork, Cognitive Work Analysis (the Work Domain Analysis Abstraction Hierarchy), Rasmussen's Risk Management Framework, and the Systems Theoretic Accident Model and Processes method. The individual sport application was distance running, whereas the team sports contexts examined were cycling, football, Australian Football League, and rugby union. The included systems ergonomics applications were highly flexible, covering both amateur and elite sports contexts. The studies were rated as valuable, providing descriptions of injury controls and causation, the factors influencing injury management, the allocation of responsibilities for injury prevention, as well as the factors and their interactions underpinning sports performance. Implications and future

  18. A Lagrangian meshfree method applied to linear and nonlinear elasticity.

    Science.gov (United States)

    Walker, Wade A

    2017-01-01

    The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.

  19. Comparison of different methods to include recycling in LCAs of aluminium cans and disposable polystyrene cups.

    Science.gov (United States)

    van der Harst, Eugenie; Potting, José; Kroeze, Carolien

    2016-02-01

    Many methods have been reported and used to include recycling in life cycle assessments (LCAs). This paper evaluates six widely used methods: three substitution methods (i.e. substitution based on equal quality, a correction factor, and alternative material), allocation based on the number of recycling loops, the recycled-content method, and the equal-share method. These six methods were first compared, with an assumed hypothetical 100% recycling rate, for an aluminium can and a disposable polystyrene (PS) cup. The substitution and recycled-content method were next applied with actual rates for recycling, incineration and landfilling for both product systems in selected countries. The six methods differ in their approaches to credit recycling. The three substitution methods stimulate the recyclability of the product and assign credits for the obtained recycled material. The choice to either apply a correction factor, or to account for alternative substituted material has a considerable influence on the LCA results, and is debatable. Nevertheless, we prefer incorporating quality reduction of the recycled material by either a correction factor or an alternative substituted material over simply ignoring quality loss. The allocation-on-number-of-recycling-loops method focusses on the life expectancy of material itself, rather than on a specific separate product. The recycled-content method stimulates the use of recycled material, i.e. credits the use of recycled material in products and ignores the recyclability of the products. The equal-share method is a compromise between the substitution methods and the recycled-content method. The results for the aluminium can follow the underlying philosophies of the methods. The results for the PS cup are additionally influenced by the correction factor or credits for the alternative material accounting for the drop in PS quality, the waste treatment management (recycling rate, incineration rate, landfilling rate), and the
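    The difference in crediting between two of the evaluated approaches can be sketched numerically; all impact values and rates below are hypothetical, chosen only to show how the two methods diverge:

```python
# Hypothetical per-kg impacts and rates, for illustration only.
E_virgin, E_recycled = 10.0, 2.0   # kg CO2-eq per kg of material produced
recycled_content = 0.4             # share of recycled material in the input
recycling_rate = 0.7               # share recycled at end of life
quality_factor = 0.8               # correction factor for quality loss

# Recycled-content method: credit the recycled input, ignore end of life.
impact_rc = (1 - recycled_content) * E_virgin + recycled_content * E_recycled

# Substitution with a correction factor: credit the avoided virgin
# production at end of life, downgraded for the quality loss.
impact_sub = E_virgin - recycling_rate * quality_factor * (E_virgin - E_recycled)

print(impact_rc, impact_sub)
```

With these numbers the recycled-content method yields 6.8 and the corrected substitution method about 5.52 kg CO2-eq: the former rewards recycled input, the latter rewards recyclability, which is precisely the methodological split the paper examines.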

  20. Flood Hazard Mapping by Applying Fuzzy TOPSIS Method

    Science.gov (United States)

    Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.

    2017-12-01

    There are many technical methods for integrating various factors into flood hazard mapping. The purpose of this study is to suggest a methodology for integrated flood hazard mapping using MCDM (Multi Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, to apply MCDM to assessing flood risk, maximum flood depth, maximum velocity, and maximum travel time are taken as criteria, and the element units to which they apply are taken as alternatives. Finding the efficient alternative closest to an ideal value is an appropriate way to assess the flood risk of many element units (alternatives) based on various flood indices. Therefore TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices for flood hazard mapping (maximum flood depth, maximum velocity, and maximum travel time) carry uncertainty in the simulation results, since their values vary with the flood scenario and topographical conditions. This kind of ambiguity in the criteria can cause uncertainty in the flood hazard map. To handle the ambiguity and uncertainty of the criteria, fuzzy logic, which can handle ambiguous expressions, is introduced. In this paper, we produced a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identified the areas with the highest hazard grade in the resulting integrated flood hazard map, and the produced flood hazard map can be compared with the existing flood risk maps. We also expect that applying the flood hazard mapping methodology suggested in this paper to the production of current flood risk maps will yield new flood hazard maps that consider the priorities of hazard areas and include more varied and important information than before. Keywords: Flood hazard map; levee breach analysis; 2D analysis; MCDM; Fuzzy TOPSIS
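    A crisp (non-fuzzy) TOPSIS ranking over a few hypothetical grid cells can be sketched as follows; the fuzzy extension used in the study replaces these crisp ratings with fuzzy numbers, but the ideal/anti-ideal closeness logic is the same:

```python
import numpy as np

# Rows = grid cells, columns = (max depth, max velocity, max travel time).
# Values and weights are hypothetical. Depth and velocity increase hazard;
# a longer travel time decreases it.
X = np.array([[2.0, 1.5, 30.0],
              [0.5, 0.3, 120.0],
              [1.2, 0.8, 60.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, False])   # True = criterion raises hazard

V = weights * X / np.linalg.norm(X, axis=0)      # normalized, weighted matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)        # distance to ideal point
d_neg = np.linalg.norm(V - anti, axis=1)         # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)              # 1 = most hazardous cell
print(closeness)
```

Here the first cell (deepest, fastest, shortest arrival) scores closest to the ideal hazard point and would receive the highest hazard grade on the map.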

  1. Applying the Taguchi method for optimized fabrication of bovine ...

    African Journals Online (AJOL)

    SERVER

    2008-02-19

    Feb 19, 2008 ... Nanobiotechnology Research Lab., School of Chemical Engineering, Babol University of Technology, Po.Box: 484, ... nanoparticle by applying the Taguchi method with characterization of the ... of BSA/ethanol and organic solvent adding rate. ... Sodium azide and all other chemicals were purchased from.

  2. Aircraft operability methods applied to space launch vehicles

    Science.gov (United States)

    Young, Douglas

    1997-01-01

    The commercial space launch market requirement for low vehicle operations costs necessitates the application of methods and technologies developed and proven for complex aircraft systems. The "building in" of reliability and maintainability, which is applied extensively in the aircraft industry, has yet to be applied to the maximum extent possible on launch vehicles. Use of vehicle system and structural health monitoring, automated ground systems and diagnostic design methods derived from aircraft applications support the goal of achieving low cost launch vehicle operations. Transforming these operability techniques to space applications where diagnostic effectiveness has significantly different metrics is critical to the success of future launch systems. These concepts will be discussed with reference to broad launch vehicle applicability. Lessons learned and techniques used in the adaptation of these methods will be outlined drawing from recent aircraft programs and implementation on phase 1 of the X-33/RLV technology development program.

  3. Magnetic stirring welding method applied to nuclear power plant

    International Nuclear Information System (INIS)

    Hirano, Kenji; Watando, Masayuki; Morishige, Norio; Enoo, Kazuhide; Yasuda, Yuuji

    2002-01-01

    In the construction of a new nuclear power plant, carbon steel and stainless steel are used as base materials for the bottom liner plate of the Reinforced Concrete Containment Vessel (RCCV) to meet the maintenance-free requirement while securing sufficient structural strength. However, welding such dissimilar metals is difficult by ordinary methods. To overcome this difficulty, the automated Magnetic Stirring Welding (MSW) method, which can demonstrate good welding performance, was studied for practical use, and weldability tests showed good results. Based on the study, a new welding device for the MSW method was developed to apply it to weld joints of dissimilar materials, and it was put to practical use in part of a nuclear power plant. (author)

  4. Methodical Aspects of Applying Strategy Map in an Organization

    OpenAIRE

    Piotr Markiewicz

    2013-01-01

    One of important aspects of strategic management is the instrumental aspect included in a rich set of methods and techniques used at particular stages of strategic management process. The object of interest in this study is the development of views and the implementation of strategy as an element of strategic management and instruments in the form of methods and techniques. The commonly used method in strategy implementation and measuring progress is Balanced Scorecard (BSC). The method was c...

  5. Linear algebraic methods applied to intensity modulated radiation therapy.

    Science.gov (United States)

    Crooks, S M; Xing, L

    2001-10-01

    Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as applied to IMRT is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.
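The abstract's framing of beam-weight selection as a linear-algebra problem can be illustrated with a toy non-negative least-squares fit. The influence matrix, voxel count, and prescribed doses below are hypothetical, and NNLS is only one of several fitting strategies consistent with the quadratic forms the paper describes.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical influence matrix A: dose delivered to 4 voxels per unit
# weight of each of 3 beamlets.
A = np.array([
    [1.0, 0.2, 0.1],
    [0.3, 1.0, 0.2],
    [0.1, 0.4, 1.0],
    [0.2, 0.2, 0.2],   # organ-at-risk voxel: should stay low
])
d_target = np.array([1.0, 1.0, 1.0, 0.2])  # prescribed dose per voxel

# Solve min ||A w - d||^2 subject to w >= 0 (beam weights are nonnegative).
w, residual = nnls(A, d_target)
dose = A @ w

print("beam weights:", np.round(w, 3))
print("achieved dose:", np.round(dose, 3))
```

The residual returned by `nnls` is the Euclidean norm of `A w - d`, a direct measure of how well the quadratic objective is met.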

  6. Methods of applied mathematics with a software overview

    CERN Document Server

    Davis, Jon H

    2016-01-01

    This textbook, now in its second edition, provides students with a firm grasp of the fundamental notions and techniques of applied mathematics as well as the software skills to implement them. The text emphasizes the computational aspects of problem solving as well as the limitations and implicit assumptions inherent in the formal methods. Readers are also given a sense of the wide variety of problems in which the presented techniques are useful. Broadly organized around the theme of applied Fourier analysis, the treatment covers classical applications in partial differential equations and boundary value problems, and a substantial number of topics associated with Laplace, Fourier, and discrete transform theories. Some advanced topics are explored in the final chapters such as short-time Fourier analysis and geometrically based transforms applicable to boundary value problems. The topics covered are useful in a variety of applied fields such as continuum mechanics, mathematical physics, control theory, and si...

  7. Data Mining Methods Applied to Flight Operations Quality Assurance Data: A Comparison to Standard Statistical Methods

    Science.gov (United States)

    Stolzer, Alan J.; Halford, Carl

    2007-01-01

    In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance (FOQA)-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.

  8. Which DTW Method Applied to Marine Univariate Time Series Imputation

    OpenAIRE

    Phan , Thi-Thu-Hong; Caillault , Émilie; Lefebvre , Alain; Bigand , André

    2017-01-01

    International audience; Missing data are ubiquitous in any domains of applied sciences. Processing datasets containing missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Therefore, the aim of this paper is to build a framework for filling missing values in univariate time series and to perform a comparison of different similarity metrics used for the imputation task. This allows to suggest the most suitable methods for the imp...
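Although the abstract is truncated, the core tool it names, Dynamic Time Warping, is standard and can be sketched. Below is a minimal DTW distance computed by dynamic programming; the example series are hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-programming DTW between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])  # same shape, delayed one step

print(dtw_distance(x, y))   # 0.0: warping absorbs the time shift
print(dtw_distance(x, -x))  # > 0: the shapes genuinely differ
```

For imputation, such a distance is used to find the sub-sequence elsewhere in the series most similar to the data around a gap, whose values then fill the gap.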

  9. Applying Qualitative Research Methods to Narrative Knowledge Engineering

    OpenAIRE

    O'Neill, Brian; Riedl, Mark

    2014-01-01

    We propose a methodology for knowledge engineering for narrative intelligence systems, based on techniques used to elicit themes in qualitative methods research. Our methodology uses coding techniques to identify actions in natural language corpora, and uses these actions to create planning operators and procedural knowledge, such as scripts. In an iterative process, coders create a taxonomy of codes relevant to the corpus, and apply those codes to each element of that corpus. These codes can...

  10. APPLYING SPECTROSCOPIC METHODS ON ANALYSES OF HAZARDOUS WASTE

    OpenAIRE

    Dobrinić, Julijan; Kunić, Marija; Ciganj, Zlatko

    2000-01-01

    The paper presents results of measuring the content of heavy and other metals in waste samples from the hazardous waste disposal site of Sovjak near Rijeka. The preliminary design elaboration and the choice of the waste disposal sanification technology were preceded by the sampling and physico-chemical analyses of the disposed waste, enabling its categorization. The following spectroscopic methods were applied to the metal content analysis: Atomic absorption spectroscopy (AAS) and plas...

  11. A new method of AHP applied to personal credit evaluation

    Institute of Scientific and Technical Information of China (English)

    JIANG Ming-hui; XIONG Qi; CAO Jing

    2006-01-01

    This paper presents a new negative judgment matrix that combines the advantages of the reciprocal judgment matrix and the fuzzy complementary judgment matrix, and then puts forth the properties of this new matrix. In view of these properties, this paper derives a clear sequencing formula for the new negative judgment matrix, which improves the sequencing principle of AHP. Finally, this new method is applied to personal credit evaluation to show its advantages of conciseness and swiftness.
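The paper's negative judgment matrix cannot be reconstructed from the abstract, but the classical AHP sequencing it improves upon can be sketched. The code below derives priority weights from a standard reciprocal judgment matrix via the principal eigenvector; the judgments themselves are hypothetical.

```python
import numpy as np

# Hypothetical reciprocal pairwise-comparison matrix: A[i, j] says how much
# more important criterion i is than j, with A[j, i] = 1 / A[i, j].
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priorities = principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Saaty consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.58  # RI = 0.58 is Saaty's random index for n = 3
print("priorities:", np.round(w, 3), " CR:", round(CR, 3))  # CR < 0.1: consistent
```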

  12. Novel biodosimetry methods applied to victims of the Goiania accident

    International Nuclear Information System (INIS)

    Straume, T.; Langlois, R.G.; Lucas, J.; Jensen, R.H.; Bigbee, W.L.; Ramalho, A.T.; Brandao-Mello, C.E.

    1991-01-01

    Two biodosimetric methods under development at the Lawrence Livermore National Laboratory were applied to five persons accidentally exposed to a 137Cs source in Goiania, Brazil. The methods used were somatic null mutations at the glycophorin A locus, detected as missing proteins on the surface of blood erythrocytes, and chromosome translocations in blood lymphocytes, detected using fluorescence in-situ hybridization. Biodosimetric results obtained approximately 1 y after the accident using these new and largely unvalidated methods are in general agreement with results obtained immediately after the accident using dicentric chromosome aberrations. Additional follow-up of Goiania accident victims will (1) help provide the information needed to validate these new methods for use in biodosimetry and (2) provide independent estimates of dose.

  13. Newton-Krylov methods applied to nonequilibrium radiation diffusion

    International Nuclear Information System (INIS)

    Knoll, D.A.; Rider, W.J.; Olsen, G.L.

    1998-01-01

    The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus eliminating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, as compared to an operator-split approach where nonlinearities are not converged within a time step.
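The abstract gives no code; as an illustration of the matrix-free idea, the sketch below uses SciPy's `newton_krylov`, which approximates Jacobian-vector products by finite differences so that no Jacobian is ever formed or stored. The equation, grid size, and tolerance are hypothetical choices, not the paper's radiation system.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy stand-in for a nonlinear diffusion problem (not the paper's radiation
# system): solve u'' = u**4 - 1 on (0, 1) with u(0) = u(1) = 0.
N = 50
h = 1.0 / (N + 1)

def residual(u):
    d2u = np.empty_like(u)
    d2u[0] = (u[1] - 2 * u[0]) / h**2            # left boundary u(0) = 0
    d2u[-1] = (u[-2] - 2 * u[-1]) / h**2         # right boundary u(1) = 0
    d2u[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    return d2u - u**4 + 1.0

# Matrix-free: Jacobian-vector products are approximated by finite
# differences inside the Krylov solver; no Jacobian is formed or stored.
u = newton_krylov(residual, np.zeros(N), f_tol=1e-8)
print("max |residual|:", np.abs(residual(u)).max())
```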

  14. Applying multi-resolution numerical methods to geodynamics

    Science.gov (United States)

    Davies, David Rhodri

    Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled

  15. GPS surveying method applied to terminal area navigation flight experiments

    Energy Technology Data Exchange (ETDEWEB)

    Murata, M; Shingu, H; Satsushima, K; Tsuji, T; Ishikawa, K; Miyazawa, Y; Uchida, T [National Aerospace Laboratory, Tokyo (Japan)

    1993-03-01

    With the objective of evaluating the accuracy of new landing and navigation systems, such as the microwave landing guidance system and the global positioning satellite (GPS) system, flight experiments are being carried out using an experimental aircraft. The aircraft carries a GPS receiver, and its accuracy is evaluated by comparing the navigation results with reference trajectories estimated by a Kalman filter from laser tracking data on the aircraft. The GPS outputs position and velocity information in an earth-centered, earth-fixed system called the World Geodetic System 1984 (WGS84). However, in order to compare the navigation results with output from a reference orbit sensor or another navigation sensor, it is necessary to construct a high-precision reference coordinate system based on the WGS84. A method that applies GPS phase interference measurement to this problem was proposed, and actually used in analyzing flight experiment data. When applied to evaluating stand-alone navigation accuracy, the method proved sufficiently effective and reliable, not only for navigation method analysis but also for navigational operations. 12 refs., 10 figs., 5 tabs.

  16. Analysis of concrete beams using applied element method

    Science.gov (United States)

    Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen

    2018-03-01

    The Applied Element Method (AEM) is a displacement-based method of structural analysis. Some of its features are similar to those of the Finite Element Method (FEM). In AEM, the structure is analysed by dividing it into several elements, as in FEM. But in AEM, elements are connected by springs instead of nodes as in the case of FEM. In this paper, the background to AEM is discussed and the necessary equations are derived. To illustrate the application of AEM, it has been used to analyse a plain concrete beam with fixed support conditions. The analysis is limited to 2-dimensional structures. It was found that the number of springs does not have much influence on the results. AEM could predict deflections and reactions with a reasonable degree of accuracy.

  17. The Lattice Boltzmann Method applied to neutron transport

    International Nuclear Information System (INIS)

    Erasmus, B.; Van Heerden, F. A.

    2013-01-01

    In this paper the applicability of the Lattice Boltzmann Method to neutron transport is investigated. One of the main features of the Lattice Boltzmann method is the simultaneous discretization of the phase space of the problem, whereby particles are restricted to move on a lattice. An iterative solution of the operator form of the neutron transport equation is presented here, with the first collision source as the starting point of the iteration scheme. A full description of the discretization scheme is given, along with the quadrature set used for the angular discretization. An angular refinement scheme is introduced to increase the angular coverage of the problem phase space and to mitigate lattice ray effects. The method is applied to a model problem to investigate its applicability to neutron transport, and the results are compared to a reference solution calculated using MCNP. (authors)
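The paper's transport scheme, with its angular quadrature and first-collision source, cannot be reconstructed from the abstract, but the stream-and-collide structure it builds on can be. Below is a minimal, hypothetical D1Q2 lattice Boltzmann solver for plain 1-D diffusion; all parameters are illustrative.

```python
import numpy as np

# Minimal D1Q2 lattice Boltzmann solver for 1-D diffusion, illustrating the
# stream-and-collide structure (illustrative only; the paper's neutron
# transport scheme uses a richer angular quadrature and collision source).
nx, steps, tau = 200, 2000, 0.8
f = np.zeros((2, nx))                    # f[0]: right-movers, f[1]: left-movers
rho0 = np.zeros(nx)
rho0[nx // 2] = 1.0                      # initial pulse of "particles"
f[0] = rho0 / 2
f[1] = rho0 / 2

for _ in range(steps):
    rho = f[0] + f[1]                    # local density
    feq = rho / 2                        # equilibrium: isotropic split
    f += (feq - f) / tau                 # BGK collision (conserves rho)
    f[0] = np.roll(f[0], 1)              # stream right (periodic boundary)
    f[1] = np.roll(f[1], -1)             # stream left

rho = f[0] + f[1]
print("total mass:", rho.sum())          # conserved at 1.0
print("peak density:", rho.max())        # pulse has diffused and flattened
```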

  18. Advanced methods for image registration applied to JET videos

    Energy Technology Data Exchange (ETDEWEB)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)

    2015-10-15

    Graphical abstract: - Highlights: • Development of an image registration method for JET IR and fast visible cameras. • Method based on SIFT descriptors and the coherent point drift point-set registration technique. • Method able to deal with extremely noisy images and very low luminosity images. • Computation time compatible with inter-shot analysis. - Abstract: The last years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera-based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses, while large ones may arise during disruptions. Some cameras show a correlation of image movement with change of magnetic field strength. For deriving unaltered information from the videos and for allowing correct interpretation, an image registration method based on highly distinctive scale invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) point-set registration technique has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibration correction to videos collected by the JET wide angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved able to deal with the images provided by this camera, which are frequently characterized by low contrast and a high level of blurring and noise.
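The SIFT + CPD pipeline itself is not reproducible from the abstract, but the alignment step it feeds, recovering a rigid motion from matched feature points, can be illustrated with a least-squares (Kabsch) fit. The point sets and the 3-degree rotation below are hypothetical stand-ins for features matched between two video frames.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch).
    P, Q: (n, 2) arrays of already-matched feature points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance of the sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical "frame" points: a rotated and shifted copy of a reference set.
rng = np.random.default_rng(0)
P = rng.uniform(0, 100, size=(40, 2))
theta = np.deg2rad(3.0)                        # small camera rotation
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, -1.5])                 # small field-of-view shift
Q = P @ R_true.T + t_true

R, t = rigid_align(P, Q)
err = np.abs(P @ R.T + t - Q).max()
print("max registration error:", err)
```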

  19. Classification of Specialized Farms Applying Multivariate Statistical Methods

    Directory of Open Access Journals (Sweden)

    Zuzana Hloušková

    2017-01-01

    Full Text Available The paper is aimed at the application of advanced multivariate statistical methods to classifying cattle-breeding farming enterprises by their economic size. The advantage of the model is its ability to use a few selected indicators, compared with the complex methodology of the current classification model, which requires knowledge of the detailed structure of the herd turnover and the structure of cultivated crops. The output of the paper is intended to be applied within farm structure research focused on the future development of Czech agriculture. As the data source, the farming enterprises database for 2014 from the FADN CZ system has been used. The predictive model proposed exploits knowledge of the actual size classes of the farms tested. Outcomes of the linear discriminant analysis multifactor classification method have confirmed reliable classification of the Small farms (98% classified correctly) and of the Large and Very Large enterprises (100% classified correctly). The Medium Size farms have been classified correctly in only 58.11% of cases. Partial shortcomings of the presented procedure were found when discriminating between Medium and Small farms.

  20. Metrological evaluation of characterization methods applied to nuclear fuels

    International Nuclear Information System (INIS)

    Faeda, Kelly Cristina Martins; Lameiras, Fernando Soares; Camarano, Denise das Merces; Ferreira, Ricardo Alberto Neto; Migliorini, Fabricio Lima; Carneiro, Luciana Capanema Silva; Silva, Egonn Hendrigo Carvalho

    2010-01-01

    In manufacturing nuclear fuel, characterizations are performed in order to assure the minimization of harmful effects. Uranium dioxide is the substance most used as nuclear reactor fuel because of its many advantages, such as high stability even in contact with water at high temperatures, a high melting point, and a high capacity to retain fission products. Several methods are used for the characterization of nuclear fuels, such as thermogravimetric analysis for the O/U ratio, the penetration-immersion method, helium pycnometry and mercury porosimetry for density and porosity, the BET method for specific surface, chemical analyses for relevant impurities, and the laser flash method for thermophysical properties. Specific tools are needed to control the diameter and sphericity of the microspheres and the properties of the coating layers (thickness, density, and degree of anisotropy). Other methods can also give information, such as scanning and transmission electron microscopy, X-ray diffraction, microanalysis, and secondary ion mass spectroscopy for chemical analysis. The accuracy of measurement and the level of uncertainty of the resulting data are important. This work describes a general metrological characterization of some techniques applied to the characterization of nuclear fuel. Sources of measurement uncertainty were analyzed. The purpose is to summarize selected properties of UO2 that have been studied by CDTN in a program of fuel development for Pressurized Water Reactors (PWR). The selected properties are crucial for thermal-hydraulic codes used to study design-basis accidents. The thermal characterization (thermal diffusivity and thermal conductivity) and the penetration-immersion method (density and open porosity) of UO2 samples were the focus. The thermal characterization of UO2 samples was determined by the laser flash method between room temperature and 448 K. The adaptive Monte Carlo Method was used to obtain the endpoints of the

  1. Nuclear and nuclear related analytical methods applied in environmental research

    International Nuclear Information System (INIS)

    Popescu, Ion V.; Gheboianu, Anca; Bancuta, Iulian; Cimpoca, G. V; Stihi, Claudia; Radulescu, Cristiana; Oros Calin; Frontasyeva, Marina; Petre, Marian; Dulama, Ioana; Vlaicu, G.

    2010-01-01

    Nuclear analytical methods can be used for research activities in environmental studies such as water quality assessment, pesticide residues, global climatic change (transboundary), pollution and remediation. Heavy metal pollution is a problem associated with areas of intensive industrial activity. In this work the moss biomonitoring technique was employed to study atmospheric deposition in Dambovita County, Romania. Complementary nuclear and atomic analytical methods were also used: Neutron Activation Analysis (NAA), Atomic Absorption Spectrometry (AAS) and Inductively Coupled Plasma Atomic Emission Spectrometry (ICP-AES). These high-sensitivity analysis methods were used to determine the chemical composition of samples of mosses placed in different areas with different industrial pollution sources. The concentrations of Cr, Fe, Mn, Ni and Zn were determined. The concentration of Fe in the same samples was determined using all these methods, with very good agreement within statistical limits, which demonstrates the capability of these analytical methods to be applied to a large spectrum of environmental samples with consistent results. (authors)

  2. Analysis of Brick Masonry Wall using Applied Element Method

    Science.gov (United States)

    Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen

    2018-03-01

    The Applied Element Method (AEM) is a versatile tool for structural analysis. Analysis is done by discretising the structure, as in the case of the Finite Element Method (FEM). In AEM, elements are connected by a set of normal and shear springs instead of nodes. AEM is extensively used for the analysis of brittle materials. A brick masonry wall can be effectively analyzed within the framework of AEM. The composite nature of a masonry wall can be easily modelled using springs: the brick springs and mortar springs are assumed to be connected in series. The brick masonry wall is analyzed and the failure load is determined for different loading cases. The results were used to find the best aspect ratio of brick for strengthening a brick masonry wall.
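The series-spring assumption mentioned in the abstract is easy to make concrete. The sketch below computes the equivalent normal stiffness of one brick-mortar spring pair using the standard axial stiffness k = EA/L; all dimensions and moduli are hypothetical, not taken from the paper.

```python
# Equivalent normal stiffness of one brick-mortar spring pair modelled as two
# axial springs in series (k = E*A/L). All dimensions and moduli below are
# hypothetical, chosen only to illustrate the series-spring assumption.
E_brick, E_mortar = 5000.0, 1000.0     # Young's moduli, MPa (illustrative)
area = 50.0 * 100.0                    # tributary area of one spring, mm^2
L_brick, L_mortar = 55.0, 10.0         # spring lengths, mm

k_brick = E_brick * area / L_brick     # axial stiffness of the brick part
k_mortar = E_mortar * area / L_mortar  # axial stiffness of the mortar joint
k_eq = 1.0 / (1.0 / k_brick + 1.0 / k_mortar)  # springs in series

print(f"k_brick = {k_brick:.0f} N/mm, k_mortar = {k_mortar:.0f} N/mm")
print(f"k_eq = {k_eq:.0f} N/mm")       # softer than either spring alone
```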

  3. Thermally stimulated current method applied to highly irradiated silicon diodes

    CERN Document Server

    Pintilie, I; Pintilie, I; Moll, Michael; Fretwurst, E; Lindström, G

    2002-01-01

    We propose an improved method for the analysis of Thermally Stimulated Currents (TSC) measured on highly irradiated silicon diodes. The proposed TSC formula for the evaluation of a set of TSC spectra obtained with different reverse biases yields not only the concentrations of the electron and hole traps visible in the spectra but also an estimate of the concentration of defects which do not give rise to a peak in the 30-220 K TSC temperature range (very shallow or very deep levels). The method is applied to a diode irradiated with a neutron fluence of Φn = 1.82×10^13 n/cm^2.

  4. Theoretical and applied aerodynamics and related numerical methods

    CERN Document Server

    Chattot, J J

    2015-01-01

    This book covers classical and modern aerodynamics, theories and related numerical methods, for senior and first-year graduate engineering students, including: -The classical potential (incompressible) flow theories for low speed aerodynamics of thin airfoils and high and low aspect ratio wings. - The linearized theories for compressible subsonic and supersonic aerodynamics. - The nonlinear transonic small disturbance potential flow theory, including supercritical wing sections, the extended transonic area rule with lift effect, transonic lifting line and swept or oblique wings to minimize wave drag. Unsteady flow is also briefly discussed. Numerical simulations based on relaxation mixed-finite difference methods are presented and explained. - Boundary layer theory for all Mach number regimes and viscous/inviscid interaction procedures used in practical aerodynamics calculations. There are also four chapters covering special topics, including wind turbines and propellers, airplane design, flow analogies and h...

  5. SU-F-J-86: Method to Include Tissue Dose Response Effect in Deformable Image Registration

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, J; Liang, J; Chen, S; Qin, A; Yan, D [Beaumont Health System, Royal Oak, MI (United States)

    2016-06-15

    Purpose: An organ changes shape and size during radiation treatment due to both mechanical stress and radiation dose response. However, the dose-response-induced deformation has not been considered in conventional deformable image registration (DIR). A novel DIR approach is proposed to include both tissue elasticity and radiation dose induced organ deformation. Methods: Assuming that organ sub-volume shrinkage is proportional to the radiation dose induced cell killing/absorption, the dose-induced organ volume change was simulated by applying a virtual temperature to each sub-volume. Hence, both the stress and the heterogeneous temperature field induce organ deformation. A thermal-stress finite element method with an organ surface boundary condition was used to solve for the deformation. The initial boundary correspondence on the organ surface was created from conventional DIR. The boundary condition was updated by an iterative optimization scheme to minimize the elastic deformation energy. The registration was validated on a numerical phantom. Treatment dose was constructed applying both the conventional DIR and the proposed method, using daily CBCT images obtained from head-and-neck (HN) treatment. Results: The phantom study showed 2.7% maximal discrepancy with respect to the actual displacement. Compared with conventional DIR, the sub-volume displacement difference in a right parotid had mean±SD (min, max) of 1.1±0.9 (−0.4 to 4.8), −0.1±0.9 (−2.9 to 2.4) and −0.1±0.9 (−3.4 to 1.9) mm in the RL/PA/SI directions, respectively. Mean parotid dose and V30 constructed including the dose-response-induced shrinkage were 6.3% and 12.0% higher than those from the conventional DIR. Conclusion: A heterogeneous dose distribution in a normal organ causes non-uniform sub-volume shrinkage. A sub-volume in a high-dose region shrinks more than one in a low-dose region, causing more sub-volumes to move into the high-dose area during the treatment course. This leads to an unfavorable dose-volume relationship for the normal organ.

  6. A Multifactorial Analysis of Reconstruction Methods Applied After Total Gastrectomy

    Directory of Open Access Journals (Sweden)

    Oktay Büyükaşık

    2010-12-01

    Full Text Available Aim: The aim of this study was to evaluate the reconstruction methods applied after total gastrectomy in terms of postoperative symptomatology and nutrition. Methods: This retrospective study was conducted on 31 patients who underwent total gastrectomy due to gastric cancer in the 2nd Clinic of General Surgery, SSK Ankara Training Hospital. Six different reconstruction methods were used and analyzed in terms of age, sex and postoperative complications. One biopsy specimen from the esophagus and two from the jejunum were taken through upper gastrointestinal endoscopy in all cases, and late-period morphological and microbiological changes were examined. Postoperative weight change, dumping symptoms, reflux esophagitis, solid/liquid dysphagia, early satiety, postprandial pain, diarrhea and anorexia were assessed. Results: Of the 31 patients, 18 were male and 13 female; the youngest was 33 years old and the oldest 69 years old. Reconstruction without a pouch was performed in 22 cases and with a pouch in 9 cases. Early satiety, postprandial pain, dumping symptoms, diarrhea and anemia were found most commonly in cases with reconstruction without a pouch. The rate of bacterial colonization of the jejunal mucosa was identical in both groups. Reflux esophagitis was seen most commonly after omega esophagojejunostomy (EJ), and least commonly after Roux-en-Y, Tooley and Tanner 19 EJ. Conclusion: Reconstruction with a pouch performed after total gastrectomy is still a preferable method. (The Medical Bulletin of Haseki 2010; 48:126-31)

  7. Methodical Aspects of Applying Strategy Map in an Organization

    Directory of Open Access Journals (Sweden)

    Piotr Markiewicz

    2013-06-01

    Full Text Available One of the important aspects of strategic management is the instrumental aspect, comprising a rich set of methods and techniques used at particular stages of the strategic management process. The object of interest in this study is the development of views on, and the implementation of, strategy as an element of strategic management, together with instruments in the form of methods and techniques. The method commonly used in strategy implementation and measuring progress is the Balanced Scorecard (BSC). The method was created as a result of implementing the project "Measuring performance in the Organization of the future" of 1990, completed by a team under the supervision of David Norton (Kaplan, Norton 2002). The developed method was used first of all to evaluate performance by decomposition of a strategy into four perspectives and identification of measures of achievement. In the middle of the 1990s the method was improved by enriching it, first of all, with a strategy map, in which the process of transition of intangible assets into tangible financial effects is reflected (Kaplan, Norton 2001). A strategy map enables illustration of the cause-and-effect relationships between processes in all four perspectives and performance indicators at the level of the organization. The purpose of this study is to present the methodical conditions of using strategy maps in the strategy implementation process in organizations of different natures.

  8. A frequency domain linearized Navier-Stokes method including acoustic damping by eddy viscosity using RANS

    Science.gov (United States)

    Holmberg, Andreas; Kierkegaard, Axel; Weng, Chenyang

    2015-06-01

    In this paper, a method for including damping of acoustic energy in regions of strong turbulence is derived for a linearized Navier-Stokes method in the frequency domain. The proposed method is validated and analyzed in 2D only, although the formulation is fully presented in 3D. The result is applied in a study of the linear interaction between the acoustic and the hydrodynamic field in a 2D T-junction, subject to grazing flow at Mach 0.1. Part of the acoustic energy at the upstream edge of the junction is shed as harmonically oscillating disturbances, which are conveyed across the shear layer over the junction, where they interact with the acoustic field. As the acoustic waves travel in regions of strong shear, there is a need to include the interaction between the background turbulence and the acoustic field. For this purpose, the oscillation of the background turbulence Reynolds stress due to the acoustic field is modeled using an eddy-Newtonian model assumption. The time-averaged flow is first solved for using RANS along with a k-ε turbulence model. The spatially varying turbulent eddy viscosity is then added to the spatially invariant kinematic viscosity in the acoustic set of equations. The response of the 2D T-junction to an incident acoustic field is analyzed via a plane wave scattering matrix model, and the result is compared to experimental data for a T-junction of rectangular ducts. A strong improvement in the agreement between calculation and experimental data is found when the modification proposed in this paper is implemented. Discrepancies remaining are likely due to inaccuracies in the selected turbulence model, which is known to produce large errors for flows with significant rotation, such as the grazing flow across the T-junction. A natural next step is therefore to test the proposed methodology together with more sophisticated turbulence models.

  9. Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective

    Science.gov (United States)

    Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith

    2010-01-01

    The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…

  10. Analytical methods applied to diverse types of Brazilian propolis

    Directory of Open Access Journals (Sweden)

    Marcucci Maria

    2011-06-01

Full Text Available Abstract Propolis is a bee product, composed mainly of plant resins and beeswax; its chemical composition therefore varies with the geographic and plant origins of these resins, as well as with the species of bee. Brazil is an important supplier of propolis on the world market and, although green propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extraction procedures employed further affect its composition. Methods used for extraction; for analyzing the percentages of resin, wax, and insoluble material in crude propolis; and for determining phenolic, flavonoid, amino acid, and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification, and quantification of Brazilian propolis components, and their relative strengths, are discussed, as is direct-insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, the anti-parasitic, anti-viral/immune-stimulating, healing, anti-tumor, anti-inflammatory, antioxidant, and analgesic activities of diverse types of Brazilian propolis have been evaluated. The most common methods employed and overviews of their relative results are presented.

  11. Teaching organization theory for healthcare management: three applied learning methods.

    Science.gov (United States)

    Olden, Peter C

    2006-01-01

Organization theory (OT) provides a way of seeing, describing, analyzing, understanding, and improving organizations based on patterns of organizational design and behavior (Daft 2004). It gives managers models, principles, and methods with which to diagnose and fix organization structure, design, and process problems. Health care organizations (HCOs) face serious problems such as fatal medical errors, harmful treatment delays, misuse of scarce nurses, costly inefficiency, and service failures. Some of health care managers' most critical work involves designing and structuring their organizations so their missions, visions, and goals can be achieved, and in some cases so their organizations can survive. Thus, it is imperative that graduate healthcare management programs develop effective approaches for teaching OT to students who will manage HCOs. Guided by principles of education, three applied teaching/learning activities/assignments were created to teach OT in a graduate healthcare management program. These educational methods develop students' competency with OT applied to HCOs. The teaching techniques in this article may be useful to faculty teaching graduate courses in organization theory and related subjects such as leadership, quality, and operations management.

  12. Six Sigma methods applied to cryogenic coolers assembly line

    Science.gov (United States)

    Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René

    2009-05-01

Six Sigma methods have been applied to the manufacturing process of a rotary Stirling cooler, the RM2. The project is named NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project followed the DMAIC guideline with its five stages: Define, Measure, Analyse, Improve, Control. The objective was set on the rate of coolers passing the performance test at first attempt, with a goal value of 95%. A team was gathered involving the people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) was applied to the test bench, and results of an R&R gage study showed that measurement is one of the root causes of variability in the RM2 process. Two more root causes were identified by the team after process mapping analysis: the regenerator filling factor and the cleaning procedure. Causes of measurement variability were identified and eradicated, as shown by new results from the R&R gage. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process was established after a new calibration process for the test bench, a new filling procedure for the regenerator, and an additional cleaning stage were implemented. The objective of 95% of coolers passing the performance test at first attempt has been reached and maintained for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. Improvements in process capability have enabled the introduction of a sample testing procedure before delivery.
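The control charts underpinning the Statistical Process Control mentioned above can be sketched with the textbook X-bar/R limits. The measurements below are hypothetical, not RM2 data; A2 and D4 are the standard chart constants for subgroups of five:

```python
import numpy as np

A2, D4 = 0.577, 2.114  # standard control-chart constants for subgroup size 5

def xbar_r_limits(subgroups):
    """Center lines and control limits for X-bar and R charts, computed
    from rational subgroups (rows of 5 measurements each)."""
    data = np.asarray(subgroups, dtype=float)
    xbar = data.mean(axis=1)                  # subgroup means
    r = data.max(axis=1) - data.min(axis=1)   # subgroup ranges
    center, rbar = xbar.mean(), r.mean()
    return {
        "xbar": (center - A2 * rbar, center, center + A2 * rbar),  # LCL, CL, UCL
        "r": (0.0, rbar, D4 * rbar),
    }

# Hypothetical cooler performance measurements, 4 subgroups of 5
limits = xbar_r_limits([
    [9.8, 10.1, 10.0, 9.9, 10.2],
    [10.0, 9.7, 10.3, 10.1, 9.9],
    [9.9, 10.0, 10.2, 9.8, 10.1],
    [10.1, 10.0, 9.9, 10.2, 9.8],
])
```

A point falling outside these limits on either chart would flag the process as out of statistical control.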

  13. Composite materials and bodies including silicon carbide and titanium diboride and methods of forming same

    Science.gov (United States)

    Lillo, Thomas M.; Chu, Henry S.; Harrison, William M.; Bailey, Derek

    2013-01-22

    Methods of forming composite materials include coating particles of titanium dioxide with a substance including boron (e.g., boron carbide) and a substance including carbon, and reacting the titanium dioxide with the substance including boron and the substance including carbon to form titanium diboride. The methods may be used to form ceramic composite bodies and materials, such as, for example, a ceramic composite body or material including silicon carbide and titanium diboride. Such bodies and materials may be used as armor bodies and armor materials. Such methods may include forming a green body and sintering the green body to a desirable final density. Green bodies formed in accordance with such methods may include particles comprising titanium dioxide and a coating at least partially covering exterior surfaces thereof, the coating comprising a substance including boron (e.g., boron carbide) and a substance including carbon.

  14. Metrological evaluation of characterization methods applied to nuclear fuels

    Energy Technology Data Exchange (ETDEWEB)

    Faeda, Kelly Cristina Martins; Lameiras, Fernando Soares; Camarano, Denise das Merces; Ferreira, Ricardo Alberto Neto; Migliorini, Fabricio Lima; Carneiro, Luciana Capanema Silva; Silva, Egonn Hendrigo Carvalho, E-mail: kellyfisica@gmail.co, E-mail: fernando.lameiras@pq.cnpq.b, E-mail: dmc@cdtn.b, E-mail: ranf@cdtn.b, E-mail: flmigliorini@hotmail.co, E-mail: lucsc@hotmail.co, E-mail: egonn@ufmg.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2010-07-01

In manufacturing nuclear fuel, characterizations are performed in order to assure the minimization of harmful effects. Uranium dioxide is the most widely used substance as nuclear reactor fuel because of many advantages, such as high stability even in contact with water at high temperatures, a high melting point, and a high capacity to retain fission products. Several methods are used for the characterization of nuclear fuels, such as thermogravimetric analysis for the O/U ratio; the penetration-immersion method, helium pycnometry, and mercury porosimetry for density and porosity; the BET method for specific surface area; chemical analyses for relevant impurities; and the laser flash method for thermophysical properties. Specific tools are needed to control the diameter and sphericity of the microspheres and the properties of the coating layers (thickness, density, and degree of anisotropy). Other methods can also give information, such as scanning and transmission electron microscopy, X-ray diffraction, microanalysis, and secondary ion mass spectrometry for chemical analysis. The accuracy of measurement and the level of uncertainty of the resulting data are important. This work describes a general metrological characterization of some techniques applied to the characterization of nuclear fuel. Sources of measurement uncertainty were analyzed. The purpose is to summarize selected properties of UO{sub 2} that have been studied by CDTN in a program of fuel development for Pressurized Water Reactors (PWR). The selected properties are crucial for thermal-hydraulic codes used to study design basis accidents. The work focused on the thermal characterization (thermal diffusivity and thermal conductivity) and the penetration-immersion method (density and open porosity) of UO{sub 2} samples. The thermal characterization of UO{sub 2} samples was determined by the laser flash method between room temperature and 448 K. The adaptive Monte Carlo Method was used to obtain the endpoints of
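Laser flash data of the kind described above are conventionally reduced with the Parker relation. A one-function sketch follows; the sample thickness and half-rise time are illustrative values, not CDTN measurements:

```python
def laser_flash_diffusivity(thickness_m, t_half_s):
    """Parker relation for the laser flash method:
    alpha = 0.1388 * L**2 / t_half, where L is the sample thickness and
    t_half is the time for the rear face to reach half its peak
    temperature rise after the flash."""
    return 0.1388 * thickness_m ** 2 / t_half_s

# Illustrative values: 1 mm thick pellet, 50 ms half-rise time
alpha = laser_flash_diffusivity(1.0e-3, 0.050)  # thermal diffusivity, m^2/s
```

Thermal conductivity then follows as k = α·ρ·c_p once density and specific heat are known, which is why the density measurements above matter for the thermal characterization.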

  15. Applying flow chemistry: methods, materials, and multistep synthesis.

    Science.gov (United States)

    McQuade, D Tyler; Seeberger, Peter H

    2013-07-05

    The synthesis of complex molecules requires control over both chemical reactivity and reaction conditions. While reactivity drives the majority of chemical discovery, advances in reaction condition control have accelerated method development/discovery. Recent tools include automated synthesizers and flow reactors. In this Synopsis, we describe how flow reactors have enabled chemical advances in our groups in the areas of single-stage reactions, materials synthesis, and multistep reactions. In each section, we detail the lessons learned and propose future directions.

  16. Modal method for crack identification applied to reactor recirculation pump

    International Nuclear Information System (INIS)

    Miller, W.H.; Brook, R.

    1991-01-01

Nuclear reactors have been operating and producing useful electricity for many years. Within the last few years, several plants have found cracks in the reactor coolant pump shaft near the thermal barrier. The modal method and results described herein show the analytical results of using a modal analysis test method to determine the presence, size, and location of a shaft crack. The authors have previously demonstrated that the test method can analytically and experimentally identify shaft cracks as small as five percent (5%) of the shaft diameter. Due to small differences in material property distribution, the attempt to identify cracks smaller than 3% of the shaft diameter has been shown to be impractical. The rotor dynamics model includes a detailed motor rotor, external weights and inertias, and realistic total support stiffness. Results of the rotor dynamics model have been verified through a comparison with on-site vibration test data.
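The physical premise of modal crack detection is that a crack reduces local stiffness and therefore shifts the measured natural frequencies. A single-degree-of-freedom sketch of that sensitivity (the stiffness, mass, and 5% stiffness loss are hypothetical, not from the pump model):

```python
import math

def natural_frequency_hz(stiffness, mass):
    """Undamped single-DOF natural frequency, f = sqrt(k/m) / (2*pi).
    A crack lowers the effective stiffness k, so measured modal
    frequencies drop relative to the uncracked baseline."""
    return math.sqrt(stiffness / mass) / (2.0 * math.pi)

# Hypothetical lumped shaft segment: 5% stiffness loss from a crack
f_intact = natural_frequency_hz(1.0e8, 250.0)
f_cracked = natural_frequency_hz(0.95e8, 250.0)
shift = (f_intact - f_cracked) / f_intact  # fractional frequency drop
```

Because the fractional shift scales as roughly half the fractional stiffness loss, small cracks produce small shifts, which is consistent with the abstract's finding that cracks below a few percent of shaft diameter are impractical to resolve.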

  17. Applying Nyquist's method for stability determination to solar wind observations

    Science.gov (United States)

    Klein, Kristopher G.; Kasper, Justin C.; Korreck, K. E.; Stevens, Michael L.

    2017-10-01

    The role instabilities play in governing the evolution of solar and astrophysical plasmas is a matter of considerable scientific interest. The large number of sources of free energy accessible to such nearly collisionless plasmas makes general modeling of unstable behavior, accounting for the temperatures, densities, anisotropies, and relative drifts of a large number of populations, analytically difficult. We therefore seek a general method of stability determination that may be automated for future analysis of solar wind observations. This work describes an efficient application of the Nyquist instability method to the Vlasov dispersion relation appropriate for hot, collisionless, magnetized plasmas, including the solar wind. The algorithm recovers the familiar proton temperature anisotropy instabilities, as well as instabilities that had been previously identified using fits extracted from in situ observations in Gary et al. (2016). Future proposed applications of this method are discussed.
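The core of the Nyquist method is counting encirclements of the origin by the dispersion function along a closed contour. A minimal sketch with a toy dispersion relation (the real Vlasov dispersion function is far more involved; the function and contour below are illustrative only):

```python
import numpy as np

def winding_number(dispersion, contour):
    """Net number of times D(omega) encircles the origin as omega traverses
    a closed contour; for an analytic dispersion function with no poles
    inside, this equals the number of enclosed zeros, i.e. unstable modes."""
    vals = np.array([dispersion(w) for w in contour])
    phase = np.unwrap(np.angle(vals))
    return int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))

# Toy dispersion relation with a single root at omega = 1 + 0.5j
D = lambda w: w - (1.0 + 0.5j)
theta = np.linspace(0.0, 2.0 * np.pi, 2001)
contour = (1.0 + 0.5j) + np.exp(1j * theta)  # circle enclosing the root
n_unstable = winding_number(D, contour)
```

Because only function evaluations along the contour are needed, this counting step is easy to automate over large sets of in situ solar wind intervals, which is the efficiency argument the abstract makes.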

  18. Progress report of Physics Division including Applied Mathematics and Computing Section. 1st April 1971 - 30th September 1971

    International Nuclear Information System (INIS)

    2004-01-01

All the mechanical and electronic components for the zero power splitable machine (the critical facility) arrived in excellent condition from France. Installation began and good progress was made on the mechanical side, where the base and tables were successfully assembled and are being adjusted to meet the exacting specification. Power transients arising from the insertion of short reactivity steps were studied for the reactors HIFAR, MOATA, and the critical facility. Some effort was also devoted to the study of blowdown accidents in light water reactors, and calculations of some Italian experiments were made successfully. The measurements of the fast fission factor and initial conversion ratios for a range of natural uranium heavy water reactors were completed, and good progress is being made with neutron streaming in aluminium-water lattices. Many other investigators of this problem appear to have neglected or given insufficient attention to the case where the neutron beam is parallel to the plates. It is difficult to fit a cosine curve uniquely, as coarse and fine features cannot be separated. Previous analysis of the moisture content of soils and concrete by neutron scattering was successfully applied to obtain information on the variation of the moisture in large coal stacks as a function of time. This work was done in conjunction with the Electricity Commission of N.S.W. Although a small Pu/Be source was found adequate for the above work, development continued on producing neutron pulses by means of a coaxial plasma focus device. Neutron pulses were produced regularly, but the output was variable; the fault was traced to breakdowns at the breech end of the device where restriking occurs. Although discrepancies of about 2% exist between v-bar for spontaneous fission of 252Cf as measured by the liquid scintillation method and by the manganese bath method, this important quantity is being measured locally using the liquid scintillator method. Preliminary results suggest

  19. Applied statistical methods in agriculture, health and life sciences

    CERN Document Server

    Lawal, Bayo

    2014-01-01

    This textbook teaches crucial statistical methods to answer research questions using a unique range of statistical software programs, including MINITAB and R. This textbook is developed for undergraduate students in agriculture, nursing, biology and biomedical research. Graduate students will also find it to be a useful way to refresh their statistics skills and to reference software options. The unique combination of examples is approached using MINITAB and R for their individual strengths. Subjects covered include among others data description, probability distributions, experimental design, regression analysis, randomized design and biological assay. Unlike other biostatistics textbooks, this text also includes outliers, influential observations in regression and an introduction to survival analysis. Material is taken from the author's extensive teaching and research in Africa, USA and the UK. Sample problems, references and electronic supplementary material accompany each chapter.

  20. The virtual fields method applied to spalling tests on concrete

    Directory of Open Access Journals (Sweden)

    Forquin P.

    2012-08-01

Full Text Available For one decade, spalling techniques based on the use of a metallic Hopkinson bar put in contact with a concrete sample have been widely employed to characterize the dynamic tensile strength of concrete at strain-rates ranging from a few tens to two hundred s−1. However, the processing method, mainly based on the use of the velocity profile measured on the rear free surface of the sample (Novikov formula), remains quite basic, and an identification of the whole softening behaviour of the concrete is out of reach. In the present paper a new processing method is proposed based on the use of the Virtual Fields Method (VFM). First, a digital high-speed camera is used to record pictures of a grid glued on the specimen. Next, full-field measurements are used to obtain the axial displacement field at the surface of the specimen. Finally, a specific virtual field has been defined in the VFM equation to use the acceleration map as an alternative ‘load cell’. This method, applied to three spalling tests, allowed the identification of Young’s modulus during the test. It was shown that this modulus is constant during the initial compressive part of the test and decreases in the tensile part when micro-damage exists. It was also shown that in such a simple inertial test, it was possible to reconstruct average axial stress profiles using only the acceleration data. Then, it was possible to construct local stress-strain curves and derive a tensile strength value.
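In one dimension, the ‘acceleration map as load cell’ idea reduces to a momentum balance on the free part of the specimen: the mean axial stress at a section equals minus the density times the integral of the measured acceleration between that section and the stress-free end. A sketch under that assumption (the density and the uniform acceleration field are illustrative, not test data):

```python
import numpy as np

def mean_axial_stress(accel, x, rho, j):
    """Average axial stress at section x[j], reconstructed from the
    acceleration field between that section and the stress-free end at
    x[-1]: sigma(x_j) = -rho * integral_{x_j}^{L} a(x) dx
    (1D momentum balance on the free part of the specimen)."""
    seg_a, seg_x = accel[j:], x[j:]
    # trapezoidal integration of a(x) over [x_j, L]
    integral = float(np.sum(0.5 * (seg_a[1:] + seg_a[:-1]) * np.diff(seg_x)))
    return -rho * integral

x = np.linspace(0.0, 0.1, 11)    # 100 mm specimen axis, m
accel = np.full_like(x, 1.0e3)   # uniform 1000 m/s^2 field (illustrative)
sigma0 = mean_axial_stress(accel, x, rho=2400.0, j=0)  # stress at x = 0, Pa
```

Pairing such a stress profile with the strain field measured by the grid method is what allows the local stress-strain curves mentioned at the end of the abstract.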

  1. Solar cells, structures including organometallic halide perovskite monocrystalline films, and methods of preparation thereof

    KAUST Repository

    Bakr, Osman; Peng, Wei; Wang, Lingfei

    2017-01-01

    Embodiments of the present disclosure provide for solar cells including an organometallic halide perovskite monocrystalline film (see fig. 1.1B), other devices including the organometallic halide perovskite monocrystalline film, methods of making

  2. Applying sociodramatic methods in teaching transition to palliative care.

    Science.gov (United States)

    Baile, Walter F; Walters, Rebecca

    2013-03-01

    We introduce the technique of sociodrama, describe its key components, and illustrate how this simulation method was applied in a workshop format to address the challenge of discussing transition to palliative care. We describe how warm-up exercises prepared 15 learners who provide direct clinical care to patients with cancer for a dramatic portrayal of this dilemma. We then show how small-group brainstorming led to the creation of a challenging scenario wherein highly optimistic family members of a 20-year-old young man with terminal acute lymphocytic leukemia responded to information about the lack of further anticancer treatment with anger and blame toward the staff. We illustrate how the facilitators, using sociodramatic techniques of doubling and role reversal, helped learners to understand and articulate the hidden feelings of fear and loss behind the family's emotional reactions. By modeling effective communication skills, the facilitators demonstrated how key communication skills, such as empathic responses to anger and blame and using "wish" statements, could transform the conversation from one of conflict to one of problem solving with the family. We also describe how we set up practice dyads to give the learners an opportunity to try out new skills with each other. An evaluation of the workshop and similar workshops we conducted is presented. Copyright © 2013 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.

  3. A novel method of including Landau level mixing in numerical studies of the quantum Hall effect

    International Nuclear Information System (INIS)

    Wooten, Rachel; Quinn, John; Macek, Joseph

    2013-01-01

    Landau level mixing should influence the quantum Hall effect for all except the strongest applied magnetic fields. We propose a simple method for examining the effects of Landau level mixing by incorporating multiple Landau levels into the Haldane pseudopotentials through exact numerical diagonalization. Some of the resulting pseudopotentials for the lowest and first excited Landau levels will be presented

  4. Reactor calculation in coarse mesh by finite element method applied to matrix response method

    International Nuclear Information System (INIS)

    Nakata, H.

    1982-01-01

    The finite element method is applied to the solution of the modified formulation of the matrix-response method aiming to do reactor calculations in coarse mesh. Good results are obtained with a short running time. The method is applicable to problems where the heterogeneity is predominant and to problems of evolution in coarse meshes where the burnup is variable in one same coarse mesh, making the cross section vary spatially with the evolution. (E.G.) [pt

  5. Comparison of different methods to include recycling in LCAs of aluminium cans and disposable polystyrene cups

    NARCIS (Netherlands)

    Harst-Wintraecken, van der Eugenie; Potting, José; Kroeze, Carolien

    2016-01-01

    Many methods have been reported and used to include recycling in life cycle assessments (LCAs). This paper evaluates six widely used methods: three substitution methods (i.e. substitution based on equal quality, a correction factor, and alternative material), allocation based on the number of

  6. 75 FR 452 - Applied Materials, Inc. Including On-Site Leased Workers From Adecco Employment Services, Aerotek...

    Science.gov (United States)

    2010-01-05

    ... NSTAR, Austin, Texas. The notice was published in the Federal Register on November 17, 2009 (74 FR 59253... Resources, SQA Services and NSTAR; Austin, TX; Amended Certification Regarding Eligibility To Apply for... NSTAR, Austin, Texas, who became totally or partially separated from employment on or after June 25...

  7. Systems and Methods for Fabricating Structures Including Metallic Glass-Based Materials Using Low Pressure Casting

    Science.gov (United States)

    Hofmann, Douglas C. (Inventor); Kennett, Andrew (Inventor)

    2018-01-01

    Systems and methods to fabricate objects including metallic glass-based materials using low-pressure casting techniques are described. In one embodiment, a method of fabricating an object that includes a metallic glass-based material includes: introducing molten alloy into a mold cavity defined by a mold using a low enough pressure such that the molten alloy does not conform to features of the mold cavity that are smaller than 100 microns; and cooling the molten alloy such that it solidifies, the solid including a metallic glass-based material.

  8. Valuing national effects of digital health investments: an applied method.

    Science.gov (United States)

    Hagens, Simon; Zelmer, Jennifer; Frazer, Cassandra; Gheorghiu, Bobby; Leaver, Chad

    2015-01-01

This paper describes an approach that has been applied to value national outcomes of investments by federal, provincial, and territorial governments, clinicians, and healthcare organizations in digital health. Hypotheses are used to develop a model, which is revised and populated based upon the available evidence. Quantitative national estimates and qualitative findings are produced and validated through structured peer review processes. This methodology has been applied in four studies since 2008.

  9. Dose rate reduction method for NMCA applied BWR plants

    International Nuclear Information System (INIS)

    Nagase, Makoto; Aizawa, Motohiro; Ito, Tsuyoshi; Hosokawa, Hideyuki; Varela, Juan; Caine, Thomas

    2012-09-01

BRAC (BWR Radiation Assessment and Control) dose rate is used as an indicator of the incorporation of activated corrosion by-products into BWR recirculation piping, which is known to be a significant contributor to the dose rate received by workers during refueling outages. In order to reduce the radiation exposure of workers during the outage, it is desirable to keep BRAC dose rates as low as possible. After HWC was adopted to reduce IGSCC, a BRAC dose rate increase was observed in many plants. As a countermeasure to these rapid dose rate increases under HWC conditions, Zn injection was widely adopted in the United States and Europe, resulting in a reduction of BRAC dose rates. However, BRAC dose rates in several plants remain high, prompting the industry to continue to investigate methods to achieve further reductions. In recent years a large portion of the BWR fleet has adopted NMCA (NobleChem TM ) to enhance the hydrogen injection effect to suppress SCC. After NMCA, especially OLNC (On-Line NobleChem TM ), BRAC dose rates were observed to decrease. In some OLNC-applied BWR plants this reduction was observed year after year, reaching a new, reduced equilibrium level. These dose rate reduction trends suggest that further dose reduction might be obtained by the combination of Pt and Zn injection. Therefore, laboratory experiments and in-plant tests were carried out to evaluate the effect of Pt and Zn on Co-60 deposition behaviour. First, laboratory experiments were conducted to study the effect of noble metal deposition on Co deposition on stainless steel surfaces. Polished type 316 stainless steel coupons were prepared, and some of them were OLNC-treated in the test loop before the Co deposition test. Water chemistry conditions to simulate HWC were as follows: dissolved oxygen, hydrogen, and hydrogen peroxide were below 5 ppb, 100 ppb, and 0 ppb (no addition), respectively. Zn was injected to target a concentration of 5 ppb. The test was conducted up to 1500 hours at 553 K. Test

  10. A method for the computation of turbulent polymeric liquids including hydrodynamic interactions and chain entanglements

    Energy Technology Data Exchange (ETDEWEB)

    Kivotides, Demosthenes, E-mail: demosthenes.kivotides@strath.ac.uk

    2017-02-12

    An asymptotically exact method for the direct computation of turbulent polymeric liquids that includes (a) fully resolved, creeping microflow fields due to hydrodynamic interactions between chains, (b) exact account of (subfilter) residual stresses, (c) polymer Brownian motion, and (d) direct calculation of chain entanglements, is formulated. Although developed in the context of polymeric fluids, the method is equally applicable to turbulent colloidal dispersions and aerosols. - Highlights: • An asymptotically exact method for the computation of polymer and colloidal fluids is developed. • The method is valid for all flow inertia and all polymer volume fractions. • The method models entanglements and hydrodynamic interactions between polymer chains.

  11. Variational methods applied to problems of diffusion and reaction

    CERN Document Server

    Strieder, William

    1973-01-01

This monograph is an account of some problems involving diffusion or diffusion with simultaneous reaction that can be illuminated by the use of variational principles. It was written during a period that included sabbatical leaves of one of us (W. S.) at the University of Minnesota and the other (R. A.) at the University of Cambridge, and we are grateful to the Petroleum Research Fund for helping to support the former and the Guggenheim Foundation for making possible the latter. We would also like to thank Stephen Prager for getting us together in the first place and for showing how interesting and useful these methods can be. We have also benefitted from correspondence with Dr. A. M. Arthurs of the University of York and from the counsel of Dr. B. D. Coleman, the general editor of this series. Table of Contents: Chapter 1, Introduction and Preliminaries: 1.1 General Survey; 1.2 Phenomenological Descriptions of Diffusion and Reaction; 1.3 Correlation Functions for Random Suspensions; 1.4 Mean Free ...

  12. Solar cells, structures including organometallic halide perovskite monocrystalline films, and methods of preparation thereof

    KAUST Repository

    Bakr, Osman M.

    2017-03-02

    Embodiments of the present disclosure provide for solar cells including an organometallic halide perovskite monocrystalline film (see fig. 1.1B), other devices including the organometallic halide perovskite monocrystalline film, methods of making organometallic halide perovskite monocrystalline film, and the like.

  13. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)

    Several different methods have been used to sense load currents and extract its ... in order to produce a reference current in shunt active power filters (SAPF), and ... technique compared to other similar methods are found quite satisfactory by ...

  14. Muon radiography method for fundamental and applied research

    Science.gov (United States)

    Alexandrov, A. B.; Vladymyrov, M. S.; Galkin, V. I.; Goncharova, L. A.; Grachev, V. M.; Vasina, S. G.; Konovalova, N. S.; Malovichko, A. A.; Managadze, A. K.; Okat'eva, N. M.; Polukhina, N. G.; Roganova, T. M.; Starkov, N. I.; Tioukov, V. E.; Chernyavsky, M. M.; Shchedrina, T. V.

    2017-12-01

    This paper focuses on the basic principles of the muon radiography method, reviews the major muon radiography experiments, and presents the first results in Russia obtained by the authors using this method based on emulsion track detectors.

  15. Applying the partitioned multiobjective risk method (PMRM) to portfolio selection.

    Science.gov (United States)

    Reyes Santos, Joost; Haimes, Yacov Y

    2004-06-01

The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz a 1992 Nobel Prize in Economics. A typical approach to measuring a portfolio's expected return is based on the historical returns of the assets included in the portfolio. On the other hand, portfolio risk is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility is a major measure of risk owing to its simplicity and its validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that on October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001, which led to a four-day suspension of trading on the New York Stock Exchange (NYSE), are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of an extreme portfolio risk, denoted by f(4), is defined as the conditional expectation for a lower-tail region of the distribution of the possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver, a software package that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model.
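The f(4) metric described above is a lower-tail conditional expectation, which is straightforward to compute from a sample of returns. A sketch (the partition point is taken here as the 5% quantile; the choice of tail fraction and the simulated returns are illustrative, not from the article):

```python
import numpy as np

def f4_extreme_risk(returns, alpha=0.05):
    """PMRM-style extreme-risk metric: the conditional expectation of
    portfolio returns in the lower tail, i.e. the mean of returns at or
    below the alpha-quantile of their empirical distribution."""
    returns = np.asarray(returns, dtype=float)
    threshold = np.quantile(returns, alpha)
    return returns[returns <= threshold].mean()

# Hypothetical daily returns: f4 looks only at the worst 5% of outcomes
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=5000)
f4 = f4_extreme_risk(returns)
```

Optimizing expected return jointly with f4, as the article proposes, then trades mean performance against the severity of the worst-case tail rather than against overall volatility.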

  16. Classical and modular methods applied to Diophantine equations

    NARCIS (Netherlands)

    Dahmen, S.R.

    2008-01-01

    Deep methods from the theory of elliptic curves and modular forms have been used to prove Fermat's last theorem and solve other Diophantine equations. These so-called modular methods can often benefit from information obtained by other, classical, methods from number theory; and vice versa. In our

  17. The pseudo-harmonics method applied to depletion calculation

    International Nuclear Information System (INIS)

    Silva, F.C. da; Amaral, J.A.C.; Thome, Z.D.

    1989-01-01

    In this paper, a new method for performing depletion calculations, based on the use of the pseudo-harmonics perturbation method, was developed. The fuel burnup was considered as a global perturbation, and the multigroup diffusion equations were rewritten in such a way as to treat the soluble boron concentration as the eigenvalue. By doing this, the critical boron concentration can be obtained by a perturbation method. A test of the new method was performed for an H2O-cooled, D2O-moderated reactor. Comparison with direct calculation showed that this method is very accurate and efficient. (author)

  18. Waste classification and methods applied to specific disposal sites

    International Nuclear Information System (INIS)

    Rogers, V.C.

    1979-01-01

    An adequate definition of the classes of radioactive wastes is necessary for regulating the disposal of radioactive wastes. A classification system is proposed in which wastes are classified according to characteristics relating to their disposal. Several specific sites are analyzed with this methodology in order to gain insight into the classification of radioactive wastes. An analysis of ocean dumping as it applies to waste classification is also presented. 5 refs

  19. nuclear and atomic methods applied in the determination of some

    African Journals Online (AJOL)

    NAA is a quantitative and qualitative method for the precise determination of a number of major, minor and trace elements in different types of geological, environmental and biological samples. It is based on nuclear reactions between neutrons and the target nuclei of the sample material. It is a useful method for the simultaneous.

  20. Instructions for applying inverse method for reactivity measurement

    International Nuclear Information System (INIS)

    Milosevic, M.

    1988-11-01

    This report is a brief description of the completed method for reactivity measurement. It contains a description of the experimental procedure, the needed instrumentation, and the computer code IM for determining reactivity. The objective of this instruction manual is to enable experiments and reactivity measurements on any critical system according to the methods adopted at the RB reactor.

  1. Force measuring valve assemblies, systems including such valve assemblies and related methods

    Science.gov (United States)

    DeWall, Kevin George [Pocatello, ID; Garcia, Humberto Enrique [Idaho Falls, ID; McKellar, Michael George [Idaho Falls, ID

    2012-04-17

    Methods of evaluating a fluid condition may include stroking a valve member and measuring a force acting on the valve member during the stroke. Methods of evaluating a fluid condition may include measuring a force acting on a valve member in the presence of fluid flow over a period of time and evaluating at least one of the frequency of changes in the measured force over the period of time and the magnitude of the changes in the measured force over the period of time to identify the presence of an anomaly in a fluid flow and, optionally, its estimated location. Methods of evaluating a valve condition may include directing a fluid flow through a valve while stroking a valve member, measuring a force acting on the valve member during the stroke, and comparing the measured force to a reference force. Valve assemblies and related systems are also disclosed.

  2. The spectral volume method as applied to transport problems

    International Nuclear Information System (INIS)

    McClarren, Ryan G.

    2011-01-01

    We present a new spatial discretization for transport problems: the spectral volume method. This method, first developed by Wang for computational fluid dynamics, divides each computational cell into several sub-cells and enforces particle balance on each of these sub-cells. These sub-cells are also used to build a polynomial reconstruction within the cell. The idea of dividing cells into sub-cells is a generalization of the simple corner balance and other similar schemes. The spectral volume method preserves particle conservation and the asymptotic diffusion limit. We present results from the method on two transport problems in slab geometry using discrete ordinates and second- through sixth-order spectral volume schemes. The numerical results demonstrate the accuracy and the preservation of the diffusion limit of the spectral volume method. Future work will explore possible benefits of the scheme for high-performance computing and for resolving diffusive boundary layers. (author)
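The reconstruction step at the heart of such a scheme can be illustrated in isolation. The sketch below is a simplified assumption, not the transport solver from the paper: it recovers the unique degree-(n-1) polynomial whose averages over n sub-cells match prescribed sub-cell averages, which is the kind of reconstruction a spectral volume scheme performs within each cell.

```python
import numpy as np

def sv_reconstruct(edges, averages):
    """Spectral-volume-style reconstruction: find the polynomial of
    degree n-1 whose average over each of the n sub-cells matches the
    given sub-cell averages.  Row i integrates x**j over sub-cell i
    and divides by the sub-cell width."""
    n = len(averages)
    A = np.empty((n, n))
    for i in range(n):
        a, b = edges[i], edges[i + 1]
        for j in range(n):
            A[i, j] = (b**(j + 1) - a**(j + 1)) / ((j + 1) * (b - a))
    return np.linalg.solve(A, averages)   # polynomial coefficients c_j

# Three sub-cells of a unit cell (assumed partition); averages of u(x) = x^2.
edges = np.array([0.0, 0.3, 0.7, 1.0])
avgs = np.array([(b**3 - a**3) / (3 * (b - a))
                 for a, b in zip(edges[:-1], edges[1:])])
coeffs = sv_reconstruct(edges, avgs)
print(coeffs)   # a quadratic is recovered exactly from 3 sub-cell averages
```

Because the exact solution here is itself a quadratic, the three sub-cell averages determine it exactly; in a transport solver the reconstructed polynomial would then feed upwind fluxes at sub-cell interfaces.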

  3. Literature Review of Applying Visual Method to Understand Mathematics

    Directory of Open Access Journals (Sweden)

    Yu Xiaojuan

    2015-01-01

    As a new method of understanding mathematics, visualization offers a new way of understanding mathematical principles and phenomena via image thinking and geometric explanation. It aims to deepen the understanding of the nature of concepts or phenomena and to enhance the cognitive ability of learners. This paper collates and summarizes the application of this visual method to the understanding of mathematics. It also reviews the existing research, including a visual demonstration of Euler's formula, introduces the application of the method to solving relevant mathematical problems, and points out the differences and similarities between the visualization method and the numerical-graphic combination method, as well as matters needing attention in its application.

  4. Electrode assemblies, plasma apparatuses and systems including electrode assemblies, and methods for generating plasma

    Science.gov (United States)

    Kong, Peter C; Grandy, Jon D; Detering, Brent A; Zuck, Larry D

    2013-09-17

    Electrode assemblies for plasma reactors include a structure or device for constraining an arc endpoint to a selected area or region on an electrode. In some embodiments, the structure or device may comprise one or more insulating members covering a portion of an electrode. In additional embodiments, the structure or device may provide a magnetic field configured to control a location of an arc endpoint on the electrode. Plasma generating modules, apparatus, and systems include such electrode assemblies. Methods for generating a plasma include covering at least a portion of a surface of an electrode with an electrically insulating member to constrain a location of an arc endpoint on the electrode. Additional methods for generating a plasma include generating a magnetic field to constrain a location of an arc endpoint on an electrode.

  5. Applying a life cycle approach to project management methods

    OpenAIRE

    Biggins, David; Trollsund, F.; Høiby, A.L.

    2016-01-01

    Project management is increasingly important to organisations because projects are the method by which organisations respond to their environment. A key element within project management is the standards and methods that are used to control and conduct projects, collectively known as project management methods (PMMs) and exemplified by PRINCE2, the Project Management Institute's and the Association for Project Management's Bodies of Knowledge (PMBOK and APMBOK). The purpose of t...

  6. Method for curing alkyd resin compositions by applying ionizing radiation

    International Nuclear Information System (INIS)

    Watanabe, T.; Murata, K.; Maruyama, T.

    1975-01-01

    An alkyd resin composition is prepared by dissolving a polymerizable alkyd resin having from 10 to 50 percent of oil length into a vinyl monomer. The polymerizable alkyd resin is obtained by a half-esterification reaction of an acid anhydride having a polymerizable unsaturated group and an alkyd resin modified with conjugated unsaturated oil having at least one reactive hydroxyl group per one molecule. The alkyd resin composition thus obtained is coated on an article, and ionizing radiation is applied on the article to cure the coated film thereon. (U.S.)

  7. The integral equation method applied to eddy currents

    International Nuclear Information System (INIS)

    Biddlecombe, C.S.; Collie, C.J.; Simkin, J.; Trowbridge, C.W.

    1976-04-01

    An algorithm for the numerical solution of eddy current problems is described, based on the direct solution of the integral equation for the potentials. In this method only the conducting and iron regions need to be divided into elements, and there are no boundary conditions. Results from two computer programs using this method for iron free problems for various two-dimensional geometries are presented and compared with analytic solutions. (author)

  8. Including mixed methods research in systematic reviews: Examples from qualitative syntheses in TB and malaria control

    Science.gov (United States)

    2012-01-01

    Background Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. Methods We used two completed systematic reviews to generate a sample of qualitative studies and mixed method studies in order to make an assessment of how the quality of reporting and rigor of qualitative-only studies compares with that of mixed-methods studies. Results Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Conclusions Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research. PMID:22545681

  9. System and method for detecting components of a mixture including a valving scheme for competition assays

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Chung-Yan; Piccini, Matthew E.; Singh, Anup K.

    2017-09-19

    Examples are described including measurement systems for conducting competition assays. A first chamber of an assay device may be loaded with a sample containing a target antigen. The target antigen in the sample may be allowed to bind to antibody-coated beads in the first chamber. A control layer separating the first chamber from a second chamber may then be opened to allow a labeling agent loaded in a first portion of the second chamber to bind to any unoccupied sites on the antibodies. A centrifugal force may then be applied to transport the beads through a density media to a detection region for measurement by a detection unit.

  10. System and method for detecting components of a mixture including a valving scheme for competition assays

    Science.gov (United States)

    Koh, Chung-Yan; Piccini, Matthew E.; Singh, Anup K.

    2017-07-11

    Examples are described including measurement systems for conducting competition assays. A first chamber of an assay device may be loaded with a sample containing a target antigen. The target antigen in the sample may be allowed to bind to antibody-coated beads in the first chamber. A control layer separating the first chamber from a second chamber may then be opened to allow a labeling agent loaded in a first portion of the second chamber to bind to any unoccupied sites on the antibodies. A centrifugal force may then be applied to transport the beads through a density media to a detection region for measurement by a detection unit.

  11. Electromagnetic Radiation : Variational Methods, Waveguides and Accelerators Including seminal papers of Julian Schwinger

    CERN Document Server

    Milton, Kimball A

    2006-01-01

    This is a graduate level textbook on the theory of electromagnetic radiation and its application to waveguides, transmission lines, accelerator physics and synchrotron radiation. It has grown out of lectures and manuscripts by Julian Schwinger prepared during the war at MIT's Radiation Laboratory, updated with material developed by Schwinger at UCLA in the 1970s and 1980s, and by Milton at the University of Oklahoma since 1994. The book includes a great number of straightforward and challenging exercises and problems. It is addressed to students in physics, electrical engineering, and applied mathematics seeking a thorough introduction to electromagnetism with emphasis on radiation theory and its applications.

  12. A novel technique for including surface tension in PLIC-VOF methods

    Energy Technology Data Exchange (ETDEWEB)

    Meier, M.; Yadigaroglu, G. [Swiss Federal Institute of Technology, Nuclear Engineering Lab. ETH-Zentrum, CLT, Zurich (Switzerland); Smith, B. [Paul Scherrer Inst. (PSI), Villigen (Switzerland). Lab. for Thermal-Hydraulics

    2002-02-01

    Various versions of Volume-of-Fluid (VOF) methods have been used successfully for the numerical simulation of gas-liquid flows with explicit tracking of the phase interface. Of these, Piecewise-Linear Interface Construction (PLIC-VOF) appears to be a fairly accurate, although somewhat more involved, variant. Including effects due to surface tension remains a problem, however. The most prominent methods, the Continuum Surface Force (CSF) method of Brackbill et al. and the method of Zaleski and co-workers (both referenced later), both induce spurious or 'parasitic' currents and achieve only moderate accuracy in determining the curvature. We present here a new method to determine curvature accurately using an estimator function, which is tuned with a least-squares fit against reference data. Furthermore, we show how spurious currents may be drastically reduced using the reconstructed interfaces from the PLIC-VOF method. (authors)

  13. Apply of torque method at rationalization of work

    Directory of Open Access Journals (Sweden)

    Bandurová Miriam

    2001-03-01

    The aim of the study was to analyse the consumption of time for one profession, cylinder grinder, by the torque method. The torque observation method is used to detect the sorts and sizes of time losses, the share of the individual sorts of time consumption, and the causes of time losses. In this way it is possible to determine the coefficient of employment and recovery of workers in an organizational unit. The advantages of a torque survey are the low cost of acquiring the information and the low demands placed on the worker and on the observer, who is easily trained. It is a mentally acceptable method for the subjects of the survey. The torque surveys resulted in the finding and detection of reserves in the activity of the cylinder grinders: time losses represent up to 8% of working time. With a 5-shift service and an average shift staffing of 4.4 grinders (from the statistical information of the service), the losses at cylinder grinding amount to 1.48 workers for the whole centre. Based on this information, it was recommended to cancel one job position, cylinder grinder, and to reduce the staff by one grinder. Further positions cannot be cancelled, because the cylinder grindery must adapt to the grinding line in the number of polished cylinders per shift, and the stock of semi-finished polished cylinders cannot be kept high because of frequent changes in the grinding area and in the product range. This contribution confirms the usefulness of the torque method as one of the methods to be used during job rationalization.
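The "coefficient of employment" reported by such a survey is, in essence, a proportion estimated from snapshot observations, in the manner of work sampling. As a hedged sketch (standard sampling formulas, not the paper's own calculation), the estimate and its confidence half-width can be computed as:

```python
import math

def work_sampling_summary(working_obs, total_obs, z=1.96):
    """Snapshot-observation estimate: share of productive time and
    its normal-approximation confidence half-width at level z."""
    p = working_obs / total_obs
    half_width = z * math.sqrt(p * (1 - p) / total_obs)
    return p, half_width

# Hypothetical tally: 920 of 1000 random observations found the
# grinder working (numbers are illustrative, not from the study).
p, half = work_sampling_summary(920, 1000)
print(p, half)   # ~0.92 employment coefficient, roughly +/- 1.7 pp
```

With an 8% observed loss of working time, as in the abstract, multiplying that share by the average staffing level gives the fractional-worker loss quoted for the whole centre.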

  14. Thermoluminescence as a dating method applied to the Morocco Neolithic

    International Nuclear Information System (INIS)

    Ousmoi, M.

    1989-09-01

    Thermoluminescence (TL) is an absolute dating method which is well adapted to the study of burnt clays and thus of the prehistoric ceramics belonging to the Neolithic period. The purpose of this study is to establish a first absolute chronology of the northern Moroccan Neolithic between 3000 and 7000 years before present, along with some improvements to TL dating. The first part of the thesis contains some hypotheses about the Moroccan Neolithic and some problems to be solved. We then present the TL dating method, along with new procedures to improve the quality of the results, such as the shift of quartz TL peaks or the crushing of samples. The methods employed, using 24 samples belonging to various civilisations, are the quartz inclusion method and the fine-grain technique. For the dosimetry, several methods were used: determination of the K2O content, alpha counting, and site dosimetry using TL dosimeters and a scintillation counter. The results bring some interesting answers to the archaeological question and improve the chronological schema of the northern Moroccan Neolithic: development of the old Cardial Neolithic in the north, and perhaps in the centre of Morocco (the region of Rabat), between 5500 and 7000 before present; development of the recent middle Neolithic around 4000-5000 before present, with a protocampaniform (Skhirat) slightly older than the campaniform recognized in the south of Spain; and development of the Bronze Age around 2000-4000 before present

  15. Methods of using structures including catalytic materials disposed within porous zeolite materials to synthesize hydrocarbons

    Science.gov (United States)

    Rollins, Harry W [Idaho Falls, ID; Petkovic, Lucia M [Idaho Falls, ID; Ginosar, Daniel M [Idaho Falls, ID

    2011-02-01

    Catalytic structures include a catalytic material disposed within a zeolite material. The catalytic material may be capable of catalyzing a formation of methanol from carbon monoxide and/or carbon dioxide, and the zeolite material may be capable of catalyzing a formation of hydrocarbon molecules from methanol. The catalytic material may include copper and zinc oxide. The zeolite material may include a first plurality of pores substantially defined by a crystal structure of the zeolite material and a second plurality of pores dispersed throughout the zeolite material. Systems for synthesizing hydrocarbon molecules also include catalytic structures. Methods for synthesizing hydrocarbon molecules include contacting hydrogen and at least one of carbon monoxide and carbon dioxide with such catalytic structures. Catalytic structures are fabricated by forming a zeolite material at least partially around a template structure, removing the template structure, and introducing a catalytic material into the zeolite material.

  16. Boron autoradiography method applied to the study of steels

    International Nuclear Information System (INIS)

    Gugelmeier, R.; Barcelo, G.N.; Boado, J.H.; Fernandez, C.

    1986-01-01

    The state of the boron contained in the steel microstructure is determined. Neutron autoradiography is used, permitting boron distribution images to be obtained by means of additional information which is difficult to acquire by other methods. The application of the method is described, based on the neutron irradiation of a polished steel sample, over which a cellulose nitrate sheet or another appropriate material is fixed to constitute the detector. The particles generated by the neutron-boron interaction affect the detector sheet, which is subsequently revealed with a chemical treatment and can be observed under the optical microscope. In the case of materials used for the construction of nuclear reactors, special attention must be given to the presence of boron, since, owing to its exceptionally high capacity for neutron absorption, even the smallest quantities of boron acquire importance. The adaptation of the method to metallurgical problems allows a correlation to be obtained between the boron distribution images and the material's microstructure. (M.E.L.)

  17. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validates the efficiency of the nonstandard finite difference scheme as the method of choice.
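The flavor of an NSFD scheme can be shown on the simplest linear PK building block. The sketch below is illustrative, not the system from the paper: for one-compartment elimination dC/dt = -kC, replacing the step size h by the Mickens denominator function phi(h) = (1 - exp(-kh))/k yields a scheme that is exact for every step size, unlike forward Euler.

```python
import math

def nsfd_decay(c0, k, h, steps):
    """Nonstandard finite difference scheme for dC/dt = -k*C.
    The classical denominator h is replaced by the denominator
    function phi(h) = (1 - exp(-k*h)) / k, which makes the scheme
    exact for this linear one-compartment model."""
    phi = (1.0 - math.exp(-k * h)) / k
    c = c0
    for _ in range(steps):
        c = c - k * phi * c        # (C_{n+1} - C_n) / phi = -k C_n
    return c

def euler_decay(c0, k, h, steps):
    """Standard forward-Euler scheme, for comparison."""
    c = c0
    for _ in range(steps):
        c = c - k * h * c
    return c

# Illustrative parameters (not from the paper): k = 0.5 per hour,
# C0 = 10 mg/L, step 0.5 h, simulated over 4 hours.
k, c0, h, steps = 0.5, 10.0, 0.5, 8
exact = c0 * math.exp(-k * h * steps)
print(nsfd_decay(c0, k, h, steps) - exact)   # ~0: NSFD matches the exact solution
print(euler_decay(c0, k, h, steps) - exact)  # visible truncation error
```

The dynamic consistency claimed in the abstract is visible even here: the NSFD update can never overshoot zero, whereas forward Euler goes negative once k*h > 1.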

  18. Efficient electronic structure methods applied to metal nanoparticles

    DEFF Research Database (Denmark)

    Larsen, Ask Hjorth

    of efficient approaches to density functional theory and the application of these methods to metal nanoparticles. We describe the formalism and implementation of localized atom-centered basis sets within the projector augmented wave method. Basis sets allow for a dramatic increase in performance compared....... The basis set method is used to study the electronic effects for the contiguous range of clusters up to several hundred atoms. The s-electrons hybridize to form electronic shells consistent with the jellium model, leading to electronic magic numbers for clusters with full shells. Large electronic gaps...... and jumps in Fermi level near magic numbers can lead to alkali-like or halogen-like behaviour when main-group atoms adsorb onto gold clusters. A non-self-consistent Newns-Anderson model is used to more closely study the chemisorption of main-group atoms on magic-number Au clusters. The behaviour at magic

  19. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course
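A minimal sketch of one such variance reduction technique, implicit capture (survival biasing), is given below under simplifying assumptions not taken from the course text: a purely absorbing slab, where carrying the survival probability as a particle weight removes the variance entirely, while the analog game wastes almost every history at depth.

```python
import math
import random

def analog_transmission(sigma_t, length, n, rng):
    """Analog Monte Carlo: sample a free flight and score 1 if the
    particle crosses the slab without colliding (pure absorber)."""
    hits = 0
    for _ in range(n):
        d = -math.log(rng.random()) / sigma_t   # free-flight distance
        if d > length:
            hits += 1
    return hits / n

def implicit_capture_transmission(sigma_t, length, n):
    """Implicit capture: instead of killing the particle at a
    collision, carry the survival probability as a weight.  For a
    pure absorber the weight is deterministic, so the variance
    vanishes -- an extreme illustration of the idea."""
    w = math.exp(-sigma_t * length)   # uncollided survival weight
    return sum(w for _ in range(n)) / n

sigma_t, length, n = 1.0, 5.0, 20000   # a slab 5 mean free paths deep
exact = math.exp(-sigma_t * length)
print(analog_transmission(sigma_t, length, n, random.Random(42)),
      implicit_capture_transmission(sigma_t, length, n), exact)
```

In scattering media the weights are no longer deterministic, and implicit capture is combined with Russian roulette and splitting (e.g. weight windows) to keep weights in a useful range, as the course segments referred to above discuss.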

  20. Including mixed methods research in systematic reviews: examples from qualitative syntheses in TB and malaria control.

    Science.gov (United States)

    Atkins, Salla; Launiala, Annika; Kagaha, Alexander; Smith, Helen

    2012-04-30

    Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. We used two completed systematic reviews to generate a sample of qualitative studies and mixed method studies in order to make an assessment of how the quality of reporting and rigor of qualitative-only studies compares with that of mixed-methods studies. Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research.

  1. Non-perturbative methods applied to multiphoton ionization

    International Nuclear Information System (INIS)

    Brandi, H.S.; Davidovich, L.; Zagury, N.

    1982-09-01

    The use of non-perturbative methods in the treatment of atomic ionization is discussed. Particular attention is given to schemes of the type proposed by Keldysh, in which multiphoton ionization and tunnel auto-ionization occur for high-intensity fields. These methods are shown to correspond to a certain type of expansion of the T-matrix in the intra-atomic potential; in this manner a criterion concerning the range of application of these non-perturbative schemes is suggested. A brief comparison between the ionization rates of atoms in the presence of linearly and circularly polarized light is presented. (Author)

  2. On second quantization methods applied to classical statistical mechanics

    International Nuclear Information System (INIS)

    Matos Neto, A.; Vianna, J.D.M.

    1984-01-01

    A method of expressing classical statistical results in terms of mathematical entities usually associated with the quantum field theoretical treatment of many-particle systems (Fock space, commutators, field operators, state vectors) is discussed. A linear response theory is developed using the 'second quantized' Liouville equation introduced by Schonberg. The relationship of this method to that of Prigogine et al. is briefly analyzed. The chain of equations and the spectral representations for the new classical Green's functions are presented. Generalized operators defined on Fock space are discussed. It is shown that the correlation functions can be obtained from Green's functions defined with generalized operators. (Author)

  3. Physiology for engineers applying engineering methods to physiological systems

    CERN Document Server

    Chappell, Michael

    2016-01-01

    This book provides an introduction to qualitative and quantitative aspects of human physiology. It looks at biological and physiological processes and phenomena, including a selection of mathematical models, showing how physiological problems can be mathematically formulated and studied. It also illustrates how a wide range of engineering and physics topics, including electronics, fluid dynamics, solid mechanics and control theory can be used to describe and understand physiological processes and systems. Throughout the text there are introductions to measuring and quantifying physiological processes using both signal and imaging technologies. Physiology for Engineers describes the basic structure and models of cellular systems, the structure and function of the cardiovascular system, the electrical and mechanical activity of the heart and provides an overview of the structure and function of the respiratory and nervous systems. It also includes an introduction to the basic concepts and applications of reacti...

  4. Review of PCMS and heat transfer enhancement methods applied ...

    African Journals Online (AJOL)

    Most available PCMs have low thermal conductivity, making heat transfer enhancement necessary for power applications. The various methods of heat transfer enhancement in latent heat storage systems were also reviewed systematically. The review showed that three commercially available PCMs are suitable in the ...

  5. E-LEARNING METHOD APPLIED TO TECHNICAL GRAPHICS SUBJECTS

    Directory of Open Access Journals (Sweden)

    GOANTA Adrian Mihai

    2011-11-01

    The paper presents some of the author's endeavours in creating video courses on subjects involving technical graphics for the students of the Faculty of Engineering in Braila. The steps taken in completing the method are also described, along with how feedback on the rate of access to these types of courses by the students was obtained.

  6. Multidisciplinary study: DCD method applied to patients with eating disorders

    Directory of Open Access Journals (Sweden)

    Marina Conese

    2009-06-01

    Eating disorders are quite common in clinical practice and can include out-of-control behaviours and thoughts that powerfully reinforce unhealthy eating patterns. They include anorexia nervosa, bulimia nervosa and binge eating disorder. We conducted a trial on 102 patients (89 females and 13 males) to investigate the efficacy of the "DCD method" (appropriate dietary education associated with New-Electrosculpture) in patients with obesity and eating disorders. The study underlines the efficacy of the "DCD method", especially when supported by behavioural therapy, in obese and overweight patients.

  7. A new effective Monte Carlo Midway coupling method in MCNP applied to a well logging problem

    Energy Technology Data Exchange (ETDEWEB)

    Serov, I.V.; John, T.M.; Hoogenboom, J.E

    1998-12-01

    The background of the Midway forward-adjoint coupling method including the black absorber technique for efficient Monte Carlo determination of radiation detector responses is described. The method is implemented in the general purpose MCNP Monte Carlo code. The utilization of the method is fairly straightforward and does not require any substantial extra expertise. The method was applied to a standard neutron well logging porosity tool problem. The results exhibit reliability and high efficiency of the Midway method. For the studied problem the efficiency gain is considerably higher than for a normal forward calculation, which is already strongly optimized by weight-windows. No additional effort is required to adjust the Midway model if the position of the detector or the porosity of the formation is changed. Additionally, the Midway method can be used with other variance reduction techniques if extra gain in efficiency is desired.

  8. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring

    2012-06-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  9. Probabilist methods applied to electric source problems in nuclear safety

    International Nuclear Information System (INIS)

    Carnino, A.; Llory, M.

    1979-01-01

    Nuclear safety analysts are frequently asked to quantify safety margins and evaluate hazards. For this purpose, probabilistic methods have proved the most promising. Without completely replacing deterministic safety analysis, they are now commonly used at the reliability or availability stages of systems, as well as for determining likely accident sequences. This paper describes an application concerning electric power sources, while indicating the methods used: the calculation of the probability of losing all the electric sources of a pressurized water nuclear power station, the evaluation of diesel generator reliability by failure event trees, and the determination of the accident sequences that could be initiated by a 'total loss of electric sources' event and affect the installation or the environment [fr]

  10. Progress report of Physics Division including Applied Mathematics and Computing Section. 1st October 1970 - 31st March 1971

    International Nuclear Information System (INIS)

    2004-01-01

    - the Critical Facility - have been assembled in France, where they are undergoing pre-shipment tests. No major problems have been reported. Civil engineering work on the cell to house the machine is well advanced and should be complete before the equipment arrives in August. A number of nuclear techniques are being considered for problems related to raw materials. These include photonuclear determination of heavy water, alpha backscattering determination of heavy minerals and the delayed neutron determination of fissile materials (author)

  11. Applying probabilistic methods for assessments and calculations for accident prevention

    International Nuclear Information System (INIS)

    Anon.

    1984-01-01

    The guidelines for the prevention of accidents require plant-design-specific and radioecological calculations to be made in order to show that maximum acceptable exposure values will not be exceeded in case of an accident. For this purpose, the main parameters affecting the accident scenario have to be determined by probabilistic methods. This offers the advantage that parameters can be quantified on the basis of unambiguous and realistic criteria, and final results can be defined in terms of conservatism. (DG) [de]

  12. The colour analysis method applied to homogeneous rocks

    Directory of Open Access Journals (Sweden)

    Halász Amadé

    2015-12-01

    Full Text Available Computer-aided colour analysis can facilitate cyclostratigraphic studies. Here we report on a case study involving the development of a digital colour analysis method for examination of the Boda Claystone Formation which is the most suitable in Hungary for the disposal of high-level radioactive waste. Rock type colours are reddish brown or brownish red, or any shade between brown and red. The method presented here could be used to differentiate similar colours and to identify gradual transitions between these; the latter are of great importance in a cyclostratigraphic analysis of the succession. Geophysical well-logging has demonstrated the existence of characteristic cyclic units, as detected by colour and natural gamma. Based on our research, colour, natural gamma and lithology correlate well. For core Ib-4, these features reveal the presence of orderly cycles with thicknesses of roughly 0.64 to 13 metres. Once the core has been scanned, this is a time- and cost-effective method.

  13. Comparison Study of Subspace Identification Methods Applied to Flexible Structures

    Science.gov (United States)

    Abdelghani, M.; Verhaegen, M.; Van Overschee, P.; De Moor, B.

    1998-09-01

    In the past few years, various time domain methods for identifying dynamic models of mechanical structures from modal experimental data have appeared. Much attention has been given recently to so-called subspace methods for identifying state space models. This paper presents a detailed comparison study of these subspace identification methods: the eigensystem realisation algorithm with observer/Kalman filter Markov parameters computed from input/output data (ERA/OM), the robust version of the numerical algorithm for subspace system identification (N4SID), and a refined version of the past outputs scheme of the multiple-output error state space (MOESP) family of algorithms. The comparison is performed by simulating experimental data using the five mode reduced model of the NASA Mini-Mast structure. The general conclusion is that for the case of white noise excitations as well as coloured noise excitations, the N4SID/MOESP algorithms perform equally well but give better results (improved transfer function estimates, improved estimates of the output) compared to the ERA/OM algorithm. The key computational step in the three algorithms is the approximation of the extended observability matrix of the system to be identified, for N4SID/MOESP, or of the observer for the system to be identified, for the ERA/OM. Furthermore, the three algorithms only require the specification of one dimensioning parameter.
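    The key computational step named above, approximating the extended observability matrix, is typically carried out via an SVD of a block-Hankel matrix built from the data. A minimal sketch on assumed toy data (not the ERA/OM, N4SID or MOESP implementations themselves): a sharp drop in the singular values reveals the model order.

```python
import numpy as np

def hankel_svd(y, block_rows):
    """Singular values of the block-Hankel matrix of an output sequence.
    A sharp drop after n singular values suggests a state dimension n."""
    N = len(y) - 2 * block_rows + 1
    # Each column is a window of 2*block_rows consecutive samples.
    H = np.column_stack([y[j:j + 2 * block_rows] for j in range(N)])
    return np.linalg.svd(H, compute_uv=False)

# Simulate a noise-free 2nd-order system: a lightly damped oscillation.
t = np.arange(200)
y = (0.95 ** t) * np.cos(0.3 * t)
s = hankel_svd(y, block_rows=10)
print(s[:4] / s[0])  # first two normalized singular values dominate
```

On noisy measurements the drop is less abrupt, which is where the three algorithms compared in the record differ in how they split signal from noise subspaces.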

  14. Applying Hierarchical Task Analysis Method to Discovery Layer Evaluation

    Directory of Open Access Journals (Sweden)

    Marlen Promann

    2015-03-01

    Full Text Available Libraries are implementing discovery layers to offer better user experiences. While usability tests have been helpful in evaluating the success or failure of implementing discovery layers in the library context, the focus has remained on their relative interface benefits over traditional federated search. Informal, site- and context-specific usability tests have done little to test the rigour of discovery layers against the user goals, motivations and workflows they were designed to support. This study proposes hierarchical task analysis (HTA) as an important complementary evaluation method to usability testing of discovery layers. Relevant literature is reviewed for discovery layers and the HTA method. As no previous application of HTA to the evaluation of discovery layers was found, this paper presents HTA as an expert-based, workflow-centred method (e.g. retrieving a relevant book or a journal article) for evaluating discovery layers. Purdue University’s Primo by Ex Libris was used to map eleven use cases as HTA charts. Nielsen’s goal-composition theory was used as an analytical framework to evaluate the goal charts from two perspectives: (a) the user’s physical interactions (i.e. clicks), and (b) the user’s cognitive steps (i.e. decision points for what to do next). A brief comparison of HTA and usability-test findings is offered by way of conclusion.

  15. Evaluation of Slow Release Fertilizer Applying Chemical and Spectroscopic methods

    International Nuclear Information System (INIS)

    AbdEl-Kader, A.A.; Al-Ashkar, E.A.

    2005-01-01

    Controlled-release fertilizers offer a number of advantages for crop production in newly reclaimed soils. Butadiene-styrene latex emulsion is one of the promising polymers for different purposes. In this work, a laboratory evaluation of butadiene-styrene latex emulsion 24/76 polymer loaded with a mixed fertilizer was carried out. Macro-nutrients (N, P and K) and micro-nutrients (Zn, Fe and Cu) were extracted from the polymer-fertilizer mixtures by basic extraction. A micro-sampling technique was investigated and applied to measure Zn, Fe and Cu using flame atomic absorption spectrometry, in order to overcome the nebulization difficulties caused by high-salt-content samples. The cumulative releases of macro- and micro-nutrients were assessed. The results show that the release depends on both the nutrient and the polymer concentration in the mixture. Macro-nutrients are released more efficiently than micro-nutrients as a fraction of the total added. The mixture can therefore be used to minimize micro-nutrient hazards in soils

  16. The lumped heat capacity method applied to target heating

    OpenAIRE

    Rickards, J.

    2013-01-01

    The temperature of metal samples was measured while they were bombarded by the ion beam of a particle accelerator (the Pelletron accelerator of the Instituto de Física). The evolution of the temperature with time can be explained using the lumped heat capacity method of heat transfer. A strong dependence on the type of mounting was found.
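    The lumped heat capacity balance for a beam-heated target can be sketched as follows; the power, film coefficient, mass and other parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def lumped_temperature(t, P=2.0, h=25.0, A=1e-4, m=5e-3, c=900.0, T_amb=295.0):
    """Lumped-heat-capacity temperature of a beam-heated target at time t [s].
    Valid when the Biot number h*L/k << 1 (internal gradients negligible).
    P: absorbed beam power [W]; h: convective coefficient [W/m^2 K];
    A: surface area [m^2]; m: mass [kg]; c: specific heat [J/kg K].
    Energy balance: m*c*dT/dt = P - h*A*(T - T_amb)."""
    tau = m * c / (h * A)            # thermal time constant [s]
    T_inf = T_amb + P / (h * A)      # steady-state temperature [K]
    return T_inf + (T_amb - T_inf) * math.exp(-t / tau)

print(lumped_temperature(0.0))      # starts at ambient, 295.0 K
print(lumped_temperature(1e9))      # ≈ 1095.0 K steady state
```

A poorly conducting mounting reduces the effective h*A, raising both the time constant and the steady-state temperature, which is consistent with the strong mounting dependence the record reports.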

  17. Development of a quantitative safety assessment method for nuclear I and C systems including human operators

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2004-02-01

    Conventional probabilistic safety analysis (PSA) is performed in the framework of event tree analysis and fault tree analysis. In conventional PSA, I and C systems and human operators are assumed to be independent for simplicity. But the dependency of human operators on I and C systems, and the dependency of I and C systems on human operators, are gradually being recognized as significant. I believe that it is time to consider the interdependency between I and C systems and human operators in the framework of PSA. Unfortunately, we do not seem to have appropriate methods for incorporating this interdependency in the PSA framework. Conventional human reliability analysis (HRA) methods were not developed to consider the interdependency, and modeling it with conventional event tree and fault tree analysis seems, even though it does not seem to be impossible, quite complex. To incorporate the interdependency between I and C systems and human operators, we need a new method for HRA and a new method for modeling the I and C systems, man-machine interface (MMI), and human operators for quantitative safety assessment. As a new method for modeling the I and C systems, MMI and human operators, I develop a new system reliability analysis method, reliability graph with general gates (RGGG), which can substitute for conventional fault tree analysis. RGGG is an intuitive and easy-to-use method for system reliability analysis, while being as powerful as conventional fault tree analysis. To demonstrate the usefulness of the RGGG method, it is applied to the reliability analysis of the Digital Plant Protection System (DPPS), the actual plant protection system of the Ulchin 5 and 6 nuclear power plants located in the Republic of Korea.
    The latest version of the fault tree for DPPS, developed by the Integrated Safety Assessment team at the Korea Atomic Energy Research Institute (KAERI), consists of 64
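    The RGGG formalism itself is not reproduced here; the toy below only quantifies the conventional AND/OR gate probabilities that both fault trees and reliability graphs rest on, assuming independent basic events with made-up failure probabilities.

```python
from functools import reduce

def p_and(probs):
    """Failure probability of an AND gate (all inputs must fail),
    assuming independent basic events."""
    return reduce(lambda a, b: a * b, probs, 1.0)

def p_or(probs):
    """Failure probability of an OR gate (any single input failure
    suffices), assuming independent basic events."""
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), probs, 1.0)

# Hypothetical top event: (sensor A fails AND sensor B fails) OR power fails.
top = p_or([p_and([1e-3, 1e-3]), 1e-5])
print(top)  # ≈ 1.1e-5
```

General gates in RGGG extend these basic gates (e.g. k-out-of-n and conditional gates) so that signal-flow structure can be modeled directly rather than re-encoded as a fault tree.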

  18. Modern analytic methods applied to the art and archaeology

    International Nuclear Information System (INIS)

    Tenorio C, M. D.; Longoria G, L. C.

    2010-01-01

    The interaction of diverse areas such as analytical chemistry, art history and archaeology has allowed the development of a variety of techniques used in archaeology, conservation and restoration. These methods have been used to date objects, to determine the origin of old materials, to reconstruct their use, and to identify the degradation processes that affect the integrity of works of art. The objective of this chapter is to offer a general overview of the research carried out at the Instituto Nacional de Investigaciones Nucleares (ININ) in the field of cultural heritage. A series of studies conducted in collaboration with national and foreign researchers is described briefly, carried out with the considerable support of bachelor's and master's students in archaeology from the National School of Anthropology and History, since one of our goals is to spread knowledge of these techniques among young archaeologists, so that they have a wider vision of what they could use in the immediate future and can test hypotheses with scientific methods. (Author)

  19. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  20. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  1. Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation

    Science.gov (United States)

    Arotaritei, D.; Rotariu, C.

    2015-09-01

    In this paper we present a novel method for detecting atrial fibrillation (AF) based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference system produces if-then-else rules that are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, the Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window, which produces a very large number of vectors (a massive dataset) used by the classifier. The window length is a crisp descriptor, while the remaining descriptors are interval-valued. The parameters of the hybrid system are adapted using a genetic algorithm (GA) with a single-objective fitness target: the highest values of sensitivity and specificity. The extracted rules form part of the decision system. The proposed method was tested using the Physionet MIT-BIH Atrial Fibrillation Database, and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
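    Two of the descriptors named in the record, RMSSD and Shannon entropy, can be sketched over a window of RR intervals as follows; the window contents, bin count and threshold-free printout are illustrative assumptions, not the paper's pipeline.

```python
import math

def rmssd(rr):
    """Root mean square of successive differences of RR intervals [ms].
    Large values indicate beat-to-beat irregularity, as in AF."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def shannon_entropy(rr, n_bins=16):
    """Shannon entropy (bits) of the histogram of RR intervals.
    Irregular rhythms spread mass over more bins, raising the entropy."""
    lo, hi = min(rr), max(rr)
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for x in rr:
        counts[min(int((x - lo) / width), n_bins - 1)] += 1
    n = len(rr)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# A hypothetical regular-rhythm window (RR intervals in milliseconds).
regular = [800, 805, 795, 802, 798, 801, 799, 803] * 4
print(rmssd(regular))
print(shannon_entropy(regular))
```

In a sliding-window scheme, these values are recomputed for each window position and passed as features to the classifier.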

  2. Frequency domain methods applied to forecasting electricity markets

    International Nuclear Information System (INIS)

    Trapero, Juan R.; Pedregal, Diego J.

    2009-01-01

    The changes taking place in electricity markets during the last two decades have produced an increased interest in the problem of forecasting, either load demand or prices. Many forecasting methodologies are available in the literature nowadays with mixed conclusions about which method is most convenient. This paper focuses on the modeling of electricity market time series sampled hourly in order to produce short-term (1 to 24 h ahead) forecasts. The main features of the system are that (1) models are of an Unobserved Component class that allow for signal extraction of trend, diurnal, weekly and irregular components; (2) its application is automatic, in the sense that there is no need for human intervention via any sort of identification stage; (3) the models are estimated in the frequency domain; and (4) the robustness of the method makes possible its direct use on both load demand and price time series. The approach is thoroughly tested on the PJM interconnection market and the results improve on classical ARIMA models. (author)
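    As a much simpler frequency-domain-flavoured baseline (not the authors' unobserved-components estimator), an hourly series with daily and weekly periodicities can be fitted by harmonic regression and extrapolated 24 h ahead; the synthetic load series, harmonic counts and periods below are assumptions.

```python
import numpy as np

def harmonic_design(t, periods, n_harmonics=2):
    """Design matrix of a constant plus sine/cosine pairs at each
    seasonal period (in hours) and its first few harmonics."""
    cols = [np.ones_like(t, dtype=float)]
    for P in periods:
        for k in range(1, n_harmonics + 1):
            w = 2.0 * np.pi * k / P
            cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
t = np.arange(24 * 28, dtype=float)               # four weeks of hourly data
load = 100 + 15 * np.sin(2 * np.pi * t / 24) + 5 * np.sin(2 * np.pi * t / 168)
y = load + rng.normal(0.0, 1.0, t.size)           # add measurement noise

X = harmonic_design(t, periods=(24.0, 168.0))
beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # least-squares fit

t_next = np.arange(t[-1] + 1, t[-1] + 25)         # forecast 24 h ahead
forecast = harmonic_design(t_next, periods=(24.0, 168.0)) @ beta
print(forecast.round(1))
```

A full unobserved-components model additionally tracks a slowly drifting trend and time-varying seasonal amplitudes, which this fixed-coefficient sketch cannot do.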

  3. Interesting Developments in Testing Methods Applied to Foundation Piles

    Science.gov (United States)

    Sobala, Dariusz; Tkaczyński, Grzegorz

    2017-10-01

    Both piling technologies and pile testing methods are subjects of ongoing development. New technologies, providing larger diameters or using in-situ materials, are very demanding in terms of the quality of execution of the works. That concerns the material quality and continuity, which define the integral strength of the pile. On the other hand, there is the capacity of the ground around the pile and its ability to carry the loads transferred by the shaft and the pile base. The inhomogeneous nature of soils and the relatively small number of tested piles demand a very good understanding of a small number of results. In some special cases the capacity test itself forms a significant cost in the piling contract. This work presents a brief description of selected testing methods and the authors' remarks based on cooperation with universities constantly developing new ideas. The paper presents some experience-based remarks on integrity testing by means of low-energy impact (low strain) and introduces selected Polish developments in the field of closed-end pipe pile testing based on bi-directional loading, similar to Osterberg's idea but without a sacrificial hydraulic jack. Such a test is especially suitable when steel piles are used for temporary support in rivers, where constructing a conventional testing arrangement with anchor piles or kentledge meets technical problems. In the authors' experience, such tests have not yet been used on a building site, but they offer real potential, especially when displacement control can be provided from the river bank using surveying techniques.

  4. The Strategic Analysis of Enterprise Applying the SWOT Methods

    Directory of Open Access Journals (Sweden)

    Kotsiubivska Kateryna I.

    2017-10-01

    Full Text Available The article aims to form a list of SWOT factors, defining the main factors of influence of the external and internal environment through an expert survey. The algorithm for conducting an expert survey is clarified, allocating the main factors that, according to the survey results, reached the maximum level of consensus among the experts. In view of the results of the survey and their mathematical processing, we believe it appropriate to highlight the importance of such factors as efficient management of the enterprise's capital (including optimization of its structure), the insufficient amount of financial resources (own funds, budget funds and foreign investment), and managers' unpreparedness to take risks. In the case in question, the experts agreed on the importance of State participation in the overall development of the cultural sector in Ukraine. A prospect for further research is the formation of a SWOT matrix, which will enhance the efficiency of strategic management of the financial resources of enterprises in the cultural sphere, take cultural specificities into account, make it possible to structure financial resources, and allow successful operation under market conditions.

  5. Complete Tangent Stiffness for eXtended Finite Element Method by including crack growth parameters

    DEFF Research Database (Denmark)

    Mougaard, J.F.; Poulsen, P.N.; Nielsen, L.O.

    2013-01-01

    The eXtended Finite Element Method (XFEM) is a useful tool for modeling the growth of discrete cracks in structures made of concrete and other quasi-brittle and brittle materials. However, in a standard application of XFEM, the tangent stiffness is not complete. This is a result of not including the crack geometry parameters, such as the crack length and the crack direction, directly in the virtual work formulation. For efficiency, it is essential to obtain a complete tangent stiffness. A new method is presented in this work to include, in incremental form, the crack growth parameters on equal terms with the degrees of freedom in the FEM equations. The complete tangential stiffness matrix is based on the virtual work together with the constitutive conditions at the crack tip. Introducing the crack growth parameters as direct unknowns, both the equilibrium equations and the crack tip criterion can be handled

  6. Applying Simulation Method in Formulation of Gluten-Free Cookies

    Directory of Open Access Journals (Sweden)

    Nikitina Marina

    2017-01-01

    Full Text Available At present, a priority direction in the development of new food products is the development of technologies for special-purpose products. Among these are gluten-free confectionery products, intended for people with celiac disease. Gluten-free products are in demand among consumers, and there is a need to expand the assortment and improve quality indicators. This article presents the results of studies on the development of pastry products based on amaranth flour, which does not contain gluten. The study is based on a method of simulating gluten-free confectionery recipes with a functional orientation in order to optimize their chemical composition. The resulting products will make it possible to diversify and supplement with necessary nutrients the diet of people with gluten intolerance, as well as of those who follow a gluten-free diet.

  7. Nuclear method applied in archaeological sites at the Amazon basin

    International Nuclear Information System (INIS)

    Nicoli, Ieda Gomes; Bernedo, Alfredo Victor Bellido; Latini, Rose Mary

    2002-01-01

    The aim of this work was to use nuclear methods to characterize pottery discovered inside archaeological sites with circular earth structures in Acre State, Brazil, which may contribute to research reconstructing part of the pre-history of the Amazon basin. The sites are located mainly in the hydrographic basin of the upper Purus River. Three of them were strategically chosen for ceramic collection: Lobao, in Sena Madureira County to the north; Alto Alegre, in Rio Branco County to the east; and Xipamanu I, in Xapuri County to the south. Neutron activation analysis in conjunction with multivariate statistical methods was used for ceramic characterization and classification. All the sherds collected from Alto Alegre formed a homogeneous group, distinct from the other two groups analyzed. Some of the sherds collected from Xipamanu I appeared in Lobao's urns, probably because they had the same fabrication process. (author)

  8. Applying Multi-Criteria Analysis Methods for Fire Risk Assessment

    Directory of Open Access Journals (Sweden)

    Pushkina Julia

    2015-11-01

    Full Text Available The aim of this paper is to demonstrate the application of multi-criteria analysis methods for optimising the fire risk identification and assessment process. The object of this research is fire risk and risk assessment. The subject of the research is the application of the analytic hierarchy process to modelling and assessing the influence of various fire risk factors. The results of the research can be used by insurance companies to perform detailed assessments of the fire risks at an object and to calculate the risk surcharge on an insurance premium; by state supervisory institutions to determine the compliance of an object's condition with regulatory requirements; and by real estate owners and investors to carry out actions to reduce fire risks and minimise possible losses.
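    The analytic hierarchy process mentioned above derives factor weights from a reciprocal pairwise-comparison matrix, conventionally via its principal eigenvector (Saaty's method); the factor names and judgment values below are hypothetical.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix,
    taken as the normalized principal eigenvector."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)              # principal eigenvalue
    w = np.abs(vecs[:, k].real)           # its eigenvector, made positive
    return w / w.sum()

# Hypothetical fire-risk factors: ignition sources, combustible load,
# detection/suppression; entry A[i][j] = how much factor i outweighs j.
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]
w = ahp_priorities(A)
print(w.round(3))  # ignition sources receive the largest weight
```

In practice the principal eigenvalue is also compared against the matrix size to compute a consistency ratio, rejecting judgment matrices that are too self-contradictory.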

  9. A new deconvolution method applied to ultrasonic images

    International Nuclear Information System (INIS)

    Sallard, J.

    1999-01-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The point of view adopted consists in taking the physical properties into account in the signal processing, to develop an algorithm that gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, so a priori information must be taken into account; here, the a priori information reflects the physical properties of ultrasonic signals. The defect impulse response is modeled as a Double-Bernoulli-Gaussian sequence. Deconvolution becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process. An adapted initialization procedure and an iterative algorithm enable a huge number of data to be processed quickly. Many experimental ultrasonic data reflecting usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm not only removes the waveform emitted by the transducer but also estimates the phase, a parameter useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating information, so automatic characterization should be possible in the future. (author)
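    The dissertation's estimator is a Double-Bernoulli-Gaussian maximum-likelihood scheme, which is not reproduced here. For the same convolution model y = h * x + n, a far simpler Wiener-deconvolution baseline can be sketched as follows; the wavelet, noise-to-signal ratio and reflector positions are all illustrative assumptions.

```python
import numpy as np

def wiener_deconvolve(y, h, noise_to_signal=1e-2):
    """Frequency-domain Wiener deconvolution: removes a known transducer
    waveform h from the measured trace y, regularized by a constant
    noise-to-signal power ratio."""
    n = len(y)
    H = np.fft.rfft(h, n)
    Y = np.fft.rfft(y, n)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)  # Wiener filter
    return np.fft.irfft(Y * G, n)

# A sparse 'defect response' with two reflectors, blurred by a short wavelet.
x = np.zeros(256)
x[60] = 1.0
x[75] = -0.6
h = np.array([0.2, 1.0, 0.5, 0.3])       # assumed transducer wavelet
y = np.convolve(x, h)[:256]
x_hat = wiener_deconvolve(y, h)
print(int(np.argmax(np.abs(x_hat))))     # dominant peak recovered near 60
```

Unlike the Bernoulli-Gaussian approach, this linear filter cannot exploit the sparsity prior and offers no phase estimate, which is why the dissertation pursues the more elaborate likelihood-based method.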

  10. Applying Human-Centered Design Methods to Scientific Communication Products

    Science.gov (United States)

    Burkett, E. R.; Jayanty, N. K.; DeGroot, R. M.

    2016-12-01

    Knowing your users is a critical part of developing anything to be used or experienced by a human being. User interviews, journey maps, and personas are all techniques commonly employed in human-centered design practices because they have proven effective for informing the design of products and services that meet the needs of users. Many non-designers are unaware of the usefulness of personas and journey maps. Scientists who are interested in developing more effective products and communication can adopt and employ user-centered design approaches to better reach intended audiences. Journey mapping is a qualitative data-collection method that captures the story of a user's experience over time as related to the situation or product that requires development or improvement. Journey maps help define user expectations, where they are coming from, what they want to achieve, what questions they have, their challenges, and the gaps and opportunities that can be addressed by designing for them. A persona is a tool used to describe the goals and behavioral patterns of a subset of potential users or customers. The persona is a qualitative data model that takes the form of a character profile, built upon data about the behaviors and needs of multiple users. Gathering data directly from users avoids the risk of basing models on assumptions, which are often limited by misconceptions or gaps in understanding. Journey maps and user interviews together provide the data necessary to build the composite character that is the persona. Because a persona models the behaviors and needs of the target audience, it can then be used to make informed product design decisions. We share the methods and advantages of developing and using personas and journey maps to create more effective science communication products.

  11. Translation Methods Applied in Translating Quotations in “the Secret” by Rhonda

    OpenAIRE

    FEBRIANTI, VICKY

    2014-01-01

    Keywords: Translation Methods, The Secret, Quotations. Translation helps humans get information written in any language, even foreign ones; translation therefore appears in printed media. Books have been a popular printed medium. The Secret, written by Rhonda Byrne, is a popular self-help book which has been translated into 50 languages, including Indonesian (“The Secret”, n.d., para. 5-6). This study is meant to find out the translation methods applied in The Secret. The wr...

  12. Simplified Methods Applied to Nonlinear Motion of Spar Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Haslum, Herbjoern Alf

    2000-07-01

    Simplified methods for prediction of motion response of spar platforms are presented. The methods are based on first and second order potential theory. Nonlinear drag loads and the effect of the pumping motion in a moon-pool are also considered. Large amplitude pitch motions coupled to extreme amplitude heave motions may arise when spar platforms are exposed to long period swell. The phenomenon is investigated theoretically and explained as a Mathieu instability. It is caused by nonlinear coupling effects between heave, surge, and pitch. It is shown that for a critical wave period, the envelope of the heave motion makes the pitch motion unstable. For the same wave period, a higher order pitch/heave coupling excites resonant heave response. This mutual interaction largely amplifies both the pitch and the heave response. As a result, the pitch/heave instability revealed in this work is more critical than the previously well known Mathieu's instability in pitch which occurs if the wave period (or the natural heave period) is half the natural pitch period. The Mathieu instability is demonstrated both by numerical simulations with a newly developed calculation tool and in model experiments. In order to learn more about the conditions for this instability to occur and also how it may be controlled, different damping configurations (heave damping disks and pitch/surge damping fins) are evaluated both in model experiments and by numerical simulations. With increased drag damping, larger wave amplitudes and more time are needed to trigger the instability. The pitch/heave instability is a low probability of occurrence phenomenon. Extreme wave periods are needed for the instability to be triggered, about 20 seconds for a typical 200m draft spar. However, it may be important to consider the phenomenon in design since the pitch/heave instability is very critical. It is also seen that when classical spar platforms (constant cylindrical cross section and about 200m draft
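    The pitch/heave coupling described in the record has the standard form of a damped Mathieu equation. A hedged sketch of the governing equation, with notation assumed here rather than taken from the thesis ($\eta_5$: pitch, $\omega_5$: natural pitch frequency, $\zeta$: damping ratio, $\varepsilon$: parametric excitation amplitude set by the heave envelope, $\omega_e$: excitation frequency):

```latex
\ddot{\eta}_5 + 2\zeta\omega_5\,\dot{\eta}_5
  + \omega_5^2\left(1 + \varepsilon\cos\omega_e t\right)\eta_5 = 0
% Principal Mathieu instability region: \omega_e \approx 2\omega_5,
% i.e. the excitation period is about half the natural pitch period;
% larger damping \zeta narrows the unstable region and delays onset.
```

This form makes the two effects reported in the record plausible: added drag damping raises the excitation amplitude and time needed to trigger instability, while the heave envelope supplies the parametric term that destabilizes pitch.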

  13. Earthquake analysis of structures including structure-soil interaction by a substructure method

    International Nuclear Information System (INIS)

    Chopra, A.K.; Guttierrez, J.A.

    1977-01-01

    A general substructure method for analyzing the response of nuclear power plant structures to earthquake ground motion, including the effects of structure-soil interaction, is summarized. The method is applicable to complex structures idealized as finite element systems, with the soil region treated either as a continuum, for example as a viscoelastic halfspace, or idealized as a finite element system. The halfspace idealization permits reliable analysis for sites where essentially similar soils extend to large depths and there is no rigid boundary such as a soil-rock interface. For sites where layers of soft soil are underlain by rock at shallow depth, finite element idealization of the soil region is appropriate; in this case the direct and substructure methods lead to equivalent results, but the latter provides the better alternative. Treating the free-field motion directly as the earthquake input in the substructure method eliminates the deconvolution calculations and the related assumption, regarding the type and direction of earthquake waves, required in the direct method. The substructure method is computationally efficient because the two substructures, the structure and the soil region, are analyzed separately; more important, it takes advantage of the fact that the response to earthquake ground motion is essentially contained in the lower few natural modes of vibration of the structure on a fixed base. For sites where essentially similar soils extend to large depths and there is no obvious rigid boundary such as a soil-rock interface, numerical results for the earthquake response of a nuclear reactor structure are presented to demonstrate that the commonly used finite element method may lead to unacceptable errors, whereas the substructure method leads to reliable results.
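
    The claim that the response is "essentially contained in the lower few natural modes" can be checked on a toy fixed-base model. The sketch below uses a hypothetical 5-storey shear building with made-up stiffness and mass values, not the paper's reactor structure, and compares a direct solve with a two-mode modal truncation.

```python
import numpy as np

# Hypothetical 5-storey shear building on a fixed base.
n = 5
m, k = 2.0e5, 4.0e7                       # kg and N/m (illustrative values)
M = m * np.eye(n)
K = np.zeros((n, n))
for i in range(n):
    K[i, i] = 2.0 * k if i < n - 1 else k  # top storey has one spring only
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k
f = np.zeros(n)
f[-1] = 1.0e5                             # harmonic load amplitude at the roof
omega = 3.0                               # rad/s, below the first natural frequency

# Direct solve of the full system (steady-state amplitudes).
u_exact = np.linalg.solve(K - omega**2 * M, f)

# Modal solution truncated to the two lowest fixed-base modes.
w2, phi = np.linalg.eigh(K / m)           # M = m*I, so this is the eigenproblem
u_modal = np.zeros(n)
for j in range(2):
    q = (phi[:, j] @ f) / (m * (w2[j] - omega**2))
    u_modal += q * phi[:, j]
```

    For excitation below the first resonance, the two-mode reconstruction agrees with the full solve to within a few percent, which is the efficiency the substructure method exploits.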

  14. Nondestructive methods of analysis applied to oriental swords

    Directory of Open Access Journals (Sweden)

    Edge, David

    2015-12-01

    Full Text Available Various neutron techniques were employed at the Budapest Nuclear Centre in an attempt to find the most useful method for analysing the high-carbon steels found in Oriental arms and armour, such as those in the Wallace Collection, London. Neutron diffraction was found to be the most useful in terms of identifying such steels and also indicating the presence of hidden patterns.

  15. Perturbation Method of Analysis Applied to Substitution Measurements of Buckling

    Energy Technology Data Exchange (ETDEWEB)

    Persson, Rolf

    1966-11-15

    Calculations with two-group perturbation theory on substitution experiments with homogenized regions show that a condensation of the results into a one-group formula is possible, provided that a transition region is introduced in a proper way. In heterogeneous cores the transition region comes in as a consequence of a new cell concept. By making use of progressive substitutions the properties of the transition region can be regarded as fitting parameters in the evaluation procedure. The thickness of the region is approximately equal to the sum of 1/(1/τ + 1/L²)^(1/2) for the test and reference regions. Consequently a region where L² >> τ, e.g. D₂O, contributes √τ to the thickness. In cores where τ >> L², e.g. H₂O assemblies, the thickness of the transition region is determined by L. Experiments on rod lattices in D₂O and on test regions of D₂O alone (where B² = −1/L²) are analysed. The lattice measurements, where the pitches differed by a factor of √2, gave excellent results, whereas the determination of the diffusion length in D₂O by this method was not quite successful. Even regions containing only one test element can be used in a meaningful way in the analysis.
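
    The two limiting cases of the thickness formula can be checked numerically. The numbers below are illustrative only, chosen to make the limits visible, and are not the report's lattice parameters.

```python
import math

def term(tau, L2):
    """One region's contribution to the transition-region thickness:
    1/sqrt(1/tau + 1/L^2), with tau and L^2 in cm^2."""
    return 1.0 / math.sqrt(1.0 / tau + 1.0 / L2)

def transition_thickness(test, ref):
    """Thickness ~ sum of the test- and reference-region terms;
    each argument is a (tau, L2) pair."""
    return term(*test) + term(*ref)

# Limiting cases quoted in the abstract (illustrative numbers, not physical data):
d2o_like = term(125.0, 1.0e4)   # L^2 >> tau  ->  term approaches sqrt(tau)
h2o_like = term(1000.0, 4.0)    # tau >> L^2  ->  term approaches L = 2
```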

  16. Complexity methods applied to turbulence in plasma astrophysics

    Science.gov (United States)

    Vlahos, L.; Isliker, H.

    2016-09-01

    In this review, many of the well-known tools for the analysis of complex systems are used to study the global coupling of the turbulent convection zone with the solar atmosphere, where the magnetic energy is dissipated explosively. Several well-documented observations are not easy to interpret with the use of magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the active regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organized characteristics of the explosive magnetic energy release, and (4) the very efficient acceleration of particles during the flaring periods in the solar corona. We briefly review the work published over the last twenty-five years on the above issues and propose solutions using methods borrowed from the analysis of complex systems. The scenario that emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a nonlinear force-free (NLFF) magnetic extrapolation numerical code we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and unstable current sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the local redistribution of the magnetic field, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. Using a cellular automaton and following the simple rules of self-organized criticality (SOC), we were able to reproduce the statistical characteristics of the
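
    The SOC cellular-automaton idea can be sketched with the classic sandpile model: slow random driving (standing in for flux emergence), a local critical threshold, and toppling rules that produce avalanches of all sizes. This is the generic Bak-Tang-Wiesenfeld automaton, not the authors' solar model; grid size and step counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def relax(grid, zc=4):
    """Topple every site at or above the critical threshold zc until the grid
    is stable again; return the avalanche size (number of topplings)."""
    size = 0
    while True:
        over = np.argwhere(grid >= zc)
        if len(over) == 0:
            return size
        for i, j in over:
            grid[i, j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    grid[ni, nj] += 1   # grains crossing the edge are lost

def drive(n=16, steps=6000):
    """Slow driving: add one grain at a random site, then relax fully."""
    grid = np.zeros((n, n), dtype=int)
    sizes = []
    for _ in range(steps):
        i, j = rng.integers(0, n, size=2)
        grid[i, j] += 1
        sizes.append(relax(grid))
    return grid, np.array(sizes)

grid, sizes = drive()
```

    In the stationary state the avalanche sizes span a broad range, which is the statistical signature the review reproduces for flare energy release.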

  17. Spine surgeon's kinematics during discectomy, part II: operating table height and visualization methods, including microscope.

    Science.gov (United States)

    Park, Jeong Yoon; Kim, Kyung Hyun; Kuh, Sung Uk; Chin, Dong Kyu; Kim, Keun Su; Cho, Yong Eun

    2014-05-01

    The surgeon's spine angle during surgery has been studied ergonomically, and the kinematics of the surgeon's spine have been related to musculoskeletal fatigue and pain. Spine angles varied depending on operating table height and visualization method, and in a previous paper we showed that the use of a loupe and a table height at the midpoint between the umbilicus and the sternum are optimal for reducing musculoskeletal loading. However, no studies have previously included a microscope as a possible visualization method. The objective of this study is to assess differences in surgeon spine angles depending on operating table height and visualization method, including the microscope. We enrolled 18 experienced spine surgeons for this study, each of whom performed a discectomy using a spine surgery simulator. Three different methods were used to visualize the surgical field (naked eye, loupe, microscope) and three different operating table heights (anterior superior iliac spine, umbilicus, the midpoint between the umbilicus and the sternum) were studied. Whole spine angles were compared for three different views during the discectomy simulation: midline, ipsilateral, and contralateral. A 16-camera optoelectronic motion analysis system was used, and 16 markers were placed from the head to the pelvis. Lumbar lordosis, thoracic kyphosis, cervical lordosis, and occipital angle were compared between the different operating table heights and visualization methods as well as a natural standing position. Whole spine angles differed significantly depending on visualization method. All parameters were closer to natural standing values when discectomy was performed with a microscope, and there were no differences between the naked eye and the loupe. Whole spine angles were also found to differ from the natural standing position depending on operating table height, and became closer to natural standing position values as the operating table height increased, independent of the visualization method.

  18. Earthquake analysis of structures including structure-soil interaction by a substructure method

    International Nuclear Information System (INIS)

    Chopra, A.K.; Guttierrez, J.A.

    1977-01-01

    A general substructure method for analyzing the response of nuclear power plant structures to earthquake ground motion, including the effects of structure-soil interaction, is summarized. The method is applicable to complex structures idealized as finite element systems, with the soil region treated either as a continuum, for example as a viscoelastic halfspace, or idealized as a finite element system. The halfspace idealization permits reliable analysis for sites where essentially similar soils extend to large depths and there is no rigid boundary such as a soil-rock interface. For sites where layers of soft soil are underlain by rock at shallow depth, finite element idealization of the soil region is appropriate; in this case, the direct and substructure methods lead to equivalent results, but the latter provides the better alternative. Treating the free-field motion directly as the earthquake input in the substructure method eliminates the deconvolution calculations and the related assumption, regarding the type and direction of earthquake waves, required in the direct method. (Auth.)

  19. Method and apparatus for controlling a powertrain system including a multi-mode transmission

    Science.gov (United States)

    Hessell, Steven M.; Morris, Robert L.; McGrogan, Sean W.; Heap, Anthony H.; Mendoza, Gil J.

    2015-09-08

    A powertrain including an engine and torque machines is configured to transfer torque through a multi-mode transmission to an output member. A method for controlling the powertrain includes employing a closed-loop speed control system to control torque commands for the torque machines in response to a desired input speed. Upon approaching a power limit of a power storage device transferring power to the torque machines, power limited torque commands are determined for the torque machines in response to the power limit and the closed-loop speed control system is employed to determine an engine torque command in response to the desired input speed and the power limited torque commands for the torque machines.
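
    The control idea in the claim, a closed-loop speed controller whose torque command is clamped when a storage-device power limit is reached, can be sketched with a simple PI loop on a lumped inertia. All numbers and the anti-windup choice are hypothetical; this is the general pattern, not the patented controller.

```python
J = 0.5            # kg*m^2, lumped driveline inertia (made-up)
P_LIM = 20_000.0   # W, battery discharge limit (made-up)
KP, KI = 8.0, 20.0 # PI gains (made-up)
dt = 0.001         # s

def run(omega_des=300.0, t_end=3.0):
    omega, integ, overrun, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        err = omega_des - omega
        integ += err * dt
        T_cmd = KP * err + KI * integ        # closed-loop torque command
        T_max = P_LIM / max(omega, 1.0)      # power-limited torque
        T = min(T_cmd, T_max)
        if T < T_cmd:
            integ -= err * dt                # anti-windup: freeze the integrator
        overrun = max(overrun, T * omega - P_LIM)
        omega += (T / J) * dt                # J * domega/dt = T (no load torque)
        t += dt
    return omega, overrun

omega_final, overrun = run()
```

    The speed converges to the setpoint while the delivered power never exceeds the limit; during the clamped phase the machine rides the power-limited torque curve, as the claim describes.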

  20. Near-infrared radiation curable multilayer coating systems and methods for applying same

    Science.gov (United States)

    Bowman, Mark P; Verdun, Shelley D; Post, Gordon L

    2015-04-28

    Multilayer coating systems, methods of applying and related substrates are disclosed. The coating system may comprise a first coating comprising a near-IR absorber, and a second coating deposited on at least a portion of the first coating. Methods of applying a multilayer coating composition to a substrate may comprise applying a first coating comprising a near-IR absorber, applying a second coating over at least a portion of the first coating, and curing the coating with near-infrared radiation.

  1. Reliability and limitation of various diagnostic methods including nuclear medicine in myocardial disease

    International Nuclear Information System (INIS)

    Tokuyasu, Yoshiki; Kusakabe, Kiyoko; Yamazaki, Toshio

    1981-01-01

    Electrocardiography (ECG), echocardiography, nuclear methods, cardiac catheterization, left ventriculography and endomyocardial biopsy (biopsy) were performed in 40 cases of cardiomyopathy (CM), 9 of endocardial fibroelastosis and 19 of specific heart muscle disease, and the usefulness and limitations of each method were comparatively evaluated. In CM, various methods including biopsy were performed. The 40 patients were classified into 3 groups, i.e., hypertrophic (17), dilated (20) and non-hypertrophic, non-dilated (3), on the basis of left ventricular ejection fraction and hypertrophy of the ventricular wall. The hypertrophic group was divided into 4 subgroups: 9 septal, 4 apical, 2 posterior and 2 anterior. The nuclear study is useful in assessing the site of abnormal ventricular thickening, perfusion defects and ventricular function. Echocardiography is most useful in detecting asymmetric septal hypertrophy. Biopsy gives the sole diagnostic clue, especially in non-hypertrophic, non-dilated cardiomyopathy. ECG is useful in all cases, but no correlation with the site of disproportional hypertrophy was obtained. (J.P.N.)

  2. A method for including external feed in depletion calculations with CRAM and implementation into ORIGEN

    International Nuclear Information System (INIS)

    Isotalo, A.E.; Wieselquist, W.A.

    2015-01-01

    Highlights: • A method for handling external feed in depletion calculations with CRAM. • The source term can have polynomial or exponentially decaying time-dependence. • CRAM with a source term and adjoint capability implemented in ORIGEN in SCALE. • The new solver is faster and more accurate than the original solver of ORIGEN. - Abstract: A method for including external feed with polynomial time dependence in depletion calculations with the Chebyshev Rational Approximation Method (CRAM) is presented, and the implementation of CRAM in the ORIGEN module of the SCALE suite is described. In addition to being able to handle time-dependent feed rates, the new solver also adds the capability to perform adjoint calculations. Results obtained with the new CRAM solver and the original depletion solver of ORIGEN are compared to high-precision reference calculations, which show the new solver to be orders of magnitude more accurate. Furthermore, in most cases, the new solver is up to several times faster because it does not require the substepping needed by the original one.
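
    For a constant feed the depletion solution is N(t) = e^{At} N₀ + A⁻¹(e^{At} − I)s, and CRAM's role is to evaluate the matrix exponential cheaply for the huge, stiff burnup matrices. The sketch below illustrates only the structure of that solution on a hypothetical two-nuclide chain, using a plain eigendecomposition in place of CRAM.

```python
import numpy as np

# Hypothetical two-nuclide chain: 1 -> 2 -> (out), constant feed into nuclide 1.
lam1, lam2 = 0.3, 0.05                  # 1/s, made-up decay constants
s = np.array([2.0, 0.0])                # atoms/s, external feed rate
A = np.array([[-lam1, 0.0],
              [ lam1, -lam2]])
N0 = np.zeros(2)

def expm(M):
    """Dense matrix exponential via eigendecomposition (M is diagonalizable
    here); production codes use CRAM's rational approximation instead."""
    w, V = np.linalg.eig(M)
    return (V * np.exp(w)) @ np.linalg.inv(V)

def deplete(N0, t):
    """Particular-solution form for a constant feed s."""
    E = expm(A * t)
    return E @ N0 + np.linalg.solve(A, (E - np.eye(2)) @ s)

N = deplete(N0, 200.0)
```

    At long times the chain reaches the analytic equilibrium N₁ = s₁/λ₁ and N₂ = s₁/λ₂, a convenient sanity check for any feed-capable depletion solver.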

  3. Improved Riccati Transfer Matrix Method for Free Vibration of Non-Cylindrical Helical Springs Including Warping

    Directory of Open Access Journals (Sweden)

    A.M. Yu

    2012-01-01

    Full Text Available Free vibration equations for non-cylindrical (conical, barrel, and hyperboloidal) helical springs with noncircular cross-sections, which consist of 14 first-order ordinary differential equations with variable coefficients, are theoretically derived using spatially curved beam theory. In the formulation, the effect of warping upon natural frequencies and vibrating mode shapes is studied for the first time, in addition to the influences of rotary inertia and of shear and axial deformation. The natural frequencies of the springs are determined by the use of the improved Riccati transfer matrix method. The element transfer matrix used in the solution is calculated using the scaling and squaring method with Padé approximations. Three examples are presented for three types of springs with different cross-sectional shapes under clamped-clamped boundary conditions. The accuracy of the proposed method has been verified against FEM results using three-dimensional solid elements (Solid 45) in the ANSYS code. Numerical results reveal that the warping effect is more pronounced for non-cylindrical helical springs than for cylindrical helical springs, and should be taken into consideration in the free vibration analysis of such springs.
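
    The scaling-and-squaring idea used for the element transfer matrix is easy to demonstrate. The paper pairs it with Padé approximants; the sketch below substitutes a truncated Taylor series for brevity, which is the same scheme in spirit but less efficient.

```python
import numpy as np

def expm_ss(A, order=8):
    """Matrix exponential by scaling and squaring: scale A so its norm is
    small, approximate exp of the scaled matrix (here by a Taylor polynomial;
    Pade approximants are the usual choice), then square back up."""
    norm = np.linalg.norm(A, 1)
    k = max(0, int(np.ceil(np.log2(norm))) + 1) if norm > 0 else 0
    B = A / 2**k                        # now ||B||_1 <= 1/2
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for j in range(1, order + 1):       # Taylor series of exp(B)
        term = term @ B / j
        E = E + term
    for _ in range(k):                  # undo the scaling: exp(A) = exp(B)^(2^k)
        E = E @ E
    return E

A = np.array([[0.0, 6.0], [-6.0, 0.0]])  # rotation generator; exp is known exactly
E = expm_ss(A)
```

    Without the scaling step, a degree-8 polynomial would be far too short for a matrix of norm 6; with it, the result matches the exact rotation matrix to high accuracy.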

  4. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    Science.gov (United States)

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…
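
    The Edwards and Parry model regresses the outcome on both component measures and their quadratic terms rather than on a difference score. A minimal sketch on simulated data (all coefficients and sample sizes made up) recovers the surface by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated congruence data: outcome z depends quadratically on the pair (x, y),
# e.g. a person score x and an environment score y (hypothetical coefficients).
n = 500
x = rng.normal(size=n)
y = rng.normal(size=n)
# b3 = b5 = -0.8, b4 = 1.6 together encode -0.8*(x - y)^2: a pure congruence effect.
true_b = np.array([1.0, 0.5, -0.5, -0.8, 1.6, -0.8])   # b0, b1, b2, b3, b4, b5
X = np.column_stack([np.ones(n), x, y, x**2, x * y, y**2])
z = X @ true_b + rng.normal(scale=0.1, size=n)

# Polynomial regression: z = b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2 + e
b_hat, *_ = np.linalg.lstsq(X, z, rcond=None)
```

    Unlike a difference score, the fitted surface lets the separate curvature and interaction terms be inspected, which is the flexibility the paper discusses.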

  5. A robust two-node, 13 moment quadrature method of moments for dilute particle flows including wall bouncing

    Science.gov (United States)

    Sun, Dan; Garmory, Andrew; Page, Gary J.

    2017-02-01

    For flows where the particle number density is low and the Stokes number is relatively high, as found when sand or ice is ingested into aircraft gas turbine engines, streams of particles can cross each other's path or bounce from a solid surface without being influenced by inter-particle collisions. The aim of this work is to develop an Eulerian method to simulate these types of flow. To this end, a two-node quadrature-based moment method using 13 moments is proposed. In the proposed algorithm thirteen moments of particle velocity, including cross-moments of second order, are used to determine the weights and abscissas of the two nodes and to set up the association between the velocity components in each node. Previous Quadrature Method of Moments (QMOM) algorithms either use more than two nodes, leading to increased computational expense, or are shown here to give incorrect results under some circumstances. This method gives the computational efficiency advantages of only needing two particle phase velocity fields whilst ensuring that a correct combination of weights and abscissas is returned for any arbitrary combination of particle trajectories without the need for any further assumptions. Particle crossing and wall bouncing with arbitrary combinations of angles are demonstrated using the method in a two-dimensional scheme. The ability of the scheme to include the presence of drag from a carrier phase is also demonstrated, as is bouncing off surfaces with inelastic collisions. The method is also applied to the Taylor-Green vortex flow test case and is found to give results superior to the existing two-node QMOM method and is in good agreement with results from Lagrangian modelling of this case.
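
    The core moment-inversion step of a two-node quadrature method can be shown in one dimension: four raw moments determine two weights and two abscissas via the Jacobi matrix of the orthogonal-polynomial recurrence (a Wheeler/Golub-Welsch step). This is only the 1-D building block; the paper's 13-moment scheme couples the velocity components on top of it.

```python
import numpy as np

def two_node_quadrature(m):
    """Recover two weights and abscissas from raw moments m0..m3."""
    m0, m1, m2, m3 = m
    a0 = m1 / m0                                   # recurrence coefficients
    b1 = m2 / m0 - a0**2
    a1 = (m3 - 2*a0*m2 + a0**2 * m1) / (m2 - 2*a0*m1 + a0**2 * m0)
    J = np.array([[a0, np.sqrt(b1)],
                  [np.sqrt(b1), a1]])              # Jacobi matrix
    xs, V = np.linalg.eigh(J)                      # eigenvalues = abscissas
    ws = m0 * V[0, :]**2                           # weights from eigenvectors
    return ws, xs

# Two crossing particle streams with velocities 1 and 3 (arbitrary units):
w_true = np.array([0.4, 0.6])
x_true = np.array([1.0, 3.0])
m = np.array([np.sum(w_true * x_true**p) for p in range(4)])
ws, xs = two_node_quadrature(m)
```

    The two streams are recovered exactly from their moments, which is what lets an Eulerian solver represent particle trajectory crossing without Lagrangian tracking.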

  6. Engine including hydraulically actuated valvetrain and method of valve overlap control

    Science.gov (United States)

    Cowgill, Joel [White Lake, MI

    2012-05-08

    An exhaust valve control method may include displacing an exhaust valve in communication with the combustion chamber of an engine to an open position using a hydraulic exhaust valve actuation system and returning the exhaust valve to a closed position using the hydraulic exhaust valve actuation assembly. During closing, the exhaust valve may be displaced for a first duration from the open position to an intermediate closing position at a first velocity by operating the hydraulic exhaust valve actuation assembly in a first mode. The exhaust valve may be displaced for a second duration greater than the first duration from the intermediate closing position to a fully closed position at a second velocity at least eighty percent less than the first velocity by operating the hydraulic exhaust valve actuation assembly in a second mode.

  7. Flexible barrier film, method of forming same, and organic electronic device including same

    Science.gov (United States)

    Blizzard, John; Tonge, James Steven; Weidner, William Kenneth

    2013-03-26

    A flexible barrier film has a thickness of from greater than zero to less than 5,000 nanometers and a water vapor transmission rate of no more than 1×10⁻² g/m²/day at 22 °C and 47% relative humidity. The flexible barrier film is formed from a composition, which comprises a multi-functional acrylate. The composition further comprises the reaction product of an alkoxy-functional organometallic compound and an alkoxy-functional organosilicon compound. A method of forming the flexible barrier film includes the steps of disposing the composition on a substrate and curing the composition to form the flexible barrier film. The flexible barrier film may be utilized in organic electronic devices.

  8. Solution and study of nodal neutron transport equation applying the LTSN-DiagExp method

    International Nuclear Information System (INIS)

    Hauser, Eliete Biasotto; Pazos, Ruben Panta; Vilhena, Marco Tullio de; Barros, Ricardo Carvalho de

    2003-01-01

    In this paper we report advances in the three-dimensional nodal discrete-ordinates approximations of the neutron transport equation for Cartesian geometry. We use the combined collocation method for the angular variables and a nodal approach for the spatial variables. By nodal approach we mean the iterated transverse integration of the SN equations. This procedure leads to a set of one-dimensional averaged angular fluxes in each spatial variable. The resulting system of equations is solved with the LTSN method, first applying the Laplace transform to the set of nodal SN equations and then obtaining the solution by symbolic computation. We include the LTSN method by diagonalization to solve the nodal neutron transport equation, and then we outline the convergence of these nodal-LTSN approximations with the help of a norm associated with the quadrature formula used to approximate the integral term of the neutron transport equation. (author)

  9. Artificial intelligence methods applied for quantitative analysis of natural radioactive sources

    International Nuclear Information System (INIS)

    Medhat, M.E.

    2012-01-01

    Highlights: ► Basic description of artificial neural networks. ► Natural gamma-ray sources and the problem of their detection. ► Application of a neural network for peak detection and activity determination. - Abstract: The artificial neural network (ANN) is one of the artificial intelligence methods used for modeling and for treating uncertainty in different applications. The objective of the proposed work was to apply ANN to identify isotopes and to predict the uncertainties of their activities for some natural radioactive sources. The method was tested by analyzing gamma-ray spectra emitted from natural radionuclides in soil samples, detected by high-resolution gamma-ray spectrometry based on HPGe (high-purity germanium). The principle of the suggested method is described, including the definition of the relevant input parameters, input data scaling, and network training. The results show satisfactory agreement between the obtained and predicted values using the neural network.

  10. Numerical consideration for multiscale statistical process control method applied to nuclear material accountancy

    International Nuclear Information System (INIS)

    Suzuki, Mitsutoshi; Hori, Masato; Asou, Ryoji; Usuda, Shigekazu

    2006-01-01

    The multiscale statistical process control (MSSPC) method is applied to clarify the elements of material unaccounted for (MUF) in large-scale reprocessing plants using numerical calculations. Continuous wavelet functions are used to decompose the process data, which simulate batch operation superimposed by various types of disturbance, and the disturbance components included in the data are separated in time and frequency space. The MSSPC diagnosis is applied to distinguish abnormal events in the process data and shows how to detect abrupt and protracted diversions using principal component analysis. The quantitative performance of MSSPC for the time-series data is shown with average run lengths given by Monte Carlo simulation, for comparison with the non-detection probability β. Recent discussion about bias corrections in material balances is introduced, and another approach is presented to evaluate MUF without assuming a measurement error model. (author)
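
    The multiscale idea, separating a material-balance series into scales so that an abrupt event stands out in the detail coefficients, can be sketched with a one-level Haar transform. This is a toy stand-in for the continuous wavelet decomposition of the paper; the series, the diversion size, and its location are all invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated MUF sequence: measurement noise plus an abrupt (hypothetical)
# diversion starting at batch 65.
n = 128
x = rng.normal(scale=0.2, size=n)
x[65:] += 2.0

# One-level Haar wavelet transform: pairwise smooth and detail parts.
approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-frequency (protracted) content
detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-frequency (abrupt) content

k = int(np.argmax(np.abs(detail)))          # strongest localized change
```

    The abrupt step produces one outlying detail coefficient at the pair straddling batch 65 (index 32), while a slow protracted loss would instead accumulate in the smooth part, which is why the two diversion types are monitored at different scales.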

  11. Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation

    Science.gov (United States)

    Hatten, Noble; Russell, Ryan P.

    2017-12-01

    A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
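
    A minimal fixed-step GLIRK sketch shows where the parallelism comes from: within one step, the stage dynamics evaluations are independent once the current stage guesses are fixed. The tableau below is the standard 2-stage (order 4) Gauss-Legendre one; the fixed-point stage solver and test problem are simplifications of what a production propagator would use (Newton iteration, variable steps).

```python
import numpy as np

# Butcher tableau of the 2-stage, order-4 Gauss-Legendre IRK method.
s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6.0],
              [0.25 + s3 / 6.0, 0.25]])
b = np.array([0.5, 0.5])
c = np.array([0.5 - s3 / 6.0, 0.5 + s3 / 6.0])

def glirk_step(f, t, y, h, iters=50):
    """One GLIRK step; the stage equations k_i = f(t + c_i h, y + h sum_j A_ij k_j)
    are solved by fixed-point iteration.  Each sweep's two f-evaluations are
    independent, which is what the paper exploits with one thread per stage."""
    k = np.array([f(t, y), f(t, y)])
    for _ in range(iters):
        k = np.array([f(t + c[i] * h, y + h * (A[i] @ k)) for i in range(2)])
    return y + h * (b @ k)

# Test problem: y' = -y, y(0) = 1, integrated to t = 1 (exact answer exp(-1)).
f = lambda t, y: -y
y, h = 1.0, 0.1
for n in range(10):
    y = glirk_step(f, n * h, y, h)
```

    With only ten steps the order-4 method already lands within about 10⁻⁷ of the exact value, illustrating why high-order implicit RK schemes can pay off despite the per-step stage solve.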

  12. Zirconium-based alloys, nuclear fuel rods and nuclear reactors including such alloys, and related methods

    Science.gov (United States)

    Mariani, Robert Dominick

    2014-09-09

    Zirconium-based metal alloy compositions comprise zirconium, a first additive in which the permeability of hydrogen decreases with increasing temperature at least over a temperature range extending from 350 °C to 750 °C, and a second additive having a solubility in zirconium over that temperature range. At least one of the solubility of the first additive in the second additive and the solubility of the second additive in the first additive over the temperature range from 350 °C to 750 °C is higher than the solubility of the second additive in zirconium over the same range. Nuclear fuel rods include a cladding material comprising such metal alloy compositions, and nuclear reactors include such fuel rods. Methods are used to fabricate such zirconium-based metal alloy compositions.

  13. Second-principles method for materials simulations including electron and lattice degrees of freedom

    Science.gov (United States)

    García-Fernández, Pablo; Wojdeł, Jacek C.; Íñiguez, Jorge; Junquera, Javier

    2016-05-01

    We present a first-principles-based (second-principles) scheme that permits large-scale materials simulations including both atomic and electronic degrees of freedom on the same footing. The method is based on a predictive quantum-mechanical theory—e.g., density functional theory—and its accuracy can be systematically improved at a very modest computational cost. Our approach is based on dividing the electron density of the system into a reference part—typically corresponding to the system's neutral, geometry-dependent ground state—and a deformation part—defined as the difference between the actual and reference densities. We then take advantage of the fact that the bulk part of the system's energy depends on the reference density alone; this part can be efficiently and accurately described by a force field, thus avoiding explicit consideration of the electrons. Then, the effects associated to the difference density can be treated perturbatively with good precision by working in a suitably chosen Wannier function basis. Further, the electronic model can be restricted to the bands of interest. All these features combined yield a very flexible and computationally very efficient scheme. Here we present the basic formulation of this approach, as well as a practical strategy to compute model parameters for realistic materials. We illustrate the accuracy and scope of the proposed method with two case studies, namely, the relative stability of various spin arrangements in NiO (featuring complex magnetic interactions in a strongly-correlated oxide) and the formation of a two-dimensional electron gas at the interface between band insulators LaAlO3 and SrTiO3 (featuring subtle electron-lattice couplings and screening effects). We conclude by discussing ways to overcome the limitations of the present approach (most notably, the assumption of a fixed bonding topology), as well as its many envisioned possibilities and future extensions.

  14. Applicability of a panel method, which includes nonlinear effects, to a forward-swept-wing aircraft

    Science.gov (United States)

    Ross, J. C.

    1984-01-01

    The ability of a lower-order panel method, VSAERO, to accurately predict the lift and pitching moment of a complete forward-swept-wing/canard configuration was investigated. The program can simulate nonlinear effects including boundary-layer displacement thickness, wake roll-up, and, to a limited extent, separated wakes. The predictions were compared with experimental data obtained using a small-scale model in the 7- by 10-Foot Wind Tunnel at NASA Ames Research Center. For the particular configuration under investigation, wake roll-up had only a small effect on the force and moment predictions. The effect of the displacement-thickness modeling was to reduce the lift-curve slope slightly, thus bringing the predicted lift into good agreement with the measured value. Pitching moment predictions were also improved by the boundary-layer simulation. The separation modeling was found to be sensitive to user inputs, but appears to give a reasonable representation of a separated wake. In general, the nonlinear capabilities of the code were found to improve the agreement with experimental data. The usefulness of the code would be enhanced by improving the reliability of the separated-wake modeling and by the addition of a leading-edge separation model.

  15. Methods of forming aluminum oxynitride-comprising bodies, including methods of forming a sheet of transparent armor

    Science.gov (United States)

    Chu, Henry Shiu-Hung [Idaho Falls, ID; Lillo, Thomas Martin [Idaho Falls, ID

    2008-12-02

    The invention includes methods of forming an aluminum oxynitride-comprising body. For example, a mixture is formed which comprises A:B:C in a respective molar ratio in the range of 9:3.6-6.2:0.1-1.1, where "A" is Al₂O₃, "B" is AlN, and "C" is a total of one or more of B₂O₃, SiO₂, Si-Al-O-N, and TiO₂. The mixture is sintered at a temperature of at least 1,600 °C at a pressure of no greater than 500 psia, effective to form an aluminum oxynitride-comprising body which is at least internally transparent and has at least 99% of maximum theoretical density.

  16. Method of fabricating electrodes including high-capacity, binder-free anodes for lithium-ion batteries

    Science.gov (United States)

    Ban, Chunmei; Wu, Zhuangchun; Dillon, Anne C.

    2017-01-10

    An electrode (110) is provided that may be used in an electrochemical device (100) such as an energy storage/discharge device, e.g., a lithium-ion battery, or an electrochromic device, e.g., a smart window. Hydrothermal techniques and vacuum filtration methods were applied to fabricate the electrode (110). The electrode (110) includes an active portion (140) that is made up of electrochemically active nanoparticles, with one embodiment utilizing 3d transition-metal oxides to provide the electrochemical capacity of the electrode (110). The active material (140) may include other electrochemical materials, such as silicon, tin, lithium manganese oxide, and lithium iron phosphate. The electrode (110) also includes a matrix or net (170) of electrically conductive nanomaterial that acts to connect and/or bind the active nanoparticles (140) such that no binder material is required in the electrode (110), which allows more active material (140) to be included to improve energy density and other desirable characteristics of the electrode. The matrix material (170) may take the form of carbon nanotubes, such as single-wall, double-wall, and/or multi-wall nanotubes, and be provided as about 2 to 30 percent by weight of the electrode (110), with the rest being the active material (140).

  17. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    Science.gov (United States)

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-01-01

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203
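The line-coincidence idea can be illustrated with a short sketch (the example data are invented, and this is an illustration of the principle, not the authors' exact construction): a rotation from Rodrigues' formula makes a characteristic line of the source frame parallel to its counterpart in the target frame, and a translation then makes the two lines coincide; both steps are packed into one homogeneous transformation matrix.

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix mapping unit vector a onto unit vector b (Rodrigues' formula)."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v = np.cross(a, b)                     # rotation axis (unnormalized)
    c = np.dot(a, b)                       # cosine of the rotation angle
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])       # skew-symmetric cross-product matrix
    return np.eye(3) + K + K @ K / (1.0 + c)   # assumes a != -b

# A characteristic line in each frame: a direction d and a point p on the line.
d_src, p_src = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0])
d_tgt, p_tgt = np.array([0.0, 1.0, 0.0]), np.array([1.0, 2.0, 3.0])

R = rotation_aligning(d_src, d_tgt)   # rotation making the lines parallel
t = p_tgt - R @ p_src                 # translation making them coincide

# 4x4 homogeneous transform combining both steps
T = np.eye(4)
T[:3, :3], T[:3, 3] = R, t
```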

  18. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    Directory of Open Access Journals (Sweden)

    Bailing Liu

    2016-02-01

    Full Text Available Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.

  19. Boundary element methods applied to two-dimensional neutron diffusion problems

    International Nuclear Information System (INIS)

    Itagaki, Masafumi

    1985-01-01

    The Boundary element method (BEM) has been applied to two-dimensional neutron diffusion problems. The boundary integral equation and its discretized form have been derived. Some numerical techniques have been developed, which can be applied to critical and fixed-source problems including multi-region ones. Two types of test programs have been developed according to whether the 'zero-determinant search' or the 'source iteration' technique is adopted for criticality search. Both programs require only the fluxes and currents on boundaries as the unknown variables. The former allows a reduction in computing time and memory in comparison with the finite element method (FEM). The latter is not always efficient in terms of computing time due to the domain integral related to the inhomogeneous source term; however, this domain integral can be replaced by the equivalent boundary integral for a region with a non-multiplying medium or with a uniform source, resulting in a significant reduction in computing time. The BEM, as well as the FEM, is well suited for solving irregular geometrical problems for which the finite difference method (FDM) is unsuited. The BEM also solves problems with infinite domains, which cannot be solved by the ordinary FEM and FDM. Some simple test calculations are made to compare the BEM with the FEM and FDM, and discussions are made concerning the relative merits of the BEM and problems requiring future solution. (author)

  20. Methodical basis of training of cadets for the military applied heptathlon competitions

    Directory of Open Access Journals (Sweden)

    R.V. Anatskyi

    2017-12-01

    Full Text Available The purpose of the research is to develop methodical bases for training cadets for military applied heptathlon competitions. Material and methods: second- and third-year cadets aged 19-20 years (n=20) participated in the research. Cadets were selected on the basis of their best results in the exercises included in the program of military applied heptathlon competitions (100 m run, 50 m freestyle swimming, Kalashnikov rifle shooting, pull-ups, obstacle course, grenade throwing, 3000 m run). Preparation took place at a training center. All trainings were organized and carried out according to the methodical basics: in a week-long preparation microcycle, cadets trained twice a day on five days, had one training on Saturday, and rested on Sunday. The selected exercises were performed with individual loads. Results: sport scores demonstrated top results in the 100 m run, 3000 m run and pull-ups. The indices for the "obstacle course" exercise were much lower than expected. Rather low results were demonstrated in swimming and shooting. Conclusions: the results indicate the need to improve the quality of cadets' weapons proficiency and their physical readiness to perform the exercises requiring complex demonstration of all physical qualities.

  1. Applied Electromagnetics

    Energy Technology Data Exchange (ETDEWEB)

    Yamashita, H.; Marinova, I.; Cingoski, V. (eds.)

    2002-07-01

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics.

  2. Applied Electromagnetics

    International Nuclear Information System (INIS)

    Yamashita, H.; Marinova, I.; Cingoski, V.

    2002-01-01

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics

  3. Nutrient Runoff Losses from Liquid Dairy Manure Applied with Low-Disturbance Methods.

    Science.gov (United States)

    Jokela, William; Sherman, Jessica; Cavadini, Jason

    2016-09-01

    Manure applied to cropland is a source of phosphorus (P) and nitrogen (N) in surface runoff and can contribute to impairment of surface waters. Tillage immediately after application incorporates manure into the soil, which may reduce nutrient loss in runoff as well as N loss via NH3 volatilization. However, tillage also incorporates crop residue, which reduces surface cover and may increase erosion potential. We applied liquid dairy manure in a silage corn (Zea mays L.)-cereal rye (Secale cereale L.) cover crop system in late October using methods designed to incorporate manure with minimal soil and residue disturbance. These include strip-till injection and tine aerator-band manure application, which were compared with standard broadcast application, either incorporated with a disk or left on the surface. Runoff was generated with a portable rainfall simulator (42 mm/h for 30 min) three separate times: (i) 2 to 5 d after the October manure application, (ii) in early spring, and (iii) after tillage and planting. In the postmanure application runoff, the highest losses of total P and dissolved reactive P were from surface-applied manure. Dissolved P loss was reduced 98% by strip-till injection; this result was not statistically different from the no-manure control. Reductions from the aerator band method and disk incorporation were 53 and 80%, respectively. Total P losses followed a similar pattern, with 87% reduction from injected manure. Runoff losses of N had generally similar patterns to those of P. Losses of P and N were, in most cases, lower in the spring rain simulations with fewer significant treatment effects. Overall, results show that low-disturbance manure application methods can significantly reduce nutrient runoff losses compared with surface application while maintaining residue cover better than incorporation by tillage. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  4. Applications of the conjugate gradient FFT method in scattering and radiation including simulations with impedance boundary conditions

    Science.gov (United States)

    Barkeshli, Kasra; Volakis, John L.

    1991-01-01

    The theoretical and computational aspects related to the application of the Conjugate Gradient FFT (CGFFT) method in computational electromagnetics are examined. The advantages of applying the CGFFT method to a class of large scale scattering and radiation problems are outlined. The main advantages of the method stem from its iterative nature which eliminates a need to form the system matrix (thus reducing the computer memory allocation requirements) and guarantees convergence to the true solution in a finite number of steps. Results are presented for various radiators and scatterers including thin cylindrical dipole antennas, thin conductive and resistive strips and plates, as well as dielectric cylinders. Solutions of integral equations derived on the basis of generalized impedance boundary conditions (GIBC) are also examined. The boundary conditions can be used to replace the profile of a material coating by an impedance sheet or insert, thus, eliminating the need to introduce unknown polarization currents within the volume of the layer. A general full wave analysis of 2-D and 3-D rectangular grooves and cavities is presented which will also serve as a reference for future work.
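The computational trick at the heart of CGFFT, iterating conjugate gradients while evaluating the operator with FFTs instead of ever forming the system matrix, can be sketched on a generic 1-D circulant (circular-convolution) system; this is an illustration of the idea, not the electromagnetic integral-equation kernels of the report:

```python
import numpy as np

n = 64
# Symmetric circulant kernel: the matvec A @ x is a circular convolution,
# computed in O(n log n) with the FFT rather than with an n-by-n matrix.
k = np.zeros(n); k[0], k[1], k[-1] = 4.0, 1.0, 1.0
fk = np.fft.fft(k)                      # eigenvalues of A (all positive here)

def matvec(x):
    return np.real(np.fft.ifft(fk * np.fft.fft(x)))

def cgfft(b, tol=1e-10, maxit=200):
    """Conjugate gradient driven by the FFT-based operator; A is never stored."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rr = r @ r
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:       # guaranteed to converge for SPD systems
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

b = np.sin(2 * np.pi * np.arange(n) / n)
x = cgfft(b)
```

The memory saving the abstract mentions comes from storing only vectors of length n while the dense operator would need n-by-n storage.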

  5. A Review of Auditing Methods Applied to the Content of Controlled Biomedical Terminologies

    Science.gov (United States)

    Zhu, Xinxin; Fan, Jung-Wei; Baorto, David M.; Weng, Chunhua; Cimino, James J.

    2012-01-01

    Although controlled biomedical terminologies have been with us for centuries, it is only in the last couple of decades that close attention has been paid to the quality of these terminologies. The result of this attention has been the development of auditing methods that apply formal methods to assessing whether terminologies are complete and accurate. We have performed an extensive literature review to identify published descriptions of these methods and have created a framework for characterizing them. The framework considers manual, systematic and heuristic methods that use knowledge (within or external to the terminology) to measure quality factors of different aspects of the terminology content (terms, semantic classification, and semantic relationships). The quality factors examined included concept orientation, consistency, non-redundancy, soundness and comprehensive coverage. We reviewed 130 studies that were retrieved based on keyword search on publications in PubMed, and present our assessment of how they fit into our framework. We also identify which terminologies have been audited with the methods and provide examples to illustrate each part of the framework. PMID:19285571

  6. Implementation aspects of the Boundary Element Method including viscous and thermal losses

    DEFF Research Database (Denmark)

    Cutanda Henriquez, Vicente; Juhl, Peter Møller

    2014-01-01

    The implementation of viscous and thermal losses using the Boundary Element Method (BEM) is based on the Kirchhoff’s dispersion relation and has been tested in previous work using analytical test cases and comparison with measurements. Numerical methods that can simulate sound fields in fluids...

  7. How to apply the optimal estimation method to your lidar measurements for improved retrievals of temperature and composition

    Science.gov (United States)

    Sica, R. J.; Haefele, A.; Jalali, A.; Gamage, S.; Farhani, G.

    2018-04-01

    The optimal estimation method (OEM) has a long history of use in passive remote sensing, but has only recently been applied to active instruments like lidar. The OEM's advantages over traditional techniques include obtaining a full systematic and random uncertainty budget plus the ability to work with the raw measurements without first applying instrument corrections. In our meeting presentation we will show you how to use the OEM for temperature and composition retrievals for Rayleigh-scatter, Raman-scatter and DIAL lidars.
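For the linear-Gaussian case the OEM retrieval and its uncertainty budget reduce to a closed form (the standard Rodgers formulation; the forward model, covariances and state values below are invented for illustration, not taken from the presentation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear forward model y = K x + noise  (m measurements, p state elements)
m, p = 20, 3
K = rng.normal(size=(m, p))
x_true = np.array([1.0, -2.0, 0.5])
Se = 0.01 * np.eye(m)                   # measurement-noise covariance
Sa = 4.0 * np.eye(p)                    # a priori covariance
xa = np.zeros(p)                        # a priori state
y = K @ x_true + rng.multivariate_normal(np.zeros(m), Se)

# Optimal-estimation (maximum a posteriori) solution:
Se_inv, Sa_inv = np.linalg.inv(Se), np.linalg.inv(Sa)
S_hat = np.linalg.inv(Sa_inv + K.T @ Se_inv @ K)    # posterior covariance
x_hat = xa + S_hat @ K.T @ Se_inv @ (y - K @ xa)

# The diagonal of S_hat is the random-uncertainty budget for each retrieved element.
sigma = np.sqrt(np.diag(S_hat))
```

For lidar the forward model is nonlinear, so this update is applied iteratively (Gauss-Newton), but the structure of the estimate and of the covariance is the same.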

  8. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation.

    Science.gov (United States)

    Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies.
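The weight-selection step at the heart of SCM can be sketched as a constrained least-squares problem: find nonnegative donor weights summing to one that best reproduce the treated unit's pre-intervention outcomes. The donor data below are a toy example (the actual study also matches on covariates and uses a richer optimizer):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {w >= 0, sum(w) = 1} (sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def scm_weights(Y_donors, y_treated, steps=20000, lr=None):
    """Donor weights minimizing pre-intervention outcome mismatch (projected gradient)."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(Y_donors, 2) ** 2   # step below 1/Lipschitz
    w = np.full(Y_donors.shape[1], 1.0 / Y_donors.shape[1])
    for _ in range(steps):
        grad = Y_donors.T @ (Y_donors @ w - y_treated)
        w = project_simplex(w - lr * grad)
    return w

# Toy donor pool: pre-period outcome series for 4 donor municipalities.
rng = np.random.default_rng(1)
T_pre = 12
Y_donors = rng.uniform(1.0, 3.0, size=(T_pre, 4))
true_w = np.array([0.5, 0.3, 0.2, 0.0])
y_treated = Y_donors @ true_w           # treated unit is a mixture of donors

w = scm_weights(Y_donors, y_treated)
synthetic = Y_donors @ w                # counterfactual series; the post-period
                                        # gap to the treated unit is the estimated effect
```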

  9. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation

    Science.gov (United States)

    Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on

  10. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation.

    Directory of Open Access Journals (Sweden)

    Erin O Sills

    Full Text Available Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and

  11. A coupling method for a cardiovascular simulation model which includes the Kalman filter.

    Science.gov (United States)

    Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya

    2012-01-01

    Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate different phenomena. However, coupling methods require a significant amount of calculation, since a system of non-linear equations must be solved at each timestep. We therefore proposed a coupling method which decreases the amount of calculation by using the Kalman filter. In our method, the Kalman filter calculates approximations of the solution to the system of non-linear equations at each timestep. The approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing spline predictor, the proposed method required 49.4% fewer iterations.
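The idea of letting a Kalman filter supply initial values for the per-timestep nonlinear solve can be sketched on a scalar toy equation (a hypothetical stand-in for the cardiovascular coupling equations, not the authors' model): a constant-velocity filter tracks the solution trajectory, and its one-step prediction warm-starts Newton's method, reducing the iteration count relative to simply reusing the previous solution.

```python
import numpy as np

def newton(g, dg, x0, tol=1e-10, maxit=50):
    """Newton's method; returns the root and the number of iterations used."""
    x, n = x0, 0
    while abs(g(x)) > tol and n < maxit:
        x -= g(x) / dg(x)
        n += 1
    return x, n

# Nonlinear "coupling" equation solved once per timestep: x^3 + x = d(t)
d = lambda t: 2.0 + 1.5 * np.sin(0.2 * t)

# Constant-velocity Kalman filter tracking the solution trajectory;
# its one-step prediction becomes the Newton initial guess.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition for (x, dx/dt)
H = np.array([[1.0, 0.0]])               # we "observe" the converged solution
Q, R = 1e-4 * np.eye(2), np.array([[1e-8]])
s, P = np.array([1.0, 0.0]), np.eye(2)

iters_kf, iters_naive, x_prev = 0, 0, 1.0
for t in range(60):
    g  = lambda x: x**3 + x - d(t)
    dg = lambda x: 3 * x**2 + 1

    # naive warm start: previous timestep's solution
    x_naive, n_naive = newton(g, dg, x_prev)
    iters_naive += n_naive

    # Kalman warm start: predict, solve, then update the filter with the solution
    s_pred, P_pred = F @ s, F @ P @ F.T + Q
    x_kf, n_kf = newton(g, dg, s_pred[0])
    iters_kf += n_kf
    K_gain = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    s = s_pred + (K_gain @ (np.array([x_kf]) - H @ s_pred)).ravel()
    P = (np.eye(2) - K_gain @ H) @ P_pred
    x_prev = x_naive
```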

  12. Turbomachine combustor nozzle including a monolithic nozzle component and method of forming the same

    Science.gov (United States)

    Stoia, Lucas John; Melton, Patrick Benedict; Johnson, Thomas Edward; Stevenson, Christian Xavier; Vanselow, John Drake; Westmoreland, James Harold

    2016-02-23

    A turbomachine combustor nozzle includes a monolithic nozzle component having a plate element and a plurality of nozzle elements. Each of the plurality of nozzle elements includes a first end extending from the plate element to a second end. The plate element and plurality of nozzle elements are formed as a unitary component. A plate member is joined with the nozzle component. The plate member includes an outer edge that defines first and second surfaces and a plurality of openings extending between the first and second surfaces. The plurality of openings are configured and disposed to register with and receive the second end of corresponding ones of the plurality of nozzle elements.

  13. EVALUATION OF METHODS FOR ESTIMATING FATIGUE PROPERTIES APPLIED TO STAINLESS STEELS AND ALUMINUM ALLOYS

    Directory of Open Access Journals (Sweden)

    Taylor Mac Intyer Fonseca Junior

    2013-12-01

    Full Text Available This work evaluates seven methods for estimating fatigue properties as applied to stainless steels and aluminum alloys. Experimental strain-life curves are compared to the estimates obtained by each method. After applying the seven estimation methods to 14 material conditions, it was found that fatigue life can be estimated with good accuracy only by the Bäumel-Seeger method, and only for the martensitic stainless steel tempered between 300°C and 500°C. The differences between mechanical behavior during monotonic and cyclic loading are probably the reason for the absence of a reliable method for estimating fatigue behavior from monotonic properties for a group of materials.
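The strain-life curve underlying all such estimation methods is the Basquin/Coffin-Manson relation; the sketch below derives its four parameters from the ultimate tensile strength using the commonly quoted Bäumel-Seeger "Uniform Material Law" constants for steels, then inverts the curve for life. Treat the constants and the Rm value as assumptions for illustration, not as the paper's data:

```python
import numpy as np

def uml_steel(Rm, E=206_000.0):
    """Bäumel-Seeger 'Uniform Material Law' estimates for steels (MPa units).
    Constants as commonly quoted in the literature; an assumption here."""
    psi = 1.0 if Rm / E <= 3e-3 else 1.375 - 125.0 * Rm / E
    return dict(sf=1.5 * Rm, b=-0.087, ef=0.59 * psi, c=-0.58)

def strain_amplitude(N2, E, sf, b, ef, c):
    """Basquin + Coffin-Manson strain-life relation; N2 = reversals (2N)."""
    return sf / E * N2**b + ef * N2**c

def life_for_strain(eps_a, E, sf, b, ef, c, lo=1.0, hi=1e9):
    """Invert the strain-life curve by bisection (it is monotone decreasing)."""
    for _ in range(200):
        mid = np.sqrt(lo * hi)            # bisect in log space
        if strain_amplitude(mid, E, sf, b, ef, c) > eps_a:
            lo = mid
        else:
            hi = mid
    return mid

E = 206_000.0
p = uml_steel(Rm=600.0)                   # hypothetical steel, Rm = 600 MPa
N2 = life_for_strain(0.004, E, **p)       # reversals to failure at eps_a = 0.4%
```

A study like the one abstracted then compares curves such as `strain_amplitude` (with parameters from each estimation method) against the experimentally fitted curve.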

  14. Analysis of coupled neutron-gamma radiations, applied to shieldings in multigroup albedo method

    International Nuclear Information System (INIS)

    Dunley, Leonardo Souza

    2002-01-01

    The principal mathematical tools frequently available for calculations in Nuclear Engineering, including coupled neutron-gamma radiation shielding problems, involve the full Transport Theory or the Monte Carlo techniques. The Multigroup Albedo Method applied to shieldings is characterized by following the radiations through distinct layers of materials, allowing the determination of the neutron and gamma fractions reflected from, transmitted through and absorbed in the irradiated media when a neutronic stream hits the first layer of material, independently of flux calculations. The method is thus a complementary tool of great didactic value due to its clarity and simplicity in solving neutron and/or gamma shielding problems. The outstanding results achieved in previous works motivated the elaboration and development of the study presented in this dissertation. The radiation balance resulting from the incidence of a neutronic stream into a shielding composed of 'm' non-multiplying slab layers for neutrons was determined by the Albedo method, considering 'n' energy groups for neutrons and 'g' energy groups for gammas. It was assumed that there is no upscattering of neutrons or gammas; however, neutrons from any energy group are able to produce gammas of all energy groups. The ANISN code, for an angular quadrature order S2, was used as a standard for comparison of the results obtained by the Albedo method. It was therefore necessary to choose an identical system configuration for both the ANISN and Albedo methods: six neutron energy groups, eight gamma energy groups, and three slab layers (iron - aluminum - manganese). The excellent results expressed in comparative tables show great agreement between the values determined by the deterministic code adopted as standard and the values determined by the computational program created using the Albedo method and the algorithm developed for coupled neutron
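The layer-by-layer bookkeeping of the albedo approach can be illustrated in a one-group sketch (the dissertation's method is multigroup with neutron-to-gamma coupling; the reflection/transmission probabilities below are invented, and each slab is assumed symmetric): each slab is reduced to a reflection probability r and a transmission probability t, and adjacent layers are combined by summing the geometric series of back-and-forth reflections, with no flux calculation anywhere.

```python
from functools import reduce

def combine(layer1, layer2):
    """Compose two slabs' (r, t) pairs, accounting for multiple reflections."""
    r1, t1 = layer1
    r2, t2 = layer2
    denom = 1.0 - r1 * r2                  # geometric series of inter-layer bounces
    R = r1 + t1 * r2 * t1 / denom          # reflected back out of the front face
    T = t1 * t2 / denom                    # transmitted through both slabs
    return (R, T)

layers = [(0.30, 0.50), (0.20, 0.60), (0.25, 0.55)]   # (r, t) per slab, made up
R, T = reduce(combine, layers)
absorbed = 1.0 - R - T                     # radiation balance for the whole shield
```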

  15. Method of extruding and packaging a thin sample of reactive material including forming the extrusion die

    International Nuclear Information System (INIS)

    Lewandowski, E.F.; Peterson, L.L.

    1985-01-01

    This invention teaches a method of cutting a narrow slot in an extrusion die with an electrical discharge machine by first drilling spaced holes at the ends of where the slot will be, whereby the oil can flow through the holes and slot to flush the material eroded away as the slot is being cut. The invention further teaches a method of extruding a very thin ribbon of solid highly reactive material such as lithium or sodium through the die in an inert atmosphere of nitrogen, argon or the like as in a glovebox. The invention further teaches a method of stamping out sample discs from the ribbon and of packaging each disc by sandwiching it between two aluminum sheets and cold welding the sheets together along an annular seam beyond the outer periphery of the disc. This provides a sample of high purity reactive material that can have a long shelf life

  16. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    Science.gov (United States)

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  17. An Aural Learning Project: Assimilating Jazz Education Methods for Traditional Applied Pedagogy

    Science.gov (United States)

    Gamso, Nancy M.

    2011-01-01

    The Aural Learning Project (ALP) was developed to incorporate jazz method components into the author's classical practice and her applied woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical applied curriculum. The components of the…

  18. Method for including detailed evaluation of daylight levels in Be06

    DEFF Research Database (Denmark)

    Petersen, Steffen

    2008-01-01

    Good daylight conditions in office buildings have become an important issue due to new European regulatory demands which include energy consumption for electrical lighting in the building energy frame. Good daylight conditions in offices are thus in increased focus as an energy conserving measure. ... In order to evaluate whether a certain design is good daylight design or not, building designers must perform detailed evaluation of daylight levels, including the daylight performance of dynamic solar shadings, and include these in the energy performance evaluation. However, the mandatory national ... calculation tool in Denmark (Be06) for evaluating the energy performance of buildings is currently using a simple representation of available daylight in a room and simple assumptions regarding the control of shading devices. In a case example, this is leading to an overestimation of the energy consumption...

  19. Development of Extended Ray-tracing method including diffraction, polarization and wave decay effects

    Science.gov (United States)

    Yanagihara, Kota; Kubo, Shin; Dodin, Ilya; Nakamura, Hiroaki; Tsujimura, Toru

    2017-10-01

    Geometrical Optics Ray-tracing is a reasonable numerical analytic approach for describing the Electron Cyclotron resonance Wave (ECW) in slowly varying, spatially inhomogeneous plasma. It is well known that the results of this conventional method are adequate in most cases. However, in Helical fusion plasma, which has a complicated magnetic structure, strong magnetic shear combined with a large density scale length can cause mode coupling of waves outside the last closed flux surface, and the complicated absorption structure requires a strongly focused wave for ECH. Since the conventional Ray Equations describing ECW have no terms for diffraction, polarization and wave decay effects, we cannot accurately describe mode coupling of waves, strongly focused waves, the behavior of waves in an inhomogeneous absorption region, and so on. As a fundamental solution to these problems, we consider an extension of the Ray-tracing method. The specific process is planned as follows. First, calculate the reference ray by the conventional method, and define the local ray-base coordinate system along the reference ray. Then, calculate the evolution of the distributions of amplitude and phase on the ray-base coordinates step by step. The progress of our extended method will be presented.

  20. Indication of Importance of Including Soil Microbial Characteristics into Biotope Valuation Method.

    Czech Academy of Sciences Publication Activity Database

    Trögl, J.; Pavlorková, Jana; Packová, P.; Seják, J.; Kuráň, P.; Kuráň, J.; Popelka, J.; Pacina, J.

    2016-01-01

    Vol. 8, No. 3 (2016), article No. 253. ISSN 2071-1050 Institutional support: RVO:67985858 Keywords: biotope assessment * biotope valuation method * soil microbial communities Subject RIV: DJ - Water Pollution; Quality Impact factor: 1.789, year: 2016

  1. Complementary variational principle method applied to thermal conductivities of a plasma in a uniform magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Sehgal, A K; Gupta, S C [Punjabi Univ., Patiala (India). Dept. of Physics

    1982-12-14

    The complementary variational principles method (CVP) is applied to the thermal conductivities of a plasma in a uniform magnetic field. The results of computations show that the CVP derived results are very useful.

  2. Method of preparing a negative electrode including lithium alloy for use within a secondary electrochemical cell

    Science.gov (United States)

    Tomczuk, Zygmunt; Olszanski, Theodore W.; Battles, James E.

    1977-03-08

    A negative electrode that includes a lithium alloy as the active material is prepared by briefly submerging a porous, electrically conductive substrate in a melt of the alloy. Prior to solidification, excess melt can be removed by vibrating or otherwise manipulating the filled substrate to expose interstitial surfaces. Electrodes such as solid lithium-aluminum filled within a metal foam substrate are provided.

  3. Thick electrodes including nanoparticles having electroactive materials and methods of making same

    Science.gov (United States)

    Xiao, Jie; Lu, Dongping; Liu, Jun; Zhang, Jiguang; Graff, Gordon L.

    2017-02-21

    Electrodes having nanostructure and/or utilizing nanoparticles of active materials and having high mass loadings of the active materials can be made to be physically robust and free of cracks and pinholes. The electrodes include nanoparticles having electroactive material, which nanoparticles are aggregated with carbon into larger secondary particles. The secondary particles can be bound with a binder to form the electrode.

  4. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Koivistoinen Teemu

    2007-01-01

    Full Text Available As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.

  5. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Alpo Värri

    2007-01-01

    Full Text Available As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.

  6. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Science.gov (United States)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
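
The moment-matrix idea behind TFM-SVD can be sketched as follows. This is a hypothetical illustration, not the authors' exact construction: the choice of moments, the 4-by-4 matrix layout, and the function names are assumptions.

```python
import numpy as np

def tfm_svd_features(signal):
    """Hypothetical TFM-SVD-style features: statistical moments of the
    time series and of its Fourier magnitude spectrum are packed into a
    small fixed-shape matrix whose singular values serve as features."""
    x = np.asarray(signal, dtype=float)
    spec = np.abs(np.fft.rfft(x))          # frequency-series counterpart

    def moments(v):
        c = v - v.mean()
        return [v.mean(), v.std(), (c**3).mean(), (c**4).mean()]

    # Fixed-structure matrix: one row of moments per derived series.
    M = np.array([moments(x), moments(spec),
                  moments(np.diff(x)), moments(np.diff(spec))])
    return np.linalg.svd(M, compute_uv=False)  # several SVs, not just one

feats = tfm_svd_features(np.sin(np.linspace(0.0, 8.0 * np.pi, 256)))
```

Unlike applying SVD directly to the 1-by-m sample array, which yields a single singular value (the Euclidean norm of the samples), the fixed matrix yields as many singular values as its smaller dimension.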

  7. Method for pulse control in a laser including a stimulated brillouin scattering mirror system

    Science.gov (United States)

    Dane, C. Brent; Hackel, Lloyd; Harris, Fritz B.

    2007-10-23

    A laser system, such as a master oscillator/power amplifier system, comprises a gain medium and a stimulated Brillouin scattering (SBS) mirror system. The SBS mirror system includes an in situ filtered SBS medium comprising a compound with a small negative non-linear index of refraction, such as a perfluoro compound. An SBS relay telescope having a telescope focal point includes a baffle at the focal point which blocks off-angle beams. A beam splitter placed between the SBS mirror system and the SBS relay telescope directs a fraction of the beam to an alternate beam path for an alignment fiducial. The SBS mirror system has a collimated SBS cell and a focused SBS cell. An adjustable attenuator is placed between the collimated SBS cell and the focused SBS cell, by which the pulse width of the reflected beam can be adjusted.

  8. Non-regularized inversion method from light scattering applied to ferrofluid magnetization curves for magnetic size distribution analysis

    International Nuclear Information System (INIS)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.

    2014-01-01

    A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The resulting computer program, MINORIM, is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • The method is implemented in the program MINORIM, freely available online.
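
The non-negative least squares step can be illustrated with a toy linear model in which a magnetization curve is a mixture of Langevin responses over a grid of candidate dipole moments. All numbers and the moment grid are illustrative assumptions, not MINORIM's actual input.

```python
import numpy as np
from scipy.optimize import nnls

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with the small-x limit x/3."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    safe = np.where(small, 1.0, x)
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

H = np.linspace(0.1, 10.0, 50)               # applied field (reduced units)
mu_grid = np.array([0.5, 1.0, 2.0, 4.0])     # candidate dipole moments
A = langevin(np.outer(H, mu_grid))           # one Langevin column per moment
true_w = np.array([0.0, 2.0, 0.0, 1.0])      # bimodal "size distribution"
M = A @ true_w                               # synthetic magnetization data

w, resid = nnls(A, M)                        # enforce positive densities
```

On noise-free synthetic data the bimodal weights are recovered; unlike regularized inversion, no smoothness or unimodality is imposed on the solution.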

  9. Wielandt method applied to the diffusion equations discretized by finite element nodal methods

    International Nuclear Information System (INIS)

    Mugica R, A.; Valle G, E. del

    2003-01-01

    Numerical methods for solving the diffusion equation involve algorithms and computer programs that are extensive, owing to the large number of routines and calculations to be carried out; this directly affects the execution times of these programs, with results obtained only after relatively long runs. This work shows the application of a method that accelerates the convergence of the classic power method, notably reducing the number of iterations needed to obtain reliable results, which in turn greatly reduces computing times. This method, known in the literature as the Wielandt method, has been incorporated into a computer program based on the discretization of the neutron diffusion equations in slab geometry and steady state by polynomial nodal methods. In this work the neutron diffusion equations are written for several energy groups, and their discretization by so-called physical nodal methods is described, the quadratic case being illustrated in particular. A model problem widely described in the literature is solved for the physical nodal schemes of degree 1, 2, 3 and 4 in three different ways: (a) with the classic power method, (b) with the power method with Wielandt acceleration, and (c) with the power method with modified Wielandt acceleration. Results are reported for the model problem as well as for two additional benchmark problems. This acceleration method can also be applied to problems with geometries other than the one proposed in this work, and its application can be extended to problems in two or three dimensions. (Author)
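
The gain from Wielandt (shifted inverse) acceleration over the classic power method can be seen on a toy eigenvalue problem. This is a generic numpy sketch under assumed names, not the nodal-diffusion implementation described above; production codes solve shifted linear systems rather than forming an explicit inverse.

```python
import numpy as np

def power_method(A, tol=1e-10, max_it=10000):
    """Classic power iteration: dominant eigenvalue by repeated products."""
    v, lam = np.ones(A.shape[0]), 0.0
    for k in range(1, max_it + 1):
        w = A @ v
        lam_new = np.linalg.norm(w)
        v = w / lam_new
        if abs(lam_new - lam) < tol:
            return lam_new, k
        lam = lam_new
    return lam, max_it

def wielandt_method(A, sigma, tol=1e-10):
    """Power iteration on (A - sigma*I)^-1; a shift near the dominant
    eigenvalue widens the eigenvalue separation and speeds convergence."""
    B = np.linalg.inv(A - sigma * np.eye(A.shape[0]))
    mu, k = power_method(B, tol)
    return sigma + 1.0 / mu, k        # map back to an eigenvalue of A

A = np.diag([4.0, 1.0, 0.5])          # toy problem with known answer 4
lam_p, it_p = power_method(A)
lam_w, it_w = wielandt_method(A, sigma=3.9)
```

With the shift at 3.9, the shifted-inverse iteration reaches the same eigenvalue in markedly fewer iterations than the unshifted power method.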

  10. Impact of Including Authentic Inquiry Experiences in Methods Courses for Pre-Service Secondary Teachers

    Science.gov (United States)

    Slater, T. F.; Elfring, L.; Novodvorsky, I.; Talanquer, V.; Quintenz, J.

    2007-12-01

    Science education reform documents universally call for students to have authentic and meaningful experiences using real data in the context of their science education. The underlying philosophical position is that students analyzing data can have experiences that mimic actual research. In short, research experiences that reflect the scientific spirit of inquiry potentially can: prepare students to address real-world complex problems; develop students' ability to use scientific methods; prepare students to critically evaluate the validity of data or evidence and of the consequent interpretations or conclusions; teach quantitative skills, technical methods, and scientific concepts; increase verbal, written, and graphical communication skills; and train students in the values and ethics of working with scientific data. However, it is unclear what the broader pre-service teacher preparation community is doing in preparing future teachers to promote, manage, and successfully facilitate their own students in conducting authentic scientific inquiry. Surveys of undergraduates in secondary science education programs suggest that students have had almost no experiences themselves in conducting open scientific inquiry where they develop researchable questions, design strategies to pursue evidence, and communicate data-based conclusions. In response, the College of Science Teacher Preparation Program at the University of Arizona requires all students enrolled in its various science teaching methods courses to complete an open inquiry research project and defend their findings at a specially designed inquiry science mini-conference at the end of the term. End-of-term surveys show that students enjoy their research experience and believe that this experience enhances their ability to facilitate their own future students in conducting open inquiry.

  11. What is the method in applying formal methods to PLC applications?

    NARCIS (Netherlands)

    Mader, Angelika H.; Engel, S.; Wupper, Hanno; Kowalewski, S.; Zaytoon, J.

    2000-01-01

    The question we investigate is how to obtain PLC applications with confidence in their proper functioning. Especially, we are interested in the contribution that formal methods can provide for their development. Our maxim is that the place of a particular formal method in the total picture of system

  12. Comparison of some biased estimation methods (including ordinary subset regression) in the linear model

    Science.gov (United States)

    Sidik, S. M.

    1975-01-01

    Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.

  13. Use of the potentiometric titration method to investigate heterogeneous systems including phosphorylated complexones

    International Nuclear Information System (INIS)

    Tereshin, G.S.; Kharitonova, L.K.; Kuznetsova, O.B.

    1979-01-01

    Heterogeneous systems Y(NO3)3 (YCl3)-HnL-KNO3 (KCl)-H2O are investigated by potentiometric titration (with coulometric generation of OH- ions). HnL is one of the following: oxyethylidenediphosphonic, aminobenzylidenediphosphonic, glycine-bis-methylphosphonic, nitrilotrimethylphosphonic (H6L), and ethylenediaminetetramethylphosphonic acids. The range of existence of YH(n-3)L·xH2O has been determined. The possibility of using potentiometric titration for investigating heterogeneous systems is demonstrated by the study of the system Y(NO3)3-H6L-KOH-H2O by the method of residual concentration. The two methods have shown that at pH below 6 the solid phase is YH3L·xH2O; at pH = 6, KYH2L·y'H2O; and at pH = 7, K2YHL·y''H2O. The complete solubility products of the nitrilotrimethylphosphonates are evaluated.

  14. A convolution method for predicting mean treatment dose including organ motion at imaging

    International Nuclear Information System (INIS)

    Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA

    2000-01-01

    Full text: The random treatment delivery errors (organ motion and set-up error) can be incorporated into the treatment planning software using a convolution method. Mean treatment dose is computed as the convolution of a static dose distribution with a variation kernel. Typically this variation kernel is Gaussian with variance equal to the sum of the organ motion and set-up error variances. We propose a novel variation kernel for the convolution technique that additionally considers the position of the mobile organ in the planning CT image. The systematic error of organ position in the planning CT image can be considered random for each patient over a population. Thus the variance of the variation kernel will equal the sum of treatment delivery variance and organ motion variance at planning for the population of treatments. The kernel is extended to deal with multiple pre-treatment CT scans to improve tumour localisation for planning. Mean treatment doses calculated with the convolution technique are compared to benchmark Monte Carlo (MC) computations. Calculations of mean treatment dose using the convolution technique agreed with MC results for all cases to better than ± 1 Gy in the planning treatment volume for a prescribed 60 Gy treatment. Convolution provides a quick method of incorporating random organ motion (captured in the planning CT image and during treatment delivery) and random set-up errors directly into the dose distribution. Copyright (2000) Australasian College of Physical Scientists and Engineers in Medicine
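
The core convolution step can be sketched in one dimension with a Gaussian blur whose variance is the sum of the set-up and organ-motion variances. The dose profile, sigma values, and grid (1 mm spacing) are illustrative assumptions, not data from the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

static_dose = np.zeros(101)
static_dose[40:61] = 60.0               # idealised flat 60 Gy field (1D)

sigma_setup, sigma_motion = 2.0, 3.0    # assumed standard deviations (mm)
sigma_total = np.sqrt(sigma_setup**2 + sigma_motion**2)  # variances add

# Mean treatment dose = static dose convolved with the variation kernel.
mean_dose = gaussian_filter(static_dose, sigma=sigma_total, mode="nearest")
```

The blur never raises the dose above the static maximum and, away from the grid boundary, conserves integral dose; it only smears the field edges, which is why the convolved profile can be compared directly against Monte Carlo mean doses.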

  15. Formal methods applied to industrial complex systems implementation of the B method

    CERN Document Server

    Boulanger, Jean-Louis

    2014-01-01

    This book presents real-world examples of formal techniques in an industrial context. It covers formal methods such as SCADE and/or the B Method, in various fields such as railways, aeronautics, and the automotive industry. The purpose of this book is to present a summary of experience on the use of "formal methods" (based on formal techniques such as proof, abstract interpretation and model-checking) in industrial examples of complex systems, based on the experience of people currently involved in the creation and assessment of safety critical system software. The involvement of people from

  16. Decoding Facial Esthetics to Recreate an Esthetic Hairline: A Method Which Includes Forehead Curvature.

    Science.gov (United States)

    Garg, Anil K; Garg, Seema

    2017-01-01

    The evidence suggests that our perception of physical beauty is based on how closely the features of one's face reflect phi (the golden ratio) in their proportions. By extension, it should be possible to use a mathematical parameter to design an anterior hairline in all faces. The objective was to establish a user-friendly method to design an anterior hairline in cases of male pattern alopecia. Only a flexible measuring tape and a skin marker are needed. A reference point A (glabella) is taken between the eyebrows. Point E is marked near the lateral canthus, 8 cm horizontally on either side from the central point A. A mid-frontal point (point B) is marked 8 cm from point A on the forehead in the mid-vertical plane. The frontotemporal points (C and C') are marked on the frontotemporal area, 8 cm in a horizontal plane from point B and 8 cm in a vertical plane from point E. The temporal peak points (D and D') are marked on the line joining the frontotemporal point C to the lateral canthus point E, slightly more than halfway toward the lateral canthus, usually 5 cm from the frontotemporal point C. This line forms the anterior border of the temporal triangle. We have conducted a study with 431 cases of male pattern alopecia. The average distance of the mid-frontal point from the glabella was 7.9 cm. The patient satisfaction reported was 94.7%. Our method gives a skeletal frame of the anterior hairline from minimal criteria, without relying on the surgeon's visual imagination or experience. It automatically accounts for the curvature of the forehead and is easy for a novice surgeon to use.

  17. Consensus for nonmelanoma skin cancer treatment: basal cell carcinoma, including a cost analysis of treatment methods.

    Science.gov (United States)

    Kauvar, Arielle N B; Cronin, Terrence; Roenigk, Randall; Hruza, George; Bennett, Richard

    2015-05-01

    Basal cell carcinoma (BCC) is the most common cancer in the US population, affecting approximately 2.8 million people per year. Basal cell carcinomas are usually slow-growing and rarely metastasize, but they do cause localized tissue destruction, compromised function, and cosmetic disfigurement. The objective was to provide clinicians with guidelines for the management of BCC, based on evidence from a comprehensive literature review and consensus among the authors. An extensive review of the medical literature was conducted to evaluate the optimal treatment methods for cutaneous BCC, taking into consideration cure rates, recurrence rates, aesthetic and functional outcomes, and cost-effectiveness of the procedures. Surgical approaches provide the best outcomes for BCCs. Mohs micrographic surgery provides the highest cure rates while maximizing tissue preservation, maintenance of function, and cosmesis. Mohs micrographic surgery is an efficient and cost-effective procedure and remains the treatment of choice for high-risk BCCs and for those in cosmetically sensitive locations. Nonsurgical modalities may be used for low-risk BCCs when surgery is contraindicated or impractical, but the cure rates are lower.

  18. Development method of Hybrid Energy Storage System, including PEM fuel cell and a battery

    International Nuclear Information System (INIS)

    Ustinov, A; Khayrullina, A; Khmelik, M; Sveshnikova, A; Borzenko, V

    2016-01-01

    Development of fuel cell (FC) and hydrogen metal-hydride storage (MH) technologies continuously demonstrates higher efficiency rates and higher safety, as hydrogen is stored in a bound state at low pressures of about 2 bar. Combining an FC/MH system with an electrolyser powered by a renewable source allows the creation of an almost fully autonomous power system, which could potentially replace a diesel generator as a back-up power supply. However, the system must be extended with an electrochemical battery to start up the FC and compensate the electric load when the FC fails to deliver the necessary power. The present paper delivers the results of an experimental and theoretical investigation of a hybrid energy system including a proton exchange membrane (PEM) FC, an MH accumulator and an electrochemical battery, a development methodology for such systems, and the modelling of different battery types using a hardware-in-the-loop approach. The economic efficiency of the proposed solution is discussed using the example of the power supply of the real town of Batamai in Russia. (paper)

  19. Development method of Hybrid Energy Storage System, including PEM fuel cell and a battery

    Science.gov (United States)

    Ustinov, A.; Khayrullina, A.; Borzenko, V.; Khmelik, M.; Sveshnikova, A.

    2016-09-01

    Development of fuel cell (FC) and hydrogen metal-hydride storage (MH) technologies continuously demonstrates higher efficiency rates and higher safety, as hydrogen is stored in a bound state at low pressures of about 2 bar. Combining an FC/MH system with an electrolyser powered by a renewable source allows the creation of an almost fully autonomous power system, which could potentially replace a diesel generator as a back-up power supply. However, the system must be extended with an electrochemical battery to start up the FC and compensate the electric load when the FC fails to deliver the necessary power. The present paper delivers the results of an experimental and theoretical investigation of a hybrid energy system including a proton exchange membrane (PEM) FC, an MH accumulator and an electrochemical battery, a development methodology for such systems, and the modelling of different battery types using a hardware-in-the-loop approach. The economic efficiency of the proposed solution is discussed using the example of the power supply of the real town of Batamai in Russia.

  20. A new clamp method for firing bricks | Obeng | Journal of Applied ...

    African Journals Online (AJOL)

    A new clamp method for firing bricks. ... Journal of Applied Science and Technology ... To overcome these operational deficiencies, a new method of firing bricks that uses a brick clamp technique that incorporates a clamp wall of 60 cm thickness, a six-tier approach of sealing the top of the clamp (by a combination of green bricks) ...

  1. A method to evaluate performance reliability of individual subjects in laboratory research applied to work settings.

    Science.gov (United States)

    1978-10-01

    This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...

  2. Determination methods for plutonium as applied in the field of reprocessing

    International Nuclear Information System (INIS)

    1983-07-01

    The papers presented report on Pu-determination methods, which are routinely applied in process control, and also on new developments which could supersede current methods either because they are more accurate or because they are simpler and faster. (orig./DG) [de]

  3. Water Permeability of Pervious Concrete Is Dependent on the Applied Pressure and Testing Methods

    Directory of Open Access Journals (Sweden)

    Yinghong Qin

    2015-01-01

    Full Text Available The falling head method (FHM) and the constant head method (CHM) are used to test the water permeability of pervious concrete, applying different water heads to the testing samples. The results indicate that the apparent permeability of pervious concrete decreases with the applied water head. The results also demonstrate that the permeability measured with the FHM is lower than that measured with the CHM. The fundamental difference between the CHM and FHM is examined from the theory of fluid flow through porous media. The testing results suggest that the water permeability of pervious concrete should be reported together with the applied pressure and the associated testing method.
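
The two laboratory configurations reduce to the standard Darcy-law formulas; the sketch below evaluates both with illustrative sample dimensions (all numbers assumed, not taken from the paper).

```python
import math

def k_constant_head(Q, L, A, h):
    """Constant head: k = Q*L / (A*h), with steady discharge Q."""
    return Q * L / (A * h)

def k_falling_head(a, L, A, t, h1, h2):
    """Falling head: k = (a*L)/(A*t) * ln(h1/h2), head falling h1 -> h2
    in time t through standpipe area a and sample area A."""
    return (a * L) / (A * t) * math.log(h1 / h2)

# Illustrative sample: 15 cm long, 9 cm x 9 cm cross-section (SI units).
k_chm = k_constant_head(Q=2e-4, L=0.15, A=0.0081, h=0.30)
k_fhm = k_falling_head(a=0.0081, L=0.15, A=0.0081, t=12.0, h1=0.30, h2=0.15)
```

Because the head (and hence the pore-scale flow regime) differs between the two set-ups, the two k values need not agree, which is the paper's point about reporting the applied pressure alongside the result.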

  4. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    Science.gov (United States)

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, quick-calculation methodology to obtain a solar panel model from the manufacturer's datasheet, in order to perform MPPT simulations, is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods, or prediction of their performance, when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day and under realistic ambient conditions.

  5. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    Directory of Open Access Journals (Sweden)

    Javier Cubas

    2015-01-01

    Full Text Available A new, simple, quick-calculation methodology to obtain a solar panel model from the manufacturer's datasheet, in order to perform MPPT simulations, is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods, or prediction of their performance, when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day and under realistic ambient conditions.
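
A minimal perturb-and-observe MPPT loop on a toy explicit I-V curve shows the kind of simulation such a panel model enables. The panel curve and all parameters here are placeholder assumptions, not the single-diode model fitted in the papers above.

```python
import numpy as np

def pv_current(v, isc=8.0, voc=37.0, n=3.0):
    """Placeholder I-V curve: near-constant current collapsing near Voc."""
    return isc * (1.0 - np.exp((v - voc) / n))

def perturb_and_observe(v0=20.0, step=0.2, iters=200):
    """Classic P&O: keep stepping the operating voltage while power rises,
    reverse the step direction whenever power falls."""
    v, p_prev, direction = v0, 0.0, 1.0
    for _ in range(iters):
        p = v * pv_current(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
        v += direction * step
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()   # settles near the maximum power point
```

Swapping in a datasheet-derived I(V) for `pv_current` is exactly the kind of comparison the methodology above enables; other MPPT schemes (incremental conductance, constant voltage) can be benchmarked the same way.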

  6. Particle generation methods applied in large-scale experiments on aerosol behaviour and source term studies

    International Nuclear Information System (INIS)

    Swiderska-Kowalczyk, M.; Gomez, F.J.; Martin, M.

    1997-01-01

    In aerosol research, aerosols of known size, shape, and density are highly desirable because most aerosol properties depend strongly on particle size. However, constant and reproducible generation of aerosol particles whose size and concentration can be easily controlled can be achieved only in laboratory-scale tests; in large-scale experiments, different generation methods for various elements and compounds have been applied. This work presents, in brief form, a review of applications of these methods in large-scale experiments on aerosol behaviour and source term. A description of the generation method and the transport conditions of the generated aerosol is followed by the properties of the obtained aerosol, the aerosol instrumentation used, and the scheme of the aerosol generation system, wherever available. Information concerning the particular purposes of aerosol generation and reference number(s) is given at the end of each case. The methods reviewed are: evaporation-condensation, using furnace heating or a plasma torch; atomization of liquid, using compressed-air nebulizers, ultrasonic nebulizers, or atomization of liquid suspension; and dispersion of powders. Among the projects included in this work are ACE, LACE, GE Experiments, EPRI Experiments, LACE-Spain, UKAEA Experiments, BNWL Experiments, ORNL Experiments, MARVIKEN, SPARTA and DEMONA. The main chemical compounds studied are: Ba, Cs, CsOH, CsI, Ni, Cr, NaI, TeO2, UO2, Al2O3, Al2SiO5, B2O3, Cd, CdO, Fe2O3, MnO, SiO2, AgO, SnO2, Te, U3O8, BaO, CsCl, CsNO3, urania, RuO2, TiO2, Al(OH)3, BaSO4, Eu2O3 and Sn. (Author)

  7. Non-Invasive Seismic Methods for Earthquake Site Classification Applied to Ontario Bridge Sites

    Science.gov (United States)

    Bilson Darko, A.; Molnar, S.; Sadrekarimi, A.

    2017-12-01

    How a site responds to earthquake shaking, and the corresponding damage, is largely influenced by the underlying ground conditions through which the seismic waves propagate. The effects of site conditions on propagating seismic waves can be predicted from measurements of the shear wave velocity (Vs) of the soil layer(s) and the impedance ratio between bedrock and soil. Currently, the seismic design of new buildings and bridges (2015 Canadian building and bridge codes) requires determination of the time-averaged shear-wave velocity of the upper 30 metres (Vs30) of a given site. In this study, two in situ Vs profiling methods, Multichannel Analysis of Surface Waves (MASW) and Ambient Vibration Array (AVA), are used to determine Vs30 at chosen bridge sites in Ontario, Canada. Both active-source (MASW) and passive-source (AVA) surface wave methods are used at each bridge site to obtain Rayleigh-wave phase velocities over a wide frequency bandwidth. The dispersion curve is jointly inverted with each site's amplification function (microtremor horizontal-to-vertical spectral ratio) to obtain shear-wave velocity profile(s). We apply our non-invasive testing at three major infrastructure projects, e.g., five bridge sites along the Rt. Hon. Herb Gray Parkway in Windsor, Ontario. Our non-invasive testing is co-located with previous invasive testing, including Standard Penetration Test (SPT), Cone Penetration Test, and downhole Vs data. Correlations between SPT blow count and Vs are developed for the different soil types sampled at our Ontario bridge sites. A robust earthquake site classification procedure (providing reliable Vs30 estimates) for bridge sites across Ontario is evaluated from the available combinations of invasive and non-invasive site characterization methods.
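
Vs30 itself is a simple travel-time average, sketched below for an assumed three-layer profile (the layer numbers are illustrative, not Ontario data).

```python
def vs30(thicknesses_m, velocities_ms):
    """Time-averaged shear-wave velocity of the upper 30 m:
    30 m divided by the total S-wave travel time through the layers."""
    depth, travel_time = 0.0, 0.0
    for h, vs in zip(thicknesses_m, velocities_ms):
        h_used = min(h, 30.0 - depth)   # clip the profile at 30 m depth
        if h_used <= 0.0:
            break
        travel_time += h_used / vs
        depth += h_used
    return depth / travel_time

# Assumed profile: 5 m at 180 m/s, 10 m at 300 m/s, then 600 m/s below.
v30 = vs30([5.0, 10.0, 25.0], [180.0, 300.0, 600.0])
```

Note that this travel-time (harmonic-style) average is pulled toward the slow surface layers: here v30 is about 348 m/s even though the deepest material is 600 m/s, which is why soft shallow soils dominate the code site class.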

  8. Design and fabrication of facial prostheses for cancer patient applying computer aided method and manufacturing (CADCAM)

    Science.gov (United States)

    Din, Tengku Noor Daimah Tengku; Jamayet, Nafij; Rajion, Zainul Ahmad; Luddin, Norhayati; Abdullah, Johari Yap; Abdullah, Abdul Manaf; Yahya, Suzana

    2016-12-01

    Facial defects are either congenital or caused by trauma or cancer, and most of them affect the person's appearance. Emotional pressure and low self-esteem are problems commonly associated with facial defects. To overcome these problems, a silicone prosthesis is designed to cover the defective part. This study describes the techniques used in designing and fabricating a facial prosthesis by applying computer aided design and manufacturing (CADCAM). The steps of fabricating the facial prosthesis were based on a patient case. The patient was diagnosed with Gorlin-Goltz syndrome and came to Hospital Universiti Sains Malaysia (HUSM) for a prosthesis. The 3D image of the patient was reconstructed from CT data using MIMICS software. Based on the 3D image, the intercanthal and zygomatic measurements of the patient were compared with available data in the database to find a suitable nose shape. The normal nose shape for the patient was retrieved from the nasal digital library. A mirror-imaging technique was used to mirror the facial part. The final design of the facial prosthesis, including eye, nose and cheek, was superimposed to preview the result virtually. After the final design was confirmed, the mould was designed. The mould of the nasal prosthesis was printed using an Objet 3D printer. Silicone casting was done using the 3D-printed mould. The final prosthesis produced by the computer aided method was acceptable for use in facial rehabilitation, providing better quality of life.

  9. Diamond difference method with hybrid angular quadrature applied to neutron transport problems

    International Nuclear Information System (INIS)

    Zani, Jose H.; Barros, Ricardo C.; Alves Filho, Hermes

    2005-01-01

    In this work we present the results of calculations of the disadvantage factor in thermal nuclear reactor physics. We use the one-group discrete ordinates (S_N) equations to mathematically model the flux distributions in slab lattices. We apply the diamond difference method with a source iteration scheme to numerically solve the discretized system of equations. Special interface conditions are used to describe the method with hybrid angular quadrature. We show numerical results to illustrate the accuracy of the hybrid method. (author)

  10. Proposal and Evaluation of Management Method for College Mechatronics Education Applying the Project Management

    Science.gov (United States)

    Ando, Yoshinobu; Eguchi, Yuya; Mizukawa, Makoto

    In this research, we proposed and evaluated a management method for college mechatronics education, applying project management techniques. We practiced our management method in the seminar "Microcomputer Seminar" for 3rd-grade students who belong to the Department of Electrical Engineering, Shibaura Institute of Technology. We succeeded in the management of the Microcomputer Seminar in 2006, and obtained a good evaluation of our management method by means of a questionnaire.

  11. Perspective for applying traditional and innovative teaching and learning methods to nurses' continuing education

    OpenAIRE

    Bendinskaitė, Irmina

    2015-01-01

    Bendinskaitė I. Perspective for applying traditional and innovative teaching and learning methods to nurse’s continuing education, magister thesis / supervisor Assoc. Prof. O. Riklikienė; Departament of Nursing and Care, Faculty of Nursing, Lithuanian University of Health Sciences. – Kaunas, 2015, – p. 92 The purpose of this study was to investigate traditional and innovative teaching and learning methods perspective to nurse’s continuing education. Material and methods. In a period fro...

  12. Cluster detection methods applied to the Upper Cape Cod cancer data

    Directory of Open Access Journals (Sweden)

    Ozonoff David

    2005-09-01

    Abstract Background A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. Methods We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. Results The three different latency assumptions produced three different spatial patterns of cases and controls. For a 20-year latency, all three methods generally concur. However, for the 15-year latency and no-latency assumptions, the methods produce different results when testing for global clustering. Conclusion The comparative analysis of real data sets by different statistical methods provides insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.
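Of the three statistics compared, Kulldorff's spatial scan statistic is the most algorithmically self-contained: each candidate window is scored by a Poisson likelihood ratio of observed versus expected cases, and the window with the highest score is the most likely cluster. A much-simplified 1-D sketch (contiguous windows instead of circles on a map; the counts are invented):

```python
import math

def poisson_llr(c, e, C):
    """Log-likelihood ratio of the Poisson scan statistic for a window
    with c observed and e expected cases, out of C total cases."""
    if c <= e:
        return 0.0              # only excess-risk windows are of interest
    inside = c * math.log(c / e)
    outside = 0.0 if c == C else (C - c) * math.log((C - c) / (C - e))
    return inside + outside

def scan_best_window(cases, expected):
    """Scan all contiguous windows of a 1-D study region (a simplification
    of the circular windows used on real maps) and return the highest-LLR
    window as (llr, (start, end))."""
    C = sum(cases)
    best = (0.0, None)
    n = len(cases)
    for i in range(n):
        c = e = 0.0
        for j in range(i, n):
            c += cases[j]
            e += expected[j]
            llr = poisson_llr(c, e, C)
            if llr > best[0]:
                best = (llr, (i, j))
    return best

llr, window = scan_best_window([2, 3, 12, 11, 2, 1], [5, 5, 5, 5, 5, 6])
print(window)
```

In the real statistic, significance is then assessed by Monte Carlo replication under the null.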

  13. Apparatus and method for applying an end plug to a fuel rod tube end

    International Nuclear Information System (INIS)

    Rieben, S.L.; Wylie, M.E.

    1987-01-01

    An apparatus is described for applying an end plug to a hollow end of a nuclear fuel rod tube, comprising: support means mounted for reciprocal movement between remote and adjacent positions relative to a nuclear fuel rod tube end to which an end plug is to be applied; guide means supported on the support means for movement; and drive means coupled to the support means and actuatable for movement between retracted and extended positions, for reciprocally moving the support means between its respective remote and adjacent positions. A method for applying an end plug to a hollow end of a nuclear fuel rod tube is also described.

  14. Method of levelized discounted costs applied in economic evaluation of nuclear power plant project

    International Nuclear Information System (INIS)

    Tian Li; Wang Yongqing; Liu Jingquan; Guo Jilin; Liu Wei

    2000-01-01

    The main methods of economic evaluation of bids that are in common use are introduced. The characteristics of the levelized discounted cost method and its application are presented. The method is applied to the cost calculation in the economic evaluation of a 200 MW nuclear heating reactor. The results indicate that the method of levelized discounted costs is simple and feasible, and is considered most suitable for the economic evaluation of a variety of cases. Its use in national economic evaluations is suggested.
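The levelized discounted cost used in such evaluations is typically the ratio of discounted lifetime costs to discounted lifetime output. A minimal sketch with invented cash flows (not data from the 200 MW reactor study):

```python
def levelized_cost(costs, outputs, rate):
    """Levelized discounted cost: discounted total costs divided by
    discounted total output over the project lifetime (years 1..T)."""
    disc_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs, start=1))
    disc_out = sum(e / (1 + rate) ** t for t, e in enumerate(outputs, start=1))
    return disc_cost / disc_out

# Illustrative 3-year project: heavy year-1 investment, output from year 2
print(round(levelized_cost([100, 50, 50], [0, 80, 80], 0.05), 3))
```

Because both numerator and denominator are discounted with the same rate, the result is the constant unit price that would exactly recover all costs over the lifetime.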

  15. Local regression type methods applied to the study of geophysics and high frequency financial data

    Science.gov (United States)

    Mariani, M. C.; Basu, K.

    2014-09-01

    In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed, showing that the estimation of the earthquake magnitude at a fixed location is accurate to within a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is much more desirable than the Loess method. Previous works studied these data with time series analysis; in this paper our local regression models perform a spatial analysis of the geophysics data, providing different information. For the high-frequency data, our models estimate the curve of best fit, where the data are dependent on time.
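Local regression of this kind fits a small weighted linear model around each evaluation point, with weights decaying with distance (commonly a tricube kernel). A minimal single-pass sketch, without the robustness iterations of full Lowess:

```python
import numpy as np

def lowess(x, y, frac=0.5):
    """Single-pass locally weighted linear regression with a tricube
    kernel over the frac*n nearest neighbours of each point."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(frac * n))               # points in each local window
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]             # k nearest neighbours
        h = d[idx].max() or 1.0             # local window half-width
        w = (1.0 - (d[idx] / h) ** 3) ** 3  # tricube weights
        sw = np.sqrt(w)                     # weighted least squares via sqrt(w)
        A = np.stack([np.ones(k), x[idx]], axis=1) * sw[:, None]
        beta, *_ = np.linalg.lstsq(A, y[idx] * sw, rcond=None)
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted

# On noiseless linear data the local fits recover the line exactly
x = np.linspace(0.0, 9.0, 10)
print(np.allclose(lowess(x, 2 * x + 1, frac=0.5), 2 * x + 1))
```

Production code would normally use an existing implementation (e.g. the lowess routine in statsmodels), which adds robustness weighting against outliers.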

  16. Additivity methods for prediction of thermochemical properties. The Laidler method revisited. 2. Hydrocarbons including substituted cyclic compounds

    International Nuclear Information System (INIS)

    Santos, Rui C.; Leal, Joao P.; Martinho Simoes, Jose A.

    2009-01-01

    A revised parameterization of the extended Laidler method for predicting standard molar enthalpies of atomization and standard molar enthalpies of formation at T = 298.15 K for several families of hydrocarbons (alkanes, alkenes, alkynes, polyenes, poly-ynes, cycloalkanes, substituted cycloalkanes, cycloalkenes, substituted cycloalkenes, benzene derivatives, and bi- and polyphenyls) is presented. Data for a total of 265 gas-phase and 242 liquid-phase compounds were used for the calculation of the parameters. Comparison of the experimental values with those obtained using the additive scheme led to an average absolute difference of 0.73 kJ·mol⁻¹ for the gas-phase standard molar enthalpy of formation and 0.79 kJ·mol⁻¹ for the liquid-phase standard molar enthalpy of formation. The database used to establish the parameters was carefully reviewed by using, whenever possible, the original publications. A worksheet to simplify the calculation of standard molar enthalpies of formation and standard molar enthalpies of atomization at T = 298.15 K based on the extended Laidler parameters defined in this paper is provided as supplementary material.
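Group-additivity schemes of this family reduce to a weighted sum of group parameters, ΔfH ≈ Σ n_i·p_i over the structural groups of the molecule. A sketch with placeholder parameter values (NOT the extended-Laidler parameters from the paper):

```python
# Illustrative group-additivity evaluation. The two parameter values below
# are placeholders chosen for the example, not published Laidler parameters.
PARAMS = {"C-(C)(H)3": -42.3, "C-(C)2(H)2": -20.6}  # kJ/mol, illustrative

def enthalpy_of_formation(group_counts, params=PARAMS):
    """Estimate a standard molar enthalpy of formation at 298.15 K as a
    sum of group contributions: sum over groups of count * parameter."""
    return sum(n * params[g] for g, n in group_counts.items())

# n-Butane decomposes into 2 CH3 and 2 CH2 groups
print(round(enthalpy_of_formation({"C-(C)(H)3": 2, "C-(C)2(H)2": 2}), 1))
```

The real scheme differs only in scale: many more group types, with parameters regressed against the 265 gas-phase and 242 liquid-phase reference compounds.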

  17. Method to detect substances in a body and device to apply the method

    International Nuclear Information System (INIS)

    Voigt, H.

    1978-01-01

    The method and the measuring setup serve to localize pellets doped with Gd2O3 lying between UO2 pellets within a reactor fuel rod. The fuel rod penetrates a homogeneous magnetic field generated between two pole shoes. The magnetic stray field caused by the doping substances is then measured by means of Hall probes (e.g. InAs) for quantitative discrimination from UO2. The position of the Gd2O3-doped pellets is determined by moving the fuel rod through the magnetic field in a direction perpendicular to the homogeneous field. The measuring signal is caused by the different susceptibility of Gd2O3 with respect to UO2. (orig.)

  18. Criticality analysis of thermal reactors for two energy groups applying Monte Carlo and neutron Albedo method

    International Nuclear Information System (INIS)

    Terra, Andre Miguel Barge Pontes Torres

    2005-01-01

    The Albedo method applied to criticality calculations for nuclear reactors is characterized by following the neutron currents, allowing detailed analyses of the physical phenomena involved in the interaction of neutrons with the core-reflector set through the determination of the probabilities of reflection, absorption, and transmission, and thus detailed assessments of the variation of the effective neutron multiplication factor, keff. In the present work, motivated by the excellent results presented in dissertations on thermal reactors and shielding, the Albedo methodology is described for the criticality analysis of thermal reactors using two energy groups, admitting variable core coefficients for each re-entrant current. Using the Monte Carlo KENO IV code, the relation between the total fraction of neutrons absorbed in the reactor core and the fraction of neutrons that never entered the reflector but were absorbed in the core was analyzed. As references for comparison and analysis of the results obtained by the Albedo method, the one-dimensional deterministic code ANISN (ANIsotropic SN transport code) and the diffusion method were used. The keff results determined by the Albedo method for the type of reactor analyzed showed excellent agreement: relative errors in keff were smaller than 0.78% with respect to the ANISN code and smaller than 0.35% with respect to the diffusion method, demonstrating the effectiveness of the Albedo method applied to criticality analysis. The ease of application, simplicity and clarity of the Albedo method make it a valuable instrument for neutronic calculations in nonmultiplying and multiplying media. (author)

  19. Applying electric field to charged and polar particles between metallic plates: extension of the Ewald method.

    Science.gov (United States)

    Takae, Kyohei; Onuki, Akira

    2013-09-28

    We develop an efficient Ewald method of molecular dynamics simulation for calculating the electrostatic interactions among charged and polar particles between parallel metallic plates, where we may apply an electric field of arbitrary magnitude. We use the fact that the potential from the surface charges is equivalent to the sum of those from image charges and dipoles located outside the cell. We present simulation results on boundary effects in charged and polar fluids, the formation of ionic crystals, and the formation of dipole chains, where the applied field and the image interaction are crucial. For polar fluids, we find a large deviation from the classical Lorentz-field relation between the local field and the applied field, due to pair correlations along the applied field. As general aspects, we clarify the difference between the potential-fixed and the charge-fixed boundary conditions and examine the relationship between the discrete particle description and continuum electrostatics.

  20. Non-invasive imaging methods applied to neo- and paleo-ontological cephalopod research

    Science.gov (United States)

    Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.

    2014-05-01

    Several non-invasive methods are common practice in the natural sciences today. Here we present how they can be applied to and contribute to current topics in cephalopod (paleo-)biology. Different methods are compared in terms of the time necessary to acquire the data, the amount of data, accuracy/resolution, the minimum/maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the methods is seen in morphometry and volumetry of cephalopod shells. In particular, we present a method for precise buoyancy calculation. To this end, cephalopod shells were scanned together with different reference bodies, an approach developed in the medical sciences. It is necessary to know the volume of the reference bodies, which should have absorption properties similar to those of the object of interest. Exact volumes can be obtained from surface scanning. Depending on the dimensions of the study object, different computed tomography techniques were applied.

  1. A mixed methods evaluation of team-based learning for applied pathophysiology in undergraduate nursing education.

    Science.gov (United States)

    Branney, Jonathan; Priego-Hernández, Jacqueline

    2018-02-01

    It is important for nurses to have a thorough understanding of the biosciences, such as pathophysiology, that underpin nursing care. These courses include content that can be difficult to learn. Team-based learning is emerging as a strategy for enhancing learning in nurse education due to its promotion of individual learning as well as learning in teams. In this study we sought to evaluate the use of team-based learning in the teaching of applied pathophysiology to undergraduate student nurses. A mixed methods observational study. In a year-two undergraduate nursing applied pathophysiology module, circulatory shock was taught using Team-based Learning while all remaining topics were taught using traditional lectures. After the Team-based Learning intervention, the students were invited to complete the Team-based Learning Student Assessment Instrument, which measures accountability, preference and satisfaction with Team-based Learning. Students were also invited to focus group discussions to gain a more thorough understanding of their experience with Team-based Learning. Exam scores for answers to questions based on Team-based Learning-taught material were compared with those from lecture-taught material. Of the 197 students enrolled on the module, 167 (85% response rate) returned the instrument, the results from which indicated a favourable experience with Team-based Learning. Most students reported higher accountability (93%) and satisfaction (92%) with Team-based Learning. Lectures that promoted active learning were viewed as an important feature of the university experience, which may explain the 76% exhibiting a preference for Team-based Learning. Most students wanted to make a meaningful contribution so as not to let their team down, and they saw a clear relevance between the Team-based Learning activities and their own experiences of teamwork in clinical practice.
Exam scores on the question related to Team-based Learning-taught material were comparable to those

  2. A nuclear-medical method applied for determining the choledochus diameter after cholecystectomy

    International Nuclear Information System (INIS)

    Wolf, M.

    1980-01-01

    54 patients (46 females, 8 males) who had undergone cholecystectomy at least 4 years earlier were followed up radiologically by infusion cholangiography and by nuclear medicine methods using quantitative hepatobiliary functional scintigraphy (HBFS). The ROI method applied for HBFS permits recording time/activity curves above the liver parenchyma (A) and the porta hepatis (B). By subtracting curve A from curve B, with the scale in which A is incorporated in B, a curve B' results, indicating the flow volume through the porta hepatis. The quotient Q = (maximum of the A contribution in B)/(maximum of B) indicates the portion of the liver parenchyma in the porta curve, and represents a measure for the total volume of the large bile ducts included in the region of the porta hepatis. The quantity (1-Q)/Q was related to the radiologically determined common bile duct diameters. Both quantities correlated well, with a correlation coefficient of r = -0.860. Thus, the choledochus diameter can be determined in a primarily functional examination with a precision of 2 mm, a degree which permits the detection of clinically relevant discharge malfunctions. It was not possible to detect peristalsis-dependent phenomena with a dosage of 4-5 mCi 99mTc-diethyl-IDA, an irradiation dose which was sufficient for answering the clinical questions and could be justified for the patients. (orig.)
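The curve arithmetic described (B' = B − s·A, with s the scale in which A is incorporated in B) can be sketched with synthetic curves; every shape and number below is invented for illustration, not patient data:

```python
import numpy as np

# Synthetic time/activity curves (arbitrary units): A is the liver-parenchyma
# ROI, B the porta ROI = scaled parenchyma contribution + bile-duct throughflow.
t = np.arange(60)                                    # minutes
A = (t / 15.0) * np.exp(-t / 15.0)                   # parenchyma curve (invented shape)
true_flow = np.clip(t - 10, 0, None) * np.exp(-t / 40.0) * 0.02
s_true = 0.4
B = s_true * A + true_flow

# Estimate the scale with which A is incorporated in B from the early
# frames, before any bile-duct throughflow appears, then subtract.
early = t < 10
s = float(np.sum(A[early] * B[early]) / np.sum(A[early] ** 2))
B_prime = B - s * A                                  # throughflow curve B'
Q = (s * A).max() / B.max()                          # parenchyma share of porta curve
print(round(s, 3))
```

With these synthetic curves the fitted scale recovers s_true and B' equals the throughflow component exactly; real data would require noise handling and a clinically justified choice of fitting window.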

  3. Applying terminological methods and description logic for creating and implementing and ontology on inhibition

    DEFF Research Database (Denmark)

    Zambach, Sine; Madsen, Bodil Nistrup

    2009-01-01

    By applying formal terminological methods to model an ontology within the domain of enzyme inhibition, we aim to clarify concepts and to obtain consistency. Additionally, we propose a procedure for implementing this ontology in OWL with the aim of obtaining a strict structure which can form...

  4. Method of applying single higher order polynomial basis function over multiple domains

    CSIR Research Space (South Africa)

    Lysko, AA

    2010-03-01

    A novel method has been devised whereby one set of higher-order polynomial-based basis functions can be applied over several wire segments, thus permitting the number of unknowns to be decoupled from the number of segments, and so from the geometrical...

  5. Applied probabilistic methods in the field of reactor safety in Germany

    International Nuclear Information System (INIS)

    Heuser, F.W.

    1982-01-01

    Some aspects of applied reliability and risk analysis methods in nuclear safety, and the present role of both in Germany, are discussed. First, some comments on the status and applications of reliability analysis are given. Second, some conclusions that can be drawn from previous work on the German Risk Study are summarized. (orig.)

  6. 21 CFR 111.320 - What requirements apply to laboratory methods for testing and examination?

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What requirements apply to laboratory methods for testing and examination? 111.320 Section 111.320 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CURRENT GOOD MANUFACTURING...

  7. Splendor and misery of the distorted wave method applied to heavy ions transfer reactions

    International Nuclear Information System (INIS)

    Mermaz, M.C.

    1979-01-01

    The successes and failures of the Distorted Wave Method (DWM) applied to heavy-ion transfer reactions are illustrated by a few examples: one- and multi-nucleon transfer reactions induced by 15N and 18O on a 28Si target nucleus, performed in the vicinity of the Coulomb barrier at 44 and 56 MeV incident energy, respectively.

  8. A nodal method applied to a diffusion problem with generalized coefficients

    International Nuclear Information System (INIS)

    Laazizi, A.; Guessous, N.

    1999-01-01

    In this paper, we consider a second-order neutron diffusion problem with coefficients in L∞(Ω). A nodal method of the lowest order is applied to approximate the problem's solution. The approximation uses special basis functions in which the coefficients appear. The rate of convergence obtained is O(h²) in L²(Ω), with a free rectangular triangulation. (authors)

  9. Trends in Research Methods in Applied Linguistics: China and the West.

    Science.gov (United States)

    Yihong, Gao; Lichun, Li; Jun, Lu

    2001-01-01

    Examines and compares current trends in applied linguistics (AL) research methods in China and the West. Reviews AL articles in four Chinese journals, from 1978-1997, and four English journals from 1985 to 1997. Articles are categorized and subcategorized. Results show that in China, AL research is heading from non-empirical toward empirical, with…

  10. Critical path method applied to research project planning: Fire Economics Evaluation System (FEES)

    Science.gov (United States)

    Earl B. Anderson; R. Stanton Hales

    1986-01-01

    The critical path method (CPM) of network analysis (a) depicts precedence among the many activities in a project by a network diagram; (b) identifies critical activities by calculating their starting, finishing, and float times; and (c) displays possible schedules by constructing time charts. CPM was applied to the development of the Forest Service's Fire...
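The forward/backward passes that yield the starting, finishing, and float times described in (b) can be sketched as follows (the task network is invented, not the FEES project network):

```python
def cpm(tasks):
    """Critical path method. tasks maps name -> (duration, [predecessors]).
    Returns name -> (earliest start, earliest finish, total float)."""
    # Forward pass: earliest start/finish, processing tasks whose
    # predecessors are all finished.
    es, ef = {}, {}
    remaining = dict(tasks)
    while remaining:
        for name, (dur, preds) in list(remaining.items()):
            if all(p in ef for p in preds):
                es[name] = max((ef[p] for p in preds), default=0)
                ef[name] = es[name] + dur
                del remaining[name]
    horizon = max(ef.values())
    # Backward pass: latest finish/start (successors always have a larger
    # earliest finish, so descending-EF order is a reverse topological order).
    lf, ls = {}, {}
    for name in sorted(tasks, key=lambda n: -ef[n]):
        succs = [s for s, (_, preds) in tasks.items() if name in preds]
        lf[name] = min((ls[s] for s in succs), default=horizon)
        ls[name] = lf[name] - tasks[name][0]
    return {n: (es[n], ef[n], ls[n] - es[n]) for n in tasks}

tasks = {"A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]), "D": (1, ["B", "C"])}
result = cpm(tasks)
critical = [n for n, (_, _, fl) in result.items() if fl == 0]
print(critical)
```

Tasks with zero total float form the critical path; delaying any of them delays the whole project, which is exactly what makes the schedule display in (c) useful.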

  11. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services.

    Science.gov (United States)

    Rajabi, A; Dabiri, A

    2012-01-01

    Activity Based Costing (ABC) is one of the new methods that began appearing as a costing methodology in the 1990's. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and inpatient. Second, activity centers were defined by the activity analysis method. Third, the costs of administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of cost objectives from the services of activity centers, the cost price of medical services was calculated. The cost price from the ABC method significantly differs from that of the tariff method. In addition, the high amount of indirect costs in the hospital indicates that the capacities of resources are not used properly. The cost price of remedial services calculated with the tariff method is inaccurate when compared with the ABC method. ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services.
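The allocation step described (administrative costs distributed to operating centres in proportion to a cost driver) can be sketched as follows; the figures are invented for illustration, not data from Shahid Faghihi Hospital:

```python
def abc_allocation(admin_costs, drivers, direct_costs):
    """Allocate a pool of administrative costs to operating centres in
    proportion to each centre's cost-driver volume, then return each
    centre's total (direct + allocated) cost."""
    total_driver = sum(drivers.values())
    allocated = {c: admin_costs * drivers[c] / total_driver for c in drivers}
    return {c: direct_costs[c] + allocated[c] for c in drivers}

# Illustrative numbers: 90,000 of administrative cost allocated by floor
# area (m^2, the cost driver here) to two operating centres.
totals = abc_allocation(
    admin_costs=90_000,
    drivers={"diagnostic": 300, "inpatient": 600},
    direct_costs={"diagnostic": 120_000, "inpatient": 400_000},
)
print(totals)
```

A full ABC model repeats this step for every activity centre with its own driver (staff hours, test counts, bed-days, ...) before dividing centre totals by service volumes to obtain unit cost prices.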

  12. The Global Survey Method Applied to Ground-level Cosmic Ray Measurements

    Science.gov (United States)

    Belov, A.; Eroshenko, E.; Yanke, V.; Oleneva, V.; Abunin, A.; Abunina, M.; Papaioannou, A.; Mavromichalaki, H.

    2018-04-01

    The global survey method (GSM) technique unites simultaneous ground-level observations of cosmic rays at different locations and allows us to obtain the main characteristics of cosmic-ray variations outside of the atmosphere and magnetosphere of Earth. This technique has been developed and applied in numerous studies over many years by the Institute of Terrestrial Magnetism, Ionosphere and Radiowave Propagation (IZMIRAN). We here describe the IZMIRAN version of the GSM in detail. With this technique, the hourly data of the world-wide neutron-monitor network from July 1957 until December 2016 were processed, and further processing is enabled upon the receipt of new data. The result is a database of homogeneous and continuous hourly characteristics of the density variations (an isotropic part of the intensity) and the 3D vector of the cosmic-ray anisotropy. It includes all of the effects that could be identified in galactic cosmic-ray variations caused by large-scale disturbances of the interplanetary medium over more than 50 years. These results in turn became the basis for a database on Forbush effects and interplanetary disturbances. This database makes it possible to correlate various space-environment parameters (the characteristics of the Sun, the solar wind, et cetera) with cosmic-ray parameters and to study their interrelations. We also present features of the coupling coefficients for different neutron monitors that enable us to make a connection from ground-level measurements to primary cosmic-ray variations outside the atmosphere and the magnetosphere. We discuss the strengths and weaknesses of the current version of the GSM as well as further possible developments and improvements. The method developed allows us to minimize the problems of the neutron-monitor network, which are typical for experimental physics, and to considerably enhance its advantages.

  13. An overview on applied methods in the FRG to investigate human factors in control rooms of nuclear power plants

    International Nuclear Information System (INIS)

    Thomas, D.B.

    1985-01-01

    In the first half of 1984, a feasibility study was carried out for the CSNI of the OECD/NEA on an inventory of methods for the analysis and evaluation of human factors in the control rooms of nuclear power plants. To enable an analysis of the methods, an elementary categorization of the methods into field studies, laboratory studies and theoretical studies was performed. A further differentiation of these categories was used as the basis for a critical analysis and interpretation of the methods employed in the research plans. In the following sections, an explanation is given of the method categories used and the plans included in the investigation. A short account is given of the breakdown of the applied methods into categories, and the results are analyzed. Implications for research programs are discussed. (orig./GL)

  14. A Simple and Useful Method to Apply Exogenous NO Gas to Plant Systems: Bell Pepper Fruits as a Model.

    Science.gov (United States)

    Palma, José M; Ruiz, Carmelo; Corpas, Francisco J

    2018-01-01

    Nitric oxide (NO) is involved in many physiological plant processes, including germination, growth and development of roots, flower setting and development, senescence, and fruit ripening. In the latter process, NO has been reported to play a role opposite to that of ethylene. Thus, treatment of fruits with NO may delay ripening, independently of whether they are climacteric or nonclimacteric. Different methods of applying NO to plant systems, involving sodium nitroprusside, NONOates, DETA/NO, or GSNO, have been reported for investigating the physiological and molecular consequences. In this chapter, a method to treat plant materials with NO gas is provided, using bell pepper fruits as a model. This method is cheap, free of side effects, and easy to apply, since it only requires common chemicals and tools available in any biology laboratory.

  15. Determination of activity of I-125 applying sum-peak methods

    International Nuclear Information System (INIS)

    Arbelo Penna, Y.; Hernandez Rivero, A.T.; Oropesa Verdecia, P.; Serra Aguila, R.; Moreno Leon, Y.

    2011-01-01

    The determination of the activity of I-125 in radioactive solutions by sum-peak methods, using an n-type HPGe detector of extended range, is described. Two procedures were used for obtaining the I-125 specific activity of the solutions: (a) an absolute method, which is independent of nuclear parameters and detector efficiency, and (b) an option which considers the efficiency constant in the region of interest and involves calculations using nuclear parameters. The measurement geometries studied are specifically solid point sources. The relative deviations between the specific activities obtained by these different procedures are not higher than 1%. Moreover, the activity of the radioactive solution was obtained by measuring it in a NIST ampoule using a CAPINTEC CRC 35R dose calibrator. The consistency of the obtained results confirms the feasibility of applying direct methods of measurement for I-125 activity determination, which allows lower uncertainties to be achieved in comparison with relative methods of measurement. These methods are intended to be applied to the calibration of the equipment and radionuclide dose calibrators currently used in clinical RIA/IRMA assays and nuclear medicine practice, respectively. (Author)
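For a nuclide emitting two coincident photons of similar energy, the classical sum-peak relation cancels the detection efficiency out of the activity estimate, which is what makes an absolute procedure like (a) possible without an efficiency calibration. A sketch with synthetic count rates (real data would also need background and dead-time corrections):

```python
def sum_peak_activity(n_single, n_sum):
    """Classical sum-peak activity estimate for a two-photon cascade:
    with efficiency eps, N1 = 2*A*eps*(1-eps) (singles peak) and
    N2 = A*eps**2 (sum peak), so A = (N1 + 2*N2)**2 / (4*N2) and eps
    cancels out of the result."""
    return (n_single + 2.0 * n_sum) ** 2 / (4.0 * n_sum)

# Consistency check with synthetic rates: A = 1000 Bq, eps = 0.2
A, eps = 1000.0, 0.2
n1, n2 = 2 * A * eps * (1 - eps), A * eps ** 2
print(sum_peak_activity(n1, n2))
```

This is the generic textbook relation, not necessarily the exact peak model of the paper; for I-125 the closely spaced 27 keV X-rays and 35 keV gamma rays are what produce the single and sum peaks.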

  16. An applied study using systems engineering methods to prioritize green systems options

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sonya M [Los Alamos National Laboratory; Macdonald, John M [Los Alamos National Laboratory

    2009-01-01

    For many years, there have been questions about the effectiveness of applying different green solutions. If you're building a home and wish to use green technologies, where do you start? While all technologies sound promising, which will perform best over time? All this has to be considered within the cost and schedule of the project. The amount of information available on the topic can be overwhelming. We seek to examine whether Systems Engineering methods can be used to help people choose and prioritize technologies that fit within their project and budget. Several methods are used to gain perspective on how to select green technologies, such as the Analytic Hierarchy Process (AHP) and Kepner-Tregoe analysis. In our study, subjects applied these methods to analyze cost, schedule, and trade-offs. Results will document whether the experimental approach is applicable to defining system priorities for green technologies.
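The AHP step mentioned reduces each set of pairwise comparisons to a priority vector, conventionally the principal eigenvector of the comparison matrix. A sketch with an invented comparison of three green options:

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority vector of an AHP pairwise-comparison matrix: the principal
    eigenvector, normalised to sum to 1."""
    M = np.asarray(pairwise, float)
    vals, vecs = np.linalg.eig(M)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

# Illustrative comparison of three options (e.g. solar, insulation, HVAC):
# entry (i, j) states how strongly option i is preferred to option j,
# with reciprocal entries below the diagonal.
M = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w = ahp_priorities(M)
print(np.round(w, 3))
```

A full AHP study would also check the consistency ratio of each matrix and combine criterion-level and alternative-level priority vectors into overall scores.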

  17. Economic consequences assessment for scenarios and actual accidents do the same methods apply

    International Nuclear Information System (INIS)

    Brenot, J.

    1991-01-01

    Methods for estimating the economic consequences of major technological accidents, and their corresponding computer codes, are briefly presented, with emphasis on the basic choices. When applied to hypothetical scenarios, these methods give results that are of interest to risk managers from a decision-aiding perspective. The various costs, and the procedures for their estimation, are also reviewed for some actual accidents (Three Mile Island, Chernobyl, ...). These costs are used in a perspective of litigation and compensation. A comparison of the methods used and the cost estimates obtained for scenarios and actual accidents shows the points of convergence and the discrepancies, which are discussed.

  18. Non-invasive imaging methods applied to neo- and paleontological cephalopod research

    Science.gov (United States)

    Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.

    2013-11-01

    Several non-invasive methods are common practice in the natural sciences today. Here we present how they can be applied to, and contribute to, current topics in cephalopod (paleo-)biology. The different methods are compared in terms of the time needed to acquire the data, the amount of data produced, accuracy/resolution, the minimum and maximum size of objects that can be studied, the degree of post-processing required, and availability. The main application of these methods is in the morphometry and volumetry of cephalopod shells, with the aim of improving our understanding of the diversity and disparity, functional morphology, and biology of extinct and extant cephalopods.

  19. Covariance methodology applied to 35S disintegration rate measurements by the CIEMAT/NIST method

    International Nuclear Information System (INIS)

    Koskinas, M.F.; Nascimento, T.S.; Yamazaki, I.M.; Dias, M.S.

    2014-01-01

    The Nuclear Metrology Laboratory (LMN) at IPEN is carrying out measurements in an LSC (Liquid Scintillation Counting) system, applying the CIEMAT/NIST method. In this context, 35S is an important radionuclide for medical applications, and it is difficult to standardize by other primary methods because of its low beta-ray energy. The CIEMAT/NIST method is a standard technique used by most metrology laboratories to improve accuracy and speed up beta-emitter standardization. The focus of the present work was to apply the covariance methodology for determining the overall uncertainty in the 35S disintegration rate. All partial uncertainties involved in the measurements were considered, taking into account all possible correlations between each pair of them. - Highlights: ► 35S disintegration rate measured in a Liquid Scintillation system using the CIEMAT/NIST method. ► Covariance methodology applied to the overall uncertainty in the 35S disintegration rate. ► Monte Carlo simulation was applied to determine 35S activity in the 4πβ(PC)-γ coincidence system.
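The covariance methodology the record refers to is the law of propagation of uncertainty with off-diagonal covariance terms. A minimal sketch, with illustrative numbers and a hypothetical model N = C/(eps·m) standing in for the paper's actual evaluation:

```python
import numpy as np

# Illustrative only (not the paper's data): disintegration rate
# N = C / (eps * m) from counting rate C, efficiency eps, and aliquot mass m,
# with an assumed correlation between C and eps standing in for shared
# quantities in a real CIEMAT/NIST evaluation.
C, eps, m = 5000.0, 0.85, 0.010
u = np.array([20.0, 0.005, 1.0e-5])   # standard uncertainties of (C, eps, m)
rho = 0.3                              # assumed C-eps correlation

# Covariance matrix of the input quantities.
V = np.diag(u**2)
V[0, 1] = V[1, 0] = rho * u[0] * u[1]

N = C / (eps * m)

# Jacobian of N with respect to (C, eps, m).
J = np.array([1/(eps*m), -C/(eps**2 * m), -C/(eps * m**2)])

# Law of propagation of uncertainty with covariances: u_N^2 = J V J^T.
uN = float(np.sqrt(J @ V @ J))
print(N, uN)
```

Because the partial derivatives with respect to C and eps have opposite signs, the positive correlation here reduces the combined uncertainty relative to simple quadrature, which is exactly the effect that ignoring covariances would miss.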

  20. Power System Oscillation Modes Identifications: Guidelines for Applying TLS-ESPRIT Method

    Science.gov (United States)

    Gajjar, Gopal R.; Soman, Shreevardhan

    2013-05-01

    Fast measurements of power system quantities, available through wide-area measurement systems, enable direct observation of power system electromechanical oscillations. However, the raw observation data need to be processed to obtain the quantitative measures required to make any inference regarding the power system state. A detailed discussion is presented of the theory behind the general problem of oscillatory mode identification. This paper presents some results on oscillation mode identification applied to a wide-area frequency measurement system. Guidelines are provided for the selection of parameters that yield the most reliable results from the applied method. Finally, some results on real measurements are presented, together with our inferences from them.
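The core of an ESPRIT-type mode identification can be sketched on synthetic data. The mode values below are illustrative, and the total-least-squares refinement of TLS-ESPRIT is replaced here by an ordinary least-squares solve for brevity:

```python
import numpy as np

# Synthetic "ringdown": one damped complex mode standing in for an
# inter-area electromechanical oscillation (0.5 Hz, -0.1 1/s damping).
dt = 0.05
n = np.arange(200)
f_true, sigma_true = 0.5, -0.1
x = np.exp((sigma_true + 2j * np.pi * f_true) * n * dt)

# Hankel data matrix and SVD: one mode => one-dimensional signal subspace.
L = 50
H = np.array([x[i:i + L] for i in range(len(x) - L)])
U, s, Vh = np.linalg.svd(H, full_matrices=False)
Us = U[:, :1]

# ESPRIT shift invariance: rows 2..end of Us equal rows 1..end-1 times Psi.
Psi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
z = np.linalg.eigvals(Psi)[0]

# Eigenvalue z = exp((sigma + j*2*pi*f) * dt) carries frequency and damping.
f_est = np.angle(z) / (2 * np.pi * dt)
sigma_est = np.log(np.abs(z)) / dt
print(f_est, sigma_est)
```

On real multi-mode measurements the signal subspace has one column per mode and the choice of model order and window length L becomes exactly the kind of parameter-selection question the paper's guidelines address.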

  1. Multigrid method applied to the solution of an elliptic, generalized eigenvalue problem

    Energy Technology Data Exchange (ETDEWEB)

    Alchalabi, R.M. [BOC Group, Murray Hill, NJ (United States); Turinsky, P.J. [North Carolina State Univ., Raleigh, NC (United States)

    1996-12-31

    The work presented in this paper is concerned with the development of an efficient MG (multigrid) algorithm for the solution of an elliptic, generalized eigenvalue problem. The algorithm is specifically applied to the multigroup neutron diffusion equation, which is discretized using the Nodal Expansion Method (NEM). The underlying relaxation method is the Power Method, also known as the Outer-Inner Method. The inner iterations are completed using Multi-color Line SOR, and the outer iterations are accelerated using the Chebyshev Semi-iterative Method. Furthermore, the MG algorithm utilizes the consistent homogenization concept to construct the restriction operator, and a form function as the prolongation operator. The MG algorithm was integrated into the reactor neutronic analysis code NESTLE, and numerical results were obtained from solving production-type benchmark problems.
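The outer (power) iteration underlying this scheme can be sketched on a tiny generalized eigenvalue problem. The matrices below are an illustrative stand-in, not NESTLE's discretization:

```python
import numpy as np

# Toy stand-in for the discretized problem M*phi = (1/k)*F*phi, with M a
# small "diffusion" operator and F a fission source.
M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
F = np.diag([1.2, 1.5, 1.2])

# Outer (power) iteration on A = M^{-1} F; in a production code the inner
# solve M x = F phi would itself be iterative (e.g. multi-color line SOR).
phi = np.ones(3)
k = 0.0
for _ in range(100):
    y = np.linalg.solve(M, F @ phi)   # inner solve (direct here for brevity)
    k = phi @ y / (phi @ phi)         # Rayleigh-type estimate of k
    phi = y / np.linalg.norm(y)

print(k, phi)
```

At convergence, M⁻¹F·phi = k·phi, i.e. phi is the fundamental mode and k the dominant eigenvalue; Chebyshev acceleration and multigrid, as in the paper, speed up exactly this slowly converging fixed point.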

  2. Least Square NUFFT Methods Applied to 2D and 3D Radially Encoded MR Image Reconstruction

    Science.gov (United States)

    Song, Jiayu; Liu, Qing H.; Gewalt, Sally L.; Cofer, Gary; Johnson, G. Allan

    2009-01-01

    Radially encoded MR imaging (MRI) has gained increasing attention in applications such as hyperpolarized gas imaging, contrast-enhanced MR angiography, and dynamic imaging, due to its motion insensitivity and improved artifact properties. However, since the technique collects k-space samples nonuniformly, multidimensional (especially 3D) radially sampled MRI image reconstruction is challenging. The balance between reconstruction accuracy and speed becomes critical when a large data set is processed. Kaiser-Bessel gridding reconstruction has been widely used for non-Cartesian reconstruction. The objective of this work is to provide an alternative reconstruction option in high dimensions with on-the-fly kernel calculation. The work develops general multi-dimensional least square nonuniform fast Fourier transform (LS-NUFFT) algorithms and incorporates them into a k-space simulation and image reconstruction framework. The method is then applied to reconstruct radially encoded k-space data, although it addresses general nonuniformity and is applicable to any non-Cartesian pattern. Performance assessments are made by comparing the LS-NUFFT-based method with the conventional Kaiser-Bessel gridding method for 2D and 3D radially encoded computer-simulated phantoms and physically scanned phantoms. The results show that the LS-NUFFT reconstruction method has better accuracy-speed efficiency than the Kaiser-Bessel gridding method when the kernel weights are calculated on the fly. The accuracy of the LS-NUFFT method depends on the choice of scaling factor, and it is found that for a particular conventional kernel function, using its corresponding deapodization function as the scaling factor within the LS-NUFFT framework has the potential to improve accuracy. When a cosine scaling factor is used, in particular, the LS-NUFFT method is faster than the Kaiser-Bessel gridding method because of a quasi-closed-form solution.
The method is successfully applied to 2D and
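The underlying problem can be illustrated in 1D: recovering a signal from nonuniformly placed Fourier samples is a least-squares inverse problem. The dense solve below is only a minimal sketch; the paper's LS-NUFFT approximates it with compact interpolation kernels plus an FFT, and the sizes and sampling pattern here are illustrative:

```python
import numpy as np

# Minimal 1D illustration of least-squares reconstruction from nonuniformly
# sampled "k-space" (sizes and sampling are illustrative).
rng = np.random.default_rng(0)
N = 32
x_true = np.zeros(N)
x_true[10:14] = 1.0                     # a small boxcar "image"

# Nonuniform sample locations in cycles/sample, e.g. radial-like coverage.
k = np.sort(rng.uniform(-0.5, 0.5, 3 * N))

# Forward nonuniform DFT matrix: s = A @ x.
n = np.arange(N)
A = np.exp(-2j * np.pi * np.outer(k, n))
s = A @ x_true

# Least-squares reconstruction of the image from the nonuniform samples.
x_rec, *_ = np.linalg.lstsq(A, s, rcond=None)
err = np.linalg.norm(x_rec.real - x_true) / np.linalg.norm(x_true)
print(err)
```

The dense matrix makes this O(MN) per solve and is why, at MRI sizes, the fast approximations (gridding, NUFFT) that the paper compares are needed.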

  3. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Science.gov (United States)

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
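The kind of derivative-free parameter search the paper makes tractable can be illustrated on a toy objective. The model and parameter below are hypothetical, not the ACT-R model:

```python
import math

# Toy illustration (not the ACT-R model): find the decision-noise parameter s
# of a hypothetical choice model whose success probability is logistic in 1/s,
# so that it matches a target success rate, via golden-section search.
def objective(s, target=0.80):
    p = 1.0 / (1.0 + math.exp(-1.0 / s))   # model's predicted success rate
    return (p - target) ** 2

def golden_section(f, a, b, tol=1e-8):
    """Derivative-free minimization of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2.0

s_opt = golden_section(objective, 0.1, 10.0)
print(s_opt)   # analytically, s* = 1/ln(4), about 0.721
```

Derivative-based methods, as the paper suggests, converge faster still once the model has a smooth mathematical reformulation.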

  4. An implementation of the diagnosis method DYANA, applied to a combined heat-power device

    Energy Technology Data Exchange (ETDEWEB)

    Van der Neut, F.

    1993-10-01

    The development and implementation of the monitoring-and-diagnosis method DYANA is presented. The implementation is applied to and tested on a combined heat and power generating device (CHP). The steps taken in realizing this implementation are evaluated in detail. In chapter two, the theory behind DYANA is recapitulated. Attention is paid to the basic theory of diagnosis, and the steps on the path from this theory to the DYANA algorithm are described. These steps include the hierarchical approach and explain the following features of DYANA: a) the use of best-first dynamic model zooming based on heuristics with respect to parsimony of the number of components within the diagnoses; b) the use of consistency of fault models with observations to focus on the most likely diagnoses; and c) the use of online diagnosis: the current set of diagnoses is incrementally updated after a new observation of the system is made. In chapter three, the relevant aspects of the system to be diagnosed, the CHP, are dealt with in detail. The broad workings of the CHP are explained, its hierarchical structure and mathematical representation are given, observation of the CHP is discussed, and some possible forms of fault models are stated. In chapter four, the pseudocode of the implementation developed for DYANA is presented. The pseudocode consists of two parts: the monitoring process (using numerical simulation) and the diagnostic process. The differences between the pseudocode and the actual implementation are mentioned. The CHP is then monitored and diagnosed with this algorithm, and the results of this test are given in chapter five. An actual implementation of DYANA can be found in a separately supplied appendix, the Programme Appendix. The implementation of the monitoring process is meant only for this example of the CHP. The code for the diagnostic process can be easily adjusted for diagnosing other devices, such as electronic circuits.
The language is Pascal.

  5. Teaching Methods in Biology Education and Sustainability Education Including Outdoor Education for Promoting Sustainability—A Literature Review

    Directory of Open Access Journals (Sweden)

    Eila Jeronen

    2016-12-01

    There are very few studies concerning the importance of teaching methods in biology education and environmental education including outdoor education for promoting sustainability at the levels of primary and secondary schools and pre-service teacher education. The material was selected using special keywords from biology and sustainable education in several scientific databases. The article provides an overview of 24 selected articles published in peer-reviewed scientific journals from 2006–2016. The data was analyzed using qualitative content analysis. Altogether, 16 journals were selected and 24 articles were analyzed in detail. The foci of the analyses were teaching methods, learning environments, knowledge and thinking skills, psychomotor skills, emotions and attitudes, and evaluation methods. Additionally, features of good methods were investigated and their implications for teaching were emphasized. In total, 22 different teaching methods were found to improve sustainability education in different ways. The most emphasized teaching methods were those in which students worked in groups and participated actively in learning processes. Research points toward the value of teaching methods that provide a good introduction and supportive guidelines and include active participation and interactivity.

  6. Applied ecosystem analysis - a primer; the ecosystem diagnosis and treatment method

    International Nuclear Information System (INIS)

    Lestelle, L.C.; Mobrand, L.E.; Lichatowich, J.A.; Vogel, T.S.

    1996-05-01

    The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual.

  7. An Ultrasonic Guided Wave Method to Estimate Applied Biaxial Loads (Preprint)

    Science.gov (United States)

    2011-11-01

    A fatigue test was performed with an array of six surface-bonded PZT transducers on a 6061 aluminum plate, as shown in Figure 4. The specimen... direct paths of propagation are oriented at different angles. This method is applied to experimental sparse array data recorded during a fatigue test... and the additional complication of the resulting fatigue cracks interfering with some of the direct arrivals is addressed via proper selection of

  8. Accuracy of the Adomian decomposition method applied to the Lorenz system

    International Nuclear Information System (INIS)

    Hashim, I.; Noorani, M.S.M.; Ahmad, R.; Bakar, S.A.; Ismail, E.S.; Zakaria, A.M.

    2006-01-01

    In this paper, the Adomian decomposition method (ADM) is applied to the famous Lorenz system. The ADM yields an analytical solution in terms of a rapidly convergent infinite power series with easily computable terms. Comparisons between the decomposition solutions and fourth-order Runge-Kutta (RK4) numerical solutions are made for various time steps. In particular, we examine the accuracy of the ADM as the Lorenz system changes from a non-chaotic system to a chaotic one.
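A comparison of the paper's kind can be sketched directly: for the Lorenz system's polynomial nonlinearities, the recursively generated power-series terms coincide with the Adomian decomposition terms (a standard observation, not a claim about the paper's derivation), so the truncated series can be checked against RK4 at a short time:

```python
import numpy as np

# Power-series (ADM-equivalent) solution of the Lorenz system.
s_, r_, b_ = 10.0, 28.0, 8.0/3.0
x0, y0, z0 = 1.0, 1.0, 1.0
K = 30                                   # number of series terms

x, y, z = [x0], [y0], [z0]
for k in range(K):
    cxz = sum(x[i] * z[k - i] for i in range(k + 1))   # Cauchy product (x*z)_k
    cxy = sum(x[i] * y[k - i] for i in range(k + 1))   # Cauchy product (x*y)_k
    x.append(s_ * (y[k] - x[k]) / (k + 1))
    y.append((r_ * x[k] - y[k] - cxz) / (k + 1))
    z.append((cxy - b_ * z[k]) / (k + 1))

def series(t):
    return tuple(sum(c[k] * t**k for k in range(K + 1)) for c in (x, y, z))

def rk4(t, nsteps=2000):
    f = lambda u: np.array([s_ * (u[1] - u[0]),
                            r_ * u[0] - u[1] - u[0] * u[2],
                            u[0] * u[1] - b_ * u[2]])
    u, h = np.array([x0, y0, z0]), t / nsteps
    for _ in range(nsteps):
        k1 = f(u); k2 = f(u + h/2 * k1); k3 = f(u + h/2 * k2); k4 = f(u + h * k3)
        u = u + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return u

print(series(0.05), rk4(0.05))
```

The truncated series is accurate only within its (finite) convergence window, which is why, for chaotic parameter values, papers such as this apply the method over short subintervals and restart.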

  9. Applying the Delphi method to assess impacts of forest management on biodiversity and habitat preservation

    DEFF Research Database (Denmark)

    Filyushkina, Anna; Strange, Niels; Löf, Magnus

    2018-01-01

    This study applied a structured expert elicitation technique, the Delphi method, to identify the impacts of five forest management alternatives and several forest characteristics on the preservation of biodiversity and habitats in the boreal zone of the Nordic countries. The panel of experts...... as a valuable addition to on-going empirical and modeling efforts. The findings could assist forest managers in developing forest management strategies that generate benefits from timber production while taking into account the trade-offs with biodiversity goals....

  10. Modified Method of Simplest Equation Applied to the Nonlinear Schrödinger Equation

    Directory of Open Access Journals (Sweden)

    Vitanov Nikolay K.

    2018-03-01

    We consider an extension of the methodology of the modified method of simplest equation to the case of use of two simplest equations. The extended methodology is applied for obtaining exact solutions of model nonlinear partial differential equations for deep water waves: the nonlinear Schrödinger equation. It is shown that the methodology works also for other equations of the nonlinear Schrödinger kind.

  11. Modified Method of Simplest Equation Applied to the Nonlinear Schrödinger Equation

    Science.gov (United States)

    Vitanov, Nikolay K.; Dimitrova, Zlatinka I.

    2018-03-01

    We consider an extension of the methodology of the modified method of simplest equation to the case of use of two simplest equations. The extended methodology is applied for obtaining exact solutions of model nonlinear partial differential equations for deep water waves: the nonlinear Schrödinger equation. It is shown that the methodology works also for other equations of the nonlinear Schrödinger kind.

  12. Applied Ecosystem Analysis - - a Primer : EDT the Ecosystem Diagnosis and Treatment Method.

    Energy Technology Data Exchange (ETDEWEB)

    Lestelle, Lawrence C.; Mobrand, Lars E.

    1996-05-01

    The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual.

  13. The LTSN method used in transport equation, applied in nuclear engineering problems

    International Nuclear Information System (INIS)

    Borges, Volnei; Vilhena, Marco Tulio de

    2002-01-01

    The LTSN method solves the SN equations analytically by applying the Laplace transform in the spatial variable. This methodology is used to determine the scalar flux for neutrons and photons, the absorbed dose rate, buildup factors, and power for a heterogeneous planar slab. The procedure leads to the solution of transcendental equations for the effective multiplication factor, the critical thickness, and the atomic density. In this work numerical results are reported for a multigroup problem in a heterogeneous slab. (author)

  14. Machine Learning Method Applied in Readout System of Superheated Droplet Detector

    Science.gov (United States)

    Liu, Yi; Sullivan, Clair Julia; d'Errico, Francesco

    2017-07-01

    Direct readability is one advantage of superheated droplet detectors in neutron dosimetry. Exploiting this distinct characteristic, an imaging readout system analyzes images of the detector for neutron dose readout. To improve the accuracy and precision of the algorithms in the imaging readout system, machine learning algorithms were developed. Deep learning neural network and support vector machine algorithms were applied and compared with the generally used Hough transform and curvature analysis methods. The machine learning methods showed much higher accuracy and better precision in recognizing circular gas bubbles.

  15. Development of a tracking method for augmented reality applied to nuclear plant maintenance work

    International Nuclear Information System (INIS)

    Shimoda, Hiroshi; Maeshima, Masayuki; Nakai, Toshinori; Bian, Zhiqiang; Ishii, Hirotake; Yoshikawa, Hidekazu

    2005-01-01

    In this paper, a plant maintenance support method is described which employs a state-of-the-art information technology, Augmented Reality (AR), in order to improve the efficiency of NPP maintenance work and to prevent human error. Although AR has great potential to support various tasks in the real world, it is difficult to apply it to actual work support because the tracking method is the bottleneck for practical use. In this study, a bar code marker tracking method is proposed to apply an AR system to maintenance work support in the NPP field. The proposed method calculates the user's position and orientation in real time from two long markers captured by the user-mounted camera. The markers can be easily pasted on pipes in the plant field and can be recognized at long distances, which reduces the number of markers that must be pasted in the work field. Experiments were conducted in a laboratory and in the plant field to evaluate the proposed method. The results show that (1) fast and stable tracking can be realized, (2) the position error in the camera view is less than 1%, which is almost perfect given the limitation of camera resolution, and (3) it is relatively difficult to capture two markers in one camera view, especially at short distances.

  16. Teaching Methods in Biology Education and Sustainability Education Including Outdoor Education for Promoting Sustainability--A Literature Review

    Science.gov (United States)

    Jeronen, Eila; Palmberg, Irmeli; Yli-Panula, Eija

    2017-01-01

    There are very few studies concerning the importance of teaching methods in biology education and environmental education including outdoor education for promoting sustainability at the levels of primary and secondary schools and pre-service teacher education. The material was selected using special keywords from biology and sustainable education…

  17. Method for assessment of stormwater treatment facilities - Synthetic road runoff addition including micro-pollutants and tracer.

    Science.gov (United States)

    Cederkvist, Karin; Jensen, Marina B; Holm, Peter E

    2017-08-01

    Stormwater treatment facilities (STFs) are becoming increasingly widespread, but knowledge of their performance is limited. This is due to difficulties in obtaining representative samples during storm events and in documenting removal of the broad range of contaminants found in stormwater runoff. This paper presents a method to evaluate STFs by addition of synthetic runoff with representative concentrations of contaminant species, including the use of a tracer to correct removal rates for losses not caused by the STF. A list of organic and inorganic contaminant species, including trace elements, representative of road runoff is suggested, as well as relevant concentration ranges. The method was used for adding contaminants to three different STFs: a curbstone extension with filter soil, a dual porosity filter, and six different permeable pavements. Evaluation of the method showed that it is possible to add a well-defined mixture of contaminants under different field conditions by keeping the system flexible, mixing different stock solutions on site, and using a bromide tracer for correction of outlet concentrations. Bromide recovery ranged from only 12% in one of the permeable pavements to 97% in the dual porosity filter, stressing the importance of including a conservative tracer for correction of contaminant retention values. The method is considered useful in future treatment performance testing of STFs. The observed performance of the STFs is presented in coming papers. Copyright © 2017 Elsevier Ltd. All rights reserved.
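The tracer-correction idea can be sketched with a small calculation. The correction formula and the numbers below are illustrative assumptions, not taken from the paper: outlet concentrations are scaled by the conservative-tracer recovery so that water losses are not counted as treatment.

```python
# Sketch of tracer correction (formula and numbers are illustrative).
def corrected_removal(c_in, c_out, br_in, br_out):
    """Removal efficiency with the outlet concentration corrected by
    the recovery of a conservative tracer (bromide)."""
    recovery = br_out / br_in            # tracer recovery, ideally 1.0
    c_out_corrected = c_out / recovery   # outlet level absent non-STF losses
    return 1.0 - c_out_corrected / c_in

# Example: a contaminant at 50 ug/L in, 5 ug/L out, but only 80% bromide
# recovery. Uncorrected removal would be 90%; correcting for the 20% loss
# attributes less of the decrease to actual treatment.
eff = corrected_removal(c_in=50.0, c_out=5.0, br_in=10.0, br_out=8.0)
print(eff)
```

Here the corrected removal is 87.5% rather than the uncorrected 90%, which is the direction of adjustment the low bromide recoveries in the paper would force.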

  18. Applying the response matrix method for solving coupled neutron diffusion and transport problems

    International Nuclear Information System (INIS)

    Sibiya, G.S.

    1980-01-01

    The numerical determination of the flux and power distribution in the design of large power reactors is quite a time-consuming procedure if the space under consideration is to be subdivided into very fine meshes. Many computing methods applied in reactor physics (such as the finite-difference method) require considerable computing time. In this thesis it is shown that the response matrix method can be successfully used as an alternative approach to solving the two-dimensional diffusion equation. Furthermore, it is shown that sufficient accuracy is achieved by assuming a linear space dependence of the neutron currents on the boundaries of the geometries defined for the given space. (orig.)

  19. Method to integrate clinical guidelines into the electronic health record (EHR) by applying the archetypes approach.

    Science.gov (United States)

    Garcia, Diego; Moro, Claudia Maria Cabral; Cicogna, Paulo Eduardo; Carvalho, Deborah Ribeiro

    2013-01-01

    Clinical guidelines are documents that assist healthcare professionals, facilitating and standardizing diagnosis, management, and treatment in specific areas. Computerized guidelines, as decision support systems (DSS), attempt to increase the performance of tasks and facilitate the use of guidelines. Most DSS are not integrated into the electronic health record (EHR), requiring some degree of rework, especially related to data collection. This study's objective was to present a method for integrating clinical guidelines into the EHR. The study first developed a way to identify the data and rules contained in the guidelines, and then incorporated the rules into an archetype-based EHR. The proposed method was tested on anemia treatment in the Chronic Kidney Disease Guideline. The phases of the method are: data and rules identification; archetype elaboration; rule definition and inclusion in an inference engine; and DSS-EHR integration and validation. The main feature of the proposed method is that it is generic and can be applied to any type of guideline.

  20. Lessons learned applying CASE methods/tools to Ada software development projects

    Science.gov (United States)

    Blumberg, Maurice H.; Randall, Richard L.

    1993-01-01

    This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.

  1. Principles of cobalt-60 teletherapy including an introduction to the compendium. Guidelines for the documentation of radiation treatment methods

    International Nuclear Information System (INIS)

    Cohen, M.

    1984-01-01

    A great deal of thought has been given in recent years to the documentation of individual patients and their diseases, especially since the computerization of registry systems facilitates the storage and retrieval of large amounts of data, but the documentation of radiation treatment methods has received surprisingly little attention. The guidelines which follow are intended for use both internally (within radiotherapy centres) and externally, when a treatment method is reported in the literature or transferred from one centre to another. The amount of detail reported externally will, of course, depend on the circumstances: for example, a published paper will usually mention only the most important of the radiation and physical parameters, but it is important for the department of origin to list all parameters in a separate document, available on request. These guidelines apply specifically to the documentation of treatment by external radiation beams, although many of the suggestions would also apply to treatment by small sealed sources (brachytherapy) and by unsealed radionuclides. Treatment techniques which involve a combination of external and internal sources (e.g. Ca. cervix uteri treated by intracavitary sources plus external beam therapy) require particularly careful documentation to indicate the relationship between the dose distributions (in both space and time) achieved by the two modalities.

  2. Applying some methods to process the data coming from the nuclear reactions

    International Nuclear Information System (INIS)

    Suleymanov, M.K.; Abdinov, O.B.; Belashev, B.Z.

    2010-01-01

    Methods for a posteriori enhancement of spectral-line resolution are proposed for processing data coming from nuclear reactions. The methods have been applied to data from nuclear reactions at high energies and make it possible to obtain more detailed information on the structure of the spectra of particles emitted in these reactions. Nuclear reactions are the main source of information on the structure and physics of atomic nuclei. The spectra of reaction fragments are usually complex, and it is not simple to extract the information needed for an investigation. In this talk we discuss methods for a posteriori enhancement of spectral-line resolution that could be useful for processing complex data from nuclear reactions: the Fourier transformation method and the maximum entropy method. Complex structures were identified by these methods; at least two selected points are indicated. Recently we presented a talk showing the results of analyzing the structure of the pseudorapidity spectra of charged relativistic particles with ≥ 0.7 measured in Au+Em and Pb+Em reactions at AGS and SPS energies using the Fourier transformation method and the maximum entropy method. The dependences of these spectra on the number of fast target protons were studied. The distributions visually show a plateau and a shoulder, i.e., at least three selected points; the plateaus become wider in Pb+Em reactions. The existence of a plateau is required by parton models. The maximum entropy method could confirm the existence of the plateau and the shoulder on the distributions. The figure shows the results of applying the maximum entropy method. One can see that the method indicates several clearly selected points, some of which coincide with those observed visually. We would like to note that the Fourier transformation method could not

  3. Intestinal colic in newborn babies: incidence and methods of proceeding applied by parents

    Directory of Open Access Journals (Sweden)

    Anna Lewandowska

    2017-06-01

    Introduction: Intestinal colic is one of the more frequent complaints that general practitioners and paediatricians deal with in their work. It affects 10-40% of formula-fed and 10-20% of breast-fed babies. A colic attack appears suddenly and very quickly causes an energetic, squeaky cry or even a scream. Colic attacks last for a few minutes and appear every 2-3 hours, usually in the evenings. Specialist literature provides numerous definitions of intestinal colic; the concept was first introduced into paediatric textbooks over 250 years ago. One of the most accurate definitions describes colic as recurring attacks of intensive cry and anxiety lasting for more than 3 hours a day, 3 days a week, for at least 3 weeks. Care of a baby suffering from intestinal colic causes numerous problems and anxiety among parents; therefore, knowledge of effective methods to combat this complaint is a challenge for contemporary neonatology and paediatrics. The aim of the study is to estimate the incidence of intestinal colic in formula-fed and breast-fed newborn babies, as well as to assess the methods of proceeding applied by parents and analyze their effectiveness. Material and methods: The research involved 100 newborn babies breast fed and 100 formula fed, and their parents. The research method applied in the study was a diagnostic survey conducted using a questionnaire. Results: Among the examined newborn babies that were breast fed, 43% had experienced intestinal colic, while among those formula fed, 30% had suffered from it. The study involved 44% newborn female babies and 56% male babies. 52% of mothers were 30-34 years old, 30% were 35-39 years old, and 17% were 25-29 years old. When it comes to families, the most numerous was the group in a good financial situation (60%); the second was the group in an average financial situation (40%). All the respondents claimed that they had knowledge of intestinal colic, and the main source of knowledge

  4. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Directory of Open Access Journals (Sweden)

    Lorenza Scotti

    2015-12-01

    Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event 'signals' for marketed drugs. A common limitation of these systems is the wide number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods aimed at adjusting for the multiple testing problem are needed when at least some of the drug-outcome relationships under study are known. To this aim we applied a robust estimation method for the FDR (rFDR), particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated from the conventional disproportionality measures was compared with the number remaining after application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, 1 was classified as a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are known, multiple comparisons methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.
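The multiple-comparisons mechanics at issue can be illustrated with the standard Benjamini-Hochberg FDR procedure. The paper's rFDR is a robust variant; BH is used below only to show how signal screening at a given FDR level works, on hypothetical p-values:

```python
# Benjamini-Hochberg step-up procedure (illustrative; the paper's rFDR
# estimator is a different, robust variant).
def benjamini_hochberg(pvals, q=0.05):
    """Return indices of p-values declared significant at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k_max = rank            # largest rank passing the BH threshold
    return sorted(order[:k_max])

# Ten hypothetical drug-event p-values: a few strong signals among noise.
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.36]
print(benjamini_hochberg(p, q=0.05))
```

Here only the two smallest p-values survive, even though five pairs would pass an unadjusted 0.05 cutoff, which is exactly the pruning effect the paper questions when some associations are already known.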

  5. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    Energy Technology Data Exchange (ETDEWEB)

    Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica

    2015-07-01

    In this work, we present a solution of the neutron point kinetics equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, the delayed neutron precursor concentrations and the temperature as power series, treating the reactivity as an arbitrary function of time in a relatively short interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytic continuation is used to determine the solutions of the subsequent intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness of the equations. One can then vary the time step size of the method and analyze the resulting precision and computational time. Moreover, we compare linear, quadratic and cubic truncations of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
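    For the simplest configuration, one delayed-neutron group, constant reactivity and no temperature feedback, the power-series recurrence and interval-by-interval analytic continuation can be sketched as follows (all parameter values are illustrative, not taken from the paper):

```python
# Sketch of the polynomial (power-series) approach for point kinetics with one
# delayed-neutron group, constant reactivity, and no temperature feedback.
# Kinetics parameters below are illustrative, not taken from the paper.

def step(n, c, rho, beta, lam, Lam, h, order=8):
    """Advance (n, C) over one interval of length h by a truncated power series."""
    a, b = [n], [c]                     # Taylor coefficients of n(t) and C(t)
    for k in range(order):
        # recurrences from n' = ((rho-beta)/Lam) n + lam C,  C' = (beta/Lam) n - lam C
        a.append(((rho - beta) / Lam * a[k] + lam * b[k]) / (k + 1))
        b.append((beta / Lam * a[k] - lam * b[k]) / (k + 1))
    n_new = sum(ak * h**k for k, ak in enumerate(a))   # evaluate series at t = h
    c_new = sum(bk * h**k for k, bk in enumerate(b))
    return n_new, c_new

beta, lam, Lam = 0.0065, 0.08, 1e-4     # illustrative kinetics parameters
rho = 0.003                             # constant (sub-prompt-critical) reactivity
n, c = 1.0, beta / (lam * Lam)          # start from precursor equilibrium
for _ in range(100):                    # analytic continuation, interval by interval
    n, c = step(n, c, rho, beta, lam, Lam, h=0.001)
print(n)   # neutron density after 0.1 s
```

    The stiffness is hidden in the prompt rate constant (rho - beta)/Lam (here -35 per second); the short intervals plus the truncated series keep the recurrence stable without an implicit solver.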

  6. Power secant method applied to natural frequency extraction of Timoshenko beam structures

    Directory of Open Access Journals (Sweden)

    C.A.N. Dias

    Full Text Available This work deals with an improved plane frame formulation whose exact dynamic stiffness matrix (DSM presents, uniquely, null determinant for the natural frequencies. In comparison with the classical DSM, the formulation herein presented has some major advantages: local mode shapes are preserved in the formulation so that, for any positive frequency, the DSM will never be ill-conditioned; in the absence of poles, it is possible to employ the secant method in order to have a more computationally efficient eigenvalue extraction procedure. Applying the procedure to the more general case of Timoshenko beams, we introduce a new technique, named "power deflation", that makes the secant method suitable for the transcendental nonlinear eigenvalue problems based on the improved DSM. In order to avoid overflow occurrences that can hinder the secant method iterations, limiting frequencies are formulated, with scaling also applied to the eigenvalue problem. Comparisons with results available in the literature demonstrate the strength of the proposed method. Computational efficiency is compared with solutions obtained both by FEM and by the Wittrick-Williams algorithm.
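    The secant iteration at the core of the procedure can be illustrated on a classical transcendental frequency equation, cos(x)cosh(x) + 1 = 0 for a clamped-free Euler-Bernoulli beam, rather than the paper's Timoshenko DSM determinant:

```python
import math

# Secant-method kernel of such frequency extraction, demonstrated on the
# classical clamped-free Euler-Bernoulli frequency equation rather than the
# paper's Timoshenko dynamic stiffness matrix determinant.

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Find a root of f by secant iteration starting from x0, x1."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                       # flat secant: cannot proceed
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

f = lambda x: math.cos(x) * math.cosh(x) + 1.0
root = secant(f, 1.5, 2.0)
print(root)  # first dimensionless natural frequency parameter, about 1.87510
```

    In the paper's setting, f would be the determinant of the improved DSM as a function of frequency; the formulation's guarantee of a null determinant exactly at the natural frequencies is what makes this simple iteration applicable.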

  7. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    International Nuclear Information System (INIS)

    Tumelero, Fernanda; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana

    2015-01-01

    In this work, we present a solution of the neutron point kinetics equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, the delayed neutron precursor concentrations and the temperature as power series, treating the reactivity as an arbitrary function of time in a relatively short interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytic continuation is used to determine the solutions of the subsequent intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness of the equations. One can then vary the time step size of the method and analyze the resulting precision and computational time. Moreover, we compare linear, quadratic and cubic truncations of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)

  8. A methodological framework applied to the choice of the best method in replacement of nuclear systems

    International Nuclear Information System (INIS)

    Vianna Filho, Alfredo Marques

    2009-01-01

    The economic equipment replacement problem is a central question in nuclear engineering. On the one hand, new equipment is more attractive given its better performance, better reliability, lower maintenance cost, etc.; it requires, however, a higher initial investment. On the other hand, old equipment represents the opposite situation, with lower performance, lower reliability and especially higher maintenance costs, but in contrast lower financial and insurance costs. The weighting of all these costs can be made with deterministic and probabilistic methods applied to the study of equipment replacement. Two distinct types of problem are examined: replacement imposed by wear and replacement imposed by failures. Deterministic methods are discussed for the problem of nuclear system replacement imposed by wear, and probabilistic methods for replacement imposed by failures. The aim of this paper is to present a methodological framework for choosing the most suitable method for a given nuclear system replacement problem. (author)
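    One classical deterministic criterion for replacement under wear, of the kind surveyed here, is to pick the replacement age that minimizes the equivalent annual cost (EAC); the sketch below uses invented cost figures:

```python
# Deterministic replacement under wear: choose the replacement age minimizing
# the equivalent annual cost (EAC). All figures below are invented.

def eac(capital, maintenance, rate):
    """EAC of replacing after n years, for n = 1..len(maintenance)."""
    out = []
    for n in range(1, len(maintenance) + 1):
        # present value of buying the unit and maintaining it for n years
        pv = capital + sum(maintenance[k] / (1 + rate) ** (k + 1) for k in range(n))
        # annuity factor: present value of 1 per year for n years
        annuity = (1 - (1 + rate) ** -n) / rate
        out.append(pv / annuity)
    return out

# rising maintenance costs model the wear of an aging unit
costs = eac(capital=100.0, maintenance=[5, 8, 13, 21, 34, 55], rate=0.08)
best_age = min(range(len(costs)), key=costs.__getitem__) + 1
print(best_age, round(costs[best_age - 1], 2))  # optimal age 5, EAC about 40.17
```

    The EAC curve is U-shaped: short cycles repay the capital too often, long cycles accumulate the growing maintenance bill, and the optimum sits where the two effects balance.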

  9. Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows

    Science.gov (United States)

    Zwick, David; Hackl, Jason; Balachandar, S.

    2017-11-01

    Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that solid particles are dispersed in a continuous fluid phase. A common technique for simulating such flows is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on the larger problem sizes typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply them to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulating larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.

  10. Relativistic convergent close-coupling method applied to electron scattering from mercury

    International Nuclear Information System (INIS)

    Bostock, Christopher J.; Fursa, Dmitry V.; Bray, Igor

    2010-01-01

    We report on the extension of the recently formulated relativistic convergent close-coupling (RCCC) method to accommodate two-electron and quasi-two-electron targets. We apply the theory to electron scattering from mercury and obtain differential and integrated cross sections for elastic and inelastic scattering. We compare with previous nonrelativistic convergent close-coupling (CCC) calculations and, for a number of transitions, obtain significantly better agreement with experiment. The RCCC method is able to resolve structure in the integrated cross sections in the energy regime near the excitation thresholds of the (6s6p) ³P₀,₁,₂ states. These cross sections are associated with the formation of negative-ion (Hg⁻) resonances that could not be resolved with the nonrelativistic CCC method. The RCCC results are compared with experiment and with other relativistic theories.

  11. A reflective lens: applying critical systems thinking and visual methods to ecohealth research.

    Science.gov (United States)

    Cleland, Deborah; Wyborn, Carina

    2010-12-01

    Critical systems methodology has been advocated as an effective and ethical way to engage with the uncertainty and conflicting values common to ecohealth problems. We use two contrasting case studies, coral reef management in the Philippines and national park management in Australia, to illustrate the value of critical systems approaches in exploring how people respond to environmental threats to their physical and spiritual well-being. In both cases, we used visual methods--participatory modeling and rich picturing, respectively. The critical systems methodology, with its emphasis on reflection, guided an appraisal of the research process. A discussion of these two case studies suggests that visual methods can be usefully applied within a critical systems framework to offer new insights into ecohealth issues across a diverse range of socio-political contexts. With this article, we hope to open up a conversation with other practitioners to expand the use of visual methods in integrated research.

  12. The reduction method of statistic scale applied to study of climatic change

    International Nuclear Information System (INIS)

    Bernal Suarez, Nestor Ricardo; Molina Lizcano, Alicia; Martinez Collantes, Jorge; Pabon Jose Daniel

    2000-01-01

    In climate change studies, the general circulation models of the atmosphere (GCMAs) enable one to simulate the global climate, with the field variables represented on grid points about 300 km apart. Of particular interest is the simulation of possible changes in rainfall and surface air temperature due to an assumed increase of greenhouse gases. However, the models yield climatic projections on grid points that in most cases do not correspond to the sites of major interest. To achieve local estimates of the climatological variables, methods such as statistical downscaling are applied. In this article we show a case in point by applying canonical correlation analysis (CCA) to the Guajira Region in the northeast of Colombia.

  13. ADVANTAGES AND DISADVANTAGES OF APPLYING EVOLVED METHODS IN MANAGEMENT ACCOUNTING PRACTICE

    Directory of Open Access Journals (Sweden)

    SABOU FELICIA

    2014-05-01

    Full Text Available The evolved methods of management accounting have been developed to remove the disadvantages of the classical methods: they are adapted to new market conditions and provide much more useful cost-related information, so that the management of the company is able to take strategic decisions. Of the evolved methods, the most widely used is the standard-cost method, owing to the advantages it presents; it is widely used for calculating production costs in a number of developed countries. Its main advantages are: knowledge of the production costs in advance, together with measures that ensure compliance with them; systematic control over costs through the deviations calculated from the standard costs, allowing decisions to be made in due time to eliminate deviations and improve the activity; and its use as a method of analysis, control and cost forecasting. Although the advantages of using standards are significant, the standard-cost method has a few disadvantages: difficulties can sometimes appear in establishing the deviations from the standard costs, and the method does not allow an accurate calculation of fixed costs. As a result of the study, we observe that the evolved methods of management accounting, compared with the classical ones, present a series of advantages linked to better analysis, control and forecasting of costs, whereas their main disadvantage is the large amount of work necessary to apply them.

  15. The Fractional Step Method Applied to Simulations of Natural Convective Flows

    Science.gov (United States)

    Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has commonly been applied to high Reynolds number flow simulations, but is less common for low Reynolds number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing non-coupled solution of the pressure and the velocity components. It has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method, which requires direct matrix solvers due to the penalty term. The Fractional Step Method allows iterative solution of the finite element stiffness matrices, and thereby more efficient solution of the matrices. It also lends itself to parallel processing, since the velocity component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in such castings. In particular, the simulations predict the existence of 'channels' within the processing mushy zone and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy under-going directional solidification, or thermo-solutal convection, will be explained. The

  16. Resonating group method as applied to the spectroscopy of α-transfer reactions

    Science.gov (United States)

    Subbotin, V. B.; Semjonov, V. M.; Gridnev, K. A.; Hefter, E. F.

    1983-10-01

    In the conventional approach to α-transfer reactions, the finite- and/or zero-range distorted-wave Born approximation is used in liaison with a macroscopic description of the captured α particle in the residual nucleus. Here the specific example of 16O(6Li,d)20Ne reactions at different projectile energies is taken to present a microscopic resonating group method analysis of the α particle in the final nucleus (for the reaction part, the simple zero-range distorted-wave Born approximation is employed). In the discussion of suitable nucleon-nucleon interactions, force number one of the effective interactions presented by Volkov is shown to be most appropriate for the system considered. Application of the continuous analog of Newton's method to the evaluation of the resonating group method equations yields increased accuracy with respect to traditional methods. The resonating group method description induces only minor changes in the structures of the angular distributions, but it does serve its purpose in yielding reliable and consistent spectroscopic information. NUCLEAR STRUCTURE 16O(6Li,d)20Ne; E=20 to 32 MeV; calculated B(E2), reduced widths, dσ/dΩ; extracted α-spectroscopic factors. ZRDWBA with microscopic RGM description of residual α particle in 20Ne; application of continuous analog of Newton's method; tested and applied Volkov force No. 1; direct mechanism.

  17. The Inverse System Method Applied to the Derivation of Power System Non—linear Control Laws

    Institute of Scientific and Technical Information of China (English)

    Donghai LI; Xuezhi JIANG; et al.

    1997-01-01

    The differential geometric method has been applied effectively to a series of power system non-linear control problems. However, a set of differential equations must be solved to obtain the required diffeomorphic transformation, so the derivation of control laws is very complicated. In fact, because of the specific structure of power system models, the required diffeomorphic transformation may be obtained directly, making it unnecessary to solve a set of differential equations. In addition, the inverse system method is in reality equivalent to the differential geometric method and is not limited to affine nonlinear systems; its physical meaning can be viewed directly, and its deduction requires only algebraic operations and differentiation, so control laws can be obtained easily and applied conveniently in engineering. The authors take steam valving control of a power system as a typical case study and demonstrate that the control law deduced by the inverse system method is exactly the same as the one obtained by the differential geometric method. This conclusion simplifies the derivation of control laws for steam valving, excitation, converters and static var compensators relative to the differential geometric method, and may suit similar control problems in other areas.

  18. Comparison of Heuristic Methods Applied for Optimal Operation of Water Resources

    Directory of Open Access Journals (Sweden)

    Alireza Borhani Dariane

    2009-01-01

    Full Text Available Water resources optimization problems are usually complex and hard to solve with ordinary optimization methods, or at least not economically efficient to solve that way. A great number of studies have been conducted in search of methods capable of handling such problems. In recent years, heuristic methods such as genetic and ant algorithms have been introduced in systems engineering. Preliminary applications of these methods to water resources problems have shown that some of them are powerful tools, capable of solving complex problems. In this paper, the application of heuristic methods such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) is studied for optimizing reservoir operation. The Dez Dam reservoir in Iran was chosen as a case study. The methods were applied and compared using short-term (one year) and long-term models. Comparison of the results showed that GA outperforms both DP (dynamic programming) and ACO in finding true global optimum solutions and operating rules.
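    A toy version of the GA approach to reservoir operation can be sketched as follows; the Dez Dam inflows, demands, and constraints are not given in the abstract, so all figures below are invented stand-ins:

```python
import random

# Minimal genetic algorithm for a toy reservoir-operation problem, in the
# spirit of the study. The real Dez Dam model is not shown in the abstract,
# so inflows, demands, and storage limits below are illustrative stand-ins.

random.seed(1)
INFLOW = [60, 55, 40, 30, 20, 15, 10, 12, 25, 45, 55, 60]   # monthly inflows
DEMAND = [30, 30, 35, 40, 45, 50, 50, 45, 35, 30, 30, 30]   # monthly demands
S0, S_MAX = 100.0, 200.0                                     # initial/max storage

def cost(releases):
    """Squared supply deficit plus penalties for violating storage bounds."""
    s, c = S0, 0.0
    for r, q, d in zip(releases, INFLOW, DEMAND):
        s = s + q - r                       # mass balance
        c += (d - r) ** 2 if r < d else 0.0
        if s < 0 or s > S_MAX:
            c += 1e4 + abs(s)               # infeasibility penalty
            s = min(max(s, 0.0), S_MAX)
    return c

def ga(pop_size=40, gens=200, pm=0.2):
    pop = [[random.uniform(0, 60) for _ in range(12)] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(gens):
        nxt = [best[:]]                              # elitism keeps the incumbent
        while len(nxt) < pop_size:
            a, b = (min(random.sample(pop, 3), key=cost) for _ in range(2))
            cut = random.randrange(1, 12)            # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(12):                      # Gaussian mutation
                if random.random() < pm:
                    child[i] = min(60.0, max(0.0, child[i] + random.gauss(0, 5)))
            nxt.append(child)
        pop = nxt
        best = min(pop, key=cost)
    return best

best = ga()
print(round(cost(best), 1))
```

    Releasing exactly the demand each month happens to be feasible for these invented figures (zero cost), so the GA's job is to discover a near-demand, storage-feasible schedule from random starting policies.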

  19. Methods of applying the 1994 case definition of chronic fatigue syndrome - impact on classification and observed illness characteristics.

    Science.gov (United States)

    Unger, E R; Lin, J-M S; Tian, H; Gurbaxani, B M; Boneva, R S; Jones, J F

    2016-01-01

    Multiple case definitions are in use to identify chronic fatigue syndrome (CFS). Even when the same definition is used, the methods used to apply its criteria may affect results. The Centers for Disease Control and Prevention (CDC) conducted two population-based studies estimating CFS prevalence using the 1994 case definition; one relied on direct questions for the criteria of fatigue, functional impairment and symptoms (1997 Wichita; Method 1), and the other used subscale score thresholds of standardized questionnaires for the criteria (2004 Georgia; Method 2). Compared with previous reports, the 2004 CFS prevalence estimate was higher, raising the question of whether the change in how the criteria were operationalized affected the estimate and the observed illness characteristics. The follow-up of the Georgia cohort allowed direct comparison of both methods of applying the 1994 case definition. Of 1961 participants (53% of those eligible) who completed the detailed telephone interview, 919 (47%) were eligible for clinical evaluation and 751 (81%) underwent it, including medical/psychiatric evaluations. Data from the 499 individuals with complete data and without exclusionary conditions were available for this analysis. A total of 86 participants were classified as CFS by one or both methods: 44 cases identified by both methods, 15 only by Method 1, and 27 only by Method 2 (kappa 0.63; 95% confidence interval [CI]: 0.53, 0.73; concordance 91.59%). The CFS group identified by both methods was more fatigued, had worse functioning, and had more symptoms than those identified by only one method. Moderate to severe depression was noted in only one individual who was classified as CFS by both methods. When comparing the CFS groups identified by only one method, those identified only by Method 2 were either similar to or more severely affected in fatigue, function, and symptoms than those identified only by Method 1. The two methods demonstrated substantial concordance. While Method 2
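    The reported agreement statistics can be reproduced from the counts given in the abstract (44 concordant positives, 15 Method 1 only, 27 Method 2 only, out of 499 participants):

```python
# Recomputing the agreement statistics from the counts in the abstract:
# 44 cases flagged by both methods, 15 by Method 1 only, 27 by Method 2 only,
# out of 499 participants with complete data.

def kappa_2x2(both, only1, only2, total):
    """Observed agreement and Cohen's kappa for two binary classifiers."""
    neither = total - both - only1 - only2
    po = (both + neither) / total                 # observed agreement
    p1 = (both + only1) / total                   # Method 1 positive rate
    p2 = (both + only2) / total                   # Method 2 positive rate
    pe = p1 * p2 + (1 - p1) * (1 - p2)            # agreement expected by chance
    return po, (po - pe) / (1 - pe)

concordance, kappa = kappa_2x2(both=44, only1=15, only2=27, total=499)
print(round(concordance * 100, 2), round(kappa, 2))  # 91.58 and 0.63
```

    This matches the paper's kappa of 0.63; the concordance comes out as 91.58% here versus the reported 91.59%, a rounding-level difference.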

  20. Electrochemical noise measurements techniques and the reversing dc potential drop method applied to stress corrosion essays

    International Nuclear Information System (INIS)

    Aly, Omar Fernandes; Andrade, Arnaldo Paes de; MattarNeto, Miguel; Aoki, Idalina Vieira

    2002-01-01

    This paper aims to collect information and to discuss the electrochemical noise measurements and the reversing dc potential drop method, applied to stress corrosion essays that can be used to evaluate the nucleation and the increase of stress corrosion cracking in Alloy 600 and/or Alloy 182 specimens from Angra I Nuclear Power Plant. Therefore we will pretend to establish a standard procedure to essays to be realized on the new autoclave equipment on the Laboratorio de Eletroquimica e Corrosao do Departamento de Engenharia Quimica da Escola Politecnica da Universidade de Sao Paulo - Electrochemical and Corrosion Laboratory of the Chemical Engineering Department of Polytechnical School of Sao Paulo University, Brazil. (author)

  1. Making Design Decisions Visible: Applying the Case-Based Method in Designing Online Instruction

    Directory of Open Access Journals (Sweden)

    Heng Luo

    2011-01-01

    Full Text Available The instructional intervention in this design case is a self-directed online tutorial that applies the case-based method to teach educators how to design and conduct entrepreneurship programs for elementary school students. In this article, the authors describe the major decisions made in each phase of the design and development process, explicate the rationales behind them, and demonstrate their effect on the production of the tutorial. Based on such analysis, the guidelines for designing case-based online instruction are summarized for the design case.

  2. A semiempirical method of applying the dechanneling correction in the extraction of disorder distribution

    International Nuclear Information System (INIS)

    Walker, R.S.; Thompson, D.A.; Poehlman, S.W.

    1977-01-01

    The application of single, plural or multiple scattering theories to the determination of defect dechanneling in channeling-backscattering disorder measurements is re-examined. A semiempirical modification to the method is described that makes the extracted disorder and disorder distribution relatively insensitive to the scattering model employed. The various models and modifications have been applied to the 1 to 2 MeV He⁺ channeling-backscatter data obtained from Si, GaP and GaAs bombarded with 20 to 80 keV H⁺ to Ne⁺ at 50 K and 300 K. (author)

  3. Zoltàn Dörnyei, Research Methods in Applied Linguistics

    OpenAIRE

    Marie-Françoise Narcy-Combes

    2012-01-01

    Research Methods in Applied Linguistics is a practical and accessible work aimed primarily at beginning researchers and doctoral students in applied linguistics and language didactics, for whom it is a very useful companion. Its clear style and straightforward organization make it an easy, pleasant read and render the various concepts readily understandable to all. It presents an overview of research methodology in applied linguistics, ...

  4. Cork-resin ablative insulation for complex surfaces and method for applying the same

    Science.gov (United States)

    Walker, H. M.; Sharpe, M. H.; Simpson, W. G. (Inventor)

    1980-01-01

    A method of applying cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.

  5. Perturbative methods applied to sensitivity coefficient calculations in thermal-hydraulic systems

    International Nuclear Information System (INIS)

    Andrade Lima, F.R. de

    1993-01-01

    The differential formalism and the Generalized Perturbation Theory (GPT) are applied to the sensitivity analysis of thermal-hydraulic problems related to pressurized water reactor cores. The equations describing the thermal-hydraulic behavior of these reactor cores, as used in the COBRA-IV-I code, are conveniently rewritten. The importance function related to the response of interest and the sensitivity coefficients of this response with respect to various selected parameters are obtained by using differential and generalized perturbation theory. The comparison between the results obtained with these perturbative methods and those obtained directly with the model implemented in the COBRA-IV-I code shows very good agreement. (author)

  6. Assessing the impact of natural policy experiments on socioeconomic inequalities in health: how to apply commonly used quantitative analytical methods?

    Directory of Open Access Journals (Sweden)

    Yannan Hu

    2017-04-01

    Full Text Available Background The scientific evidence-base for policies to tackle health inequalities is limited. Natural policy experiments (NPE) have drawn increasing attention as a means of evaluating the effects of policies on health. Several analytical methods can be used to evaluate the outcomes of NPEs in terms of average population health, but it is unclear whether they can also be used to assess the outcomes of NPEs in terms of health inequalities. The aim of this study therefore was to assess whether, and to demonstrate how, a number of commonly used analytical methods for the evaluation of NPEs can be applied to quantify the effect of policies on health inequalities. Methods We identified seven quantitative analytical methods for the evaluation of NPEs: regression adjustment, propensity score matching, difference-in-differences analysis, fixed effects analysis, instrumental variable analysis, regression discontinuity and interrupted time-series. We assessed whether these methods can be used to quantify the effect of policies on the magnitude of health inequalities either by conducting a stratified analysis or by including an interaction term, and illustrated both approaches in a fictitious numerical example. Results All seven methods can be used to quantify the equity impact of policies on absolute and relative inequalities in health by conducting an analysis stratified by socioeconomic position, and all but one (propensity score matching) can be used to quantify equity impacts by inclusion of an interaction term between socioeconomic position and policy exposure. Conclusion Methods commonly used in economics and econometrics for the evaluation of NPEs can also be applied to assess the equity impact of policies, and our illustrations provide guidance on how to do this appropriately. The low external validity of results from instrumental variable analysis and regression discontinuity, however, makes these methods less desirable for assessing policy effects.
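    As a minimal sketch of the stratified approach described above, the difference-in-differences estimator can be computed separately in two socioeconomic strata of simulated data; all numbers, including the "true" policy effects of 2.0 (low SEP) and 0.5 (high SEP), are invented:

```python
import numpy as np

# Stratified difference-in-differences (DiD) on simulated data: estimate the
# policy effect separately in low- and high-SEP strata and compare. The
# "true" effects of 2.0 and 0.5 are invented for illustration.

rng = np.random.default_rng(0)

def simulate(effect, n=2000):
    """One stratum: outcome with group and time trends plus a policy effect."""
    treat = rng.integers(0, 2, n)          # exposed to the policy?
    post = rng.integers(0, 2, n)           # observed after the policy change?
    y = 1.0 + 0.5 * treat + 0.3 * post + effect * treat * post + rng.normal(0, 1, n)
    return treat, post, y

def did(treat, post, y):
    """(treated post - treated pre) - (control post - control pre)."""
    m = lambda t, p: y[(treat == t) & (post == p)].mean()
    return (m(1, 1) - m(1, 0)) - (m(0, 1) - m(0, 0))

eff = {}
for sep, true_effect in [("low SEP", 2.0), ("high SEP", 0.5)]:
    eff[sep] = did(*simulate(true_effect))
print({k: round(v, 2) for k, v in eff.items()})
# equity impact: difference in the policy effect between the two strata
print(round(eff["low SEP"] - eff["high SEP"], 2))
```

    The equivalent interaction-term approach would pool both strata and fit a single regression with a treat x post x SEP term; the stratified version above is the more transparent of the two.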

  7. Preface of "The Second Symposium on Border Zones Between Experimental and Numerical Application Including Solution Approaches By Extensions of Standard Numerical Methods"

    Science.gov (United States)

    Ortleb, Sigrun; Seidel, Christian

    2017-07-01

    In this second symposium at the limits of experimental and numerical methods, recent research is presented on practically relevant problems. Presentations discuss experimental investigation as well as numerical methods with a strong focus on application. In addition, problems are identified which require a hybrid experimental-numerical approach. Topics include fast explicit diffusion applied to a geothermal energy storage tank, noise in experimental measurements of electrical quantities, thermal fluid structure interaction, tensegrity structures, experimental and numerical methods for Chladni figures, optimized construction of hydroelectric power stations, experimental and numerical limits in the investigation of rain-wind induced vibrations as well as the application of exponential integrators in a domain-based IMEX setting.

  8. A comparative analysis of three metaheuristic methods applied to fuzzy cognitive maps learning

    Directory of Open Access Journals (Sweden)

    Bruno A. Angélico

    2013-12-01

    Full Text Available This work analyses the performance of three different population-based metaheuristic approaches applied to fuzzy cognitive map (FCM) learning in the qualitative control of processes. Fuzzy cognitive maps allow prior specialist knowledge to be included in the control rule. In particular, Particle Swarm Optimization (PSO), a Genetic Algorithm (GA) and Ant Colony Optimization (ACO) are considered for obtaining appropriate weight matrices when learning the FCM. A statistical convergence analysis over 10,000 runs of each algorithm is presented. In order to validate the proposed approach, two industrial control process problems previously described in the literature are considered.
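    A minimal sketch of what such metaheuristic FCM learning looks like, using particle swarm optimization on an invented three-concept map (the paper's maps, control problems, and parameter settings are not given in the abstract):

```python
import math, random

# Toy metaheuristic FCM learning: particle swarm optimization searches the
# weight matrix of a tiny three-concept fuzzy cognitive map so that its steady
# state matches a desired one. Map size, targets, and PSO settings are invented.

random.seed(7)
N = 3
TARGET = [0.8, 0.6, 0.5]                 # desired steady concept activations

def fcm_steady(w, steps=30):
    """Iterate A_i <- sigmoid(sum_j w[j][i] * A_j) from a neutral start."""
    a = [0.5] * N
    for _ in range(steps):
        a = [1.0 / (1.0 + math.exp(-sum(w[j][i] * a[j] for j in range(N))))
             for i in range(N)]
    return a

def cost(flat):
    """Squared error between the map's steady state and the target."""
    w = [flat[i * N:(i + 1) * N] for i in range(N)]
    return sum((x - t) ** 2 for x, t in zip(fcm_steady(w), TARGET))

dim, swarm = N * N, 20
pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(swarm)]
vel = [[0.0] * dim for _ in range(swarm)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=cost)
for _ in range(150):
    for k in range(swarm):
        for d in range(dim):
            # standard PSO velocity update: inertia + cognitive + social pull
            vel[k][d] = (0.7 * vel[k][d]
                         + 1.5 * random.random() * (pbest[k][d] - pos[k][d])
                         + 1.5 * random.random() * (gbest[d] - pos[k][d]))
            pos[k][d] = min(1.0, max(-1.0, pos[k][d] + vel[k][d]))
        if cost(pos[k]) < cost(pbest[k]):
            pbest[k] = pos[k][:]
    gbest = min(pbest, key=cost)
print(round(cost(gbest), 4))   # residual error of the learned weight matrix
```

    The GA and ACO variants differ only in how candidate weight matrices are generated; the FCM inference and the fitness function stay the same, which is what makes the three approaches directly comparable.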

  9. Performance analysis of the FDTD method applied to holographic volume gratings: Multi-core CPU versus GPU computing

    Science.gov (United States)

    Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.

    2013-03-01

The finite-difference time-domain method (FDTD) allows electromagnetic field distribution analysis as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and processing time. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement of using the new NVIDIA Fermi GPU architecture versus highly tuned multi-core CPU as a function of the simulation size. In particular, the optimized CPU implementation takes advantage of the arithmetic and data transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code and also of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wider range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
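The paper's tuned SSE/OpenMP and CUDA kernels are not reproduced here, but the update scheme they accelerate can be shown in miniature. The following is a normalized 1-D Yee loop (Courant number 1, fields in natural units, a soft Gaussian source); every name and parameter is illustrative, and real HVG simulations are 2-D/3-D with absorbing boundaries.

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=600, src_pos=50):
    # Illustrative 1-D free-space Yee scheme with the "magic" time step (Courant = 1).
    ez = np.zeros(n_cells)   # electric field samples
    hy = np.zeros(n_cells)   # magnetic field samples, staggered half a cell
    for t in range(n_steps):
        hy[:-1] += ez[1:] - ez[:-1]                   # update H from the curl of E
        ez[1:] += hy[1:] - hy[:-1]                    # update E from the curl of H
        ez[src_pos] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source injection
    return ez
```

The leapfrogged pair of vectorized updates is exactly the kind of regular, memory-bound stencil that benefits from SSE vectorization on CPU and from GPU threads, which is why the comparison in the abstract hinges on simulation size.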

  10. Geometric methods for estimating representative sidewalk widths applied to Vienna's streetscape surfaces database

    Science.gov (United States)

    Brezina, Tadej; Graser, Anita; Leth, Ulrich

    2017-04-01

    Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between methods as well as to previous research, existing databases and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.
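The maximum-inscribing-circle idea behind the first two methods can be approximated with a brute-force grid search. This is a pure-Python sketch for intuition only; the actual Viennese analysis works on GIS geometry databases, and all names here are made up.

```python
import math

def _point_in_polygon(x, y, poly):
    # Ray-casting point-in-polygon test for a list of (x, y) vertices.
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            xin = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xin:
                inside = not inside
    return inside

def _dist_to_boundary(x, y, poly):
    # Minimum distance from (x, y) to the polygon's edges.
    best = math.inf
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        dx, dy = x2 - x1, y2 - y1
        t = max(0.0, min(1.0, ((x - x1) * dx + (y - y1) * dy) / (dx * dx + dy * dy)))
        best = min(best, math.hypot(x - (x1 + t * dx), y - (y1 + t * dy)))
    return best

def representative_width(poly, resolution=50):
    # Approximate the maximum inscribed circle by sampling interior grid points;
    # the representative width is twice the largest radius found.
    xs, ys = zip(*poly)
    best_r = 0.0
    for i in range(resolution):
        for j in range(resolution):
            x = min(xs) + (max(xs) - min(xs)) * (i + 0.5) / resolution
            y = min(ys) + (max(ys) - min(ys)) * (j + 0.5) / resolution
            if _point_in_polygon(x, y, poly):
                best_r = max(best_r, _dist_to_boundary(x, y, poly))
    return 2.0 * best_r
```

For a 10 m by 2 m sidewalk polygon this returns a width close to 2 m, matching the intuition that the inscribed circle is limited by the narrow dimension.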

  11. MAIA - Method for Architecture of Information Applied: methodological construct of information processing in complex contexts

    Directory of Open Access Journals (Sweden)

    Ismael de Moura Costa

    2017-04-01

Introduction: This paper presents the evolution of MAIA, the Method for Architecture of Information Applied, its structure, the results obtained and three practical applications. Objective: To propose a methodological construct for the treatment of complex information, distinguishing information spaces and revealing the configurations inherent in those spaces. Methodology: The argument is developed from theoretical research of an analytical character, using distinction as a way to express concepts. Phenomenology is adopted as the philosophical position, considering the correlation between subject and object, and the notion of interpretation is taken as an integrating element for the definition of concepts. With these postulates, the steps for transforming information spaces are formulated. Results: The article shows how the method is structured to process information in its contexts, starting from a succession of evolutionary cycles, divided into moments, which in turn evolve into transformation acts. Conclusions: Besides its possible applications as a scientific method, the method can serve as a configuration tool for information spaces and as a generator of ontologies. Last but not least, a brief summary is presented of the analyses made by researchers who have already evaluated the method under these three aspects.

  12. Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme

    Directory of Open Access Journals (Sweden)

    Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin

    2012-08-01

The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct motor torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation of the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory tracking error of the AFC scheme. The knowledge is developed from the trajectory tracking error characteristic based on previous experimental results of the crude approximation method; it produces a unique, new and desirable error pattern when a trajectory command is forced. A simulation study was performed on the AFC scheme with KBTEPM applied to a two-planar manipulator, for which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory tracking error significantly, even in the presence of the introduced disturbances. Keywords: Active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.

  13. Complex Method Mixed with PSO Applying to Optimization Design of Bridge Crane Girder

    Directory of Open Access Journals (Sweden)

    He Yan

    2017-01-01

In engineering design, the basic complex method does not have sufficient global search ability for nonlinear optimization problems, so this paper presents a complex method mixed with particle swarm optimization (PSO): the optimal particle evaluated from the fitness function of the particle swarm displaces a complex vertex, realizing the optimality principle of the largest distance from the complex centre. The method is applied to the constrained optimization design of the box girder of a bridge crane. First, a mathematical model of the girder optimization is set up, in which the cross-sectional area of the box girder is taken as the objective function, its four size parameters as the design variables, and requirements on the girder's mechanical performance, manufacturing process, boundary sizes and so on as constraint conditions. The complex method mixed with PSO is then used to solve the optimization design problem of the crane box girder as a constrained optimization problem, and its optimal results achieve the goals of lightweight design and reduced crane manufacturing cost. Practical engineering calculation and comparative analysis with the basic complex method show the approach to be reliable, practical and efficient.

  14. Efficient alpha particle detection by CR-39 applying 50 Hz-HV electrochemical etching method

    International Nuclear Information System (INIS)

    Sohrabi, M.; Soltani, Z.

    2016-01-01

Alpha particles can be detected by CR-39 by applying either chemical etching (CE), electrochemical etching (ECE), or combined pre-etching and ECE, usually through a multi-step HF-HV ECE process at temperatures much higher than room temperature. With pre-etching, the characteristic responses of fast-neutron-induced recoil tracks in CR-39 under HF-HV ECE versus KOH normality (N) have shown two high-sensitivity peaks around 5–6 and 15–16 N and a large-diameter peak with a minimum sensitivity around 10–11 N at 25°C. On the other hand, the 50 Hz-HV ECE method recently advanced in our laboratory detects alpha particles with high efficiency and a broad registration energy range with small ECE tracks in polycarbonate (PC) detectors. By taking advantage of the CR-39 sensitivity to alpha particles, the efficacy of the 50 Hz-HV ECE method and the exotic responses of CR-39 under different KOH normalities, the detection characteristics of 0.8 MeV alpha particle tracks were studied in 500 μm CR-39 for different fluences, ECE durations and KOH normalities. Alpha registration efficiency increased with ECE duration to 90 ± 2% after 6–8 h, beyond which plateaus are reached. Alpha track density versus fluence is linear up to 10⁶ tracks cm⁻². The efficiency and mean track diameter decrease as the alpha fluence increases up to 10⁶ alphas cm⁻². Background track density and minimum detection limit are linear functions of ECE duration and increase as normality increases. The CR-39 processed for the first time in this study by the 50 Hz-HV ECE method proved to provide a simple, efficient and practical alpha detection method at room temperature. - Highlights: • Alpha particles of 0.8 MeV were detected in CR-39 by the 50 Hz-HV ECE method. • Efficiency/track diameter was studied vs fluence and time for three KOH normalities. • Background track density and minimum detection limit vs duration were studied. • A new simple, efficient and low-cost alpha detection method

  15. Exact traveling wave solutions of fractional order Boussinesq-like equations by applying Exp-function method

    Directory of Open Access Journals (Sweden)

    Rahmatullah

    2018-03-01

We have computed new exact traveling wave solutions, including complex solutions, of fractional-order Boussinesq-like equations occurring in the physical sciences and engineering by applying the Exp-function method. The method is blended with fractional complex transformation and the modified Riemann-Liouville fractional-order operator. Our obtained solutions are verified by substituting them back into their corresponding equations. To the best of our knowledge, no other technique has been reported that copes with these fractional-order nonlinear problems and yields such a variety of exact solutions. Graphically, the fractional-order solution curves are shown to be strongly related to each other and, most importantly, tend toward their integer-order solution curves. Our solutions comprise high frequencies and very small wave amplitudes. Keywords: Exp-function method, New exact traveling wave solutions, Modified Riemann-Liouville derivative, Fractional complex transformation, Fractional order Boussinesq-like equations, Symbolic computation

  16. A METHOD FOR PREPARING A SUBSTRATE BY APPLYING A SAMPLE TO BE ANALYSED

    DEFF Research Database (Denmark)

    2017-01-01

The invention relates to a method for preparing a substrate (105a) comprising a sample reception area (110) and a sensing area (111). The method comprises the steps of: 1) applying a sample on the sample reception area; 2) rotating the substrate around a predetermined axis; 3) during rotation, at least part of the liquid travels from the sample reception area to the sensing area due to capillary forces acting between the liquid and the substrate; and 4) removing the wave of particles and liquid formed at one end of the substrate. The sensing area is closer to the predetermined axis than the sample reception area. The sample comprises a liquid part and particles suspended therein.

  17. Simplified inelastic analysis methods applied to fast breeder reactor core design

    International Nuclear Information System (INIS)

    Abo-El-Ata, M.M.

    1978-01-01

The paper starts with a review of some currently available simplified inelastic analysis methods used in elevated-temperature design for evaluating plastic and thermal creep strains. The primary purpose of the paper is to investigate how these simplified methods may be applied to fast breeder reactor core design, where neutron irradiation effects are significant. One of the problems discussed is irradiation-induced creep and its effect on shakedown, ratcheting, and plastic cycling. Another is the development of swelling-induced stress, which is an additional loading mechanism and must be taken into account. In this respect an expression for swelling-induced stress in the presence of irradiation creep is derived and a model for simplifying the stress analysis under these conditions is proposed. As an example, the effects of irradiation creep and swelling-induced stress on the analysis of a thin-walled tube under constant internal pressure and intermittent heat fluxes, simulating a fuel pin, are presented.

  18. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Science.gov (United States)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.

  19. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Directory of Open Access Journals (Sweden)

    Samir Saoudi

    2008-07-01

The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
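The plug-in chain above can be sketched end to end. The authors' analytical approximation of J(f) is not reproduced here; as a stand-in, the normal-reference approximation J(f) ≈ 3/(8√π σ⁵) is used, which collapses the plug-in formula to the familiar Silverman rule. Function names are illustrative.

```python
import math

def plug_in_bandwidth(data):
    # Plug-in bandwidth for a Gaussian kernel: h* = (R(K) / (n * J(f)))^(1/5),
    # where J(f) = ∫ f''(x)^2 dx is approximated under a normal reference,
    # J(f) ≈ 3 / (8 * sqrt(pi) * sigma^5).  This reduces to ≈ 1.06 σ n^(-1/5).
    n = len(data)
    mean = sum(data) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    j_f = 3.0 / (8.0 * math.sqrt(math.pi) * sigma ** 5)
    r_k = 1.0 / (2.0 * math.sqrt(math.pi))   # roughness R(K) of the Gaussian kernel
    return (r_k / (n * j_f)) ** 0.2

def gaussian_kde(data, h):
    # Kernel density estimate with bandwidth h, returned as a callable.
    def f_hat(x):
        n = len(data)
        return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data) / (n * h * math.sqrt(2 * math.pi))
    return f_hat
```

The iterated plug-in method of the abstract would re-estimate J(f) from the current KDE and loop; the speed-up the authors claim comes from replacing that inner estimation with an analytical approximation so the pdf is built only once.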

  20. Infrared thermography inspection methods applied to the target elements of W7-X divertor

    Energy Technology Data Exchange (ETDEWEB)

    Missirlian, M. [Association Euratom-CEA, CEA/DSM/DRFC, CEA/Cadarache, F-13108 Saint Paul Lez Durance (France)], E-mail: marc.missirlian@cea.fr; Traxler, H. [PLANSEE SE, Technology Center, A-6600 Reutte (Austria); Boscary, J. [Max-Planck-Institut fuer Plasmaphysik, Euratom Association, Boltzmannstr. 2, D-85748 Garching (Germany); Durocher, A.; Escourbiac, F.; Schlosser, J. [Association Euratom-CEA, CEA/DSM/DRFC, CEA/Cadarache, F-13108 Saint Paul Lez Durance (France); Schedler, B.; Schuler, P. [PLANSEE SE, Technology Center, A-6600 Reutte (Austria)

    2007-10-15

Non-destructive examination (NDE) is one of the key issues in developing highly loaded plasma-facing components (PFCs) for next-generation fusion devices such as W7-X and ITER. The most critical step is certainly the fabrication and examination of the bond between the armour and the heat sink. Two inspection systems based on infrared thermography methods, namely transient thermography (SATIR-CEA) and pulsed thermography (ARGUS-PLANSEE), are being developed and have been applied to the pre-series of target elements of the W7-X divertor. Results obtained from qualification experiments performed on target elements with artificial calibrated defects demonstrated the capability of the two techniques and raised the efficiency of inspection to a level appropriate for industrial application.

  1. Infrared thermography inspection methods applied to the target elements of W7-X divertor

    International Nuclear Information System (INIS)

    Missirlian, M.; Traxler, H.; Boscary, J.; Durocher, A.; Escourbiac, F.; Schlosser, J.; Schedler, B.; Schuler, P.

    2007-01-01

Non-destructive examination (NDE) is one of the key issues in developing highly loaded plasma-facing components (PFCs) for next-generation fusion devices such as W7-X and ITER. The most critical step is certainly the fabrication and examination of the bond between the armour and the heat sink. Two inspection systems based on infrared thermography methods, namely transient thermography (SATIR-CEA) and pulsed thermography (ARGUS-PLANSEE), are being developed and have been applied to the pre-series of target elements of the W7-X divertor. Results obtained from qualification experiments performed on target elements with artificial calibrated defects demonstrated the capability of the two techniques and raised the efficiency of inspection to a level appropriate for industrial application.

  2. The fundamental parameter method applied to X-ray fluorescence analysis with synchrotron radiation

    Science.gov (United States)

    Pantenburg, F. J.; Beier, T.; Hennrich, F.; Mommsen, H.

    1992-05-01

Quantitative X-ray fluorescence analysis applying the fundamental parameter method is usually restricted to monochromatic excitation sources. It is shown here that such analyses can be performed as well with a white synchrotron radiation spectrum. To determine absolute elemental concentration values it is necessary to know the spectral distribution of this spectrum. A newly designed and tested experimental setup, which uses the synchrotron radiation emitted from electrons in a bending magnet of ELSA (the electron stretcher accelerator of the University of Bonn), is presented. The determination of the exciting spectrum, described by the given electron beam parameters, is limited by uncertainties in the vertical electron beam size and divergence. We describe a method which allows us to determine the relative and absolute spectral distributions needed for accurate analysis. First test measurements of different alloys and standards of known composition demonstrate that it is possible to determine exact concentration values in bulk and trace element analysis.

  3. Super-convergence of Discontinuous Galerkin Method Applied to the Navier-Stokes Equations

    Science.gov (United States)

    Atkins, Harold L.

    2009-01-01

The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin method are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin method are applied to the simulation of unsteady viscous flow over a two-dimensional cylinder. Convergence of the period of the naturally occurring oscillation is examined and shown to be of order 2p+1, where p is the polynomial degree of the discontinuous Galerkin basis. Comparisons are made between the different discretizations and with theoretical analysis.

  4. Data Analytics of Mobile Serious Games: Applying Bayesian Data Analysis Methods

    Directory of Open Access Journals (Sweden)

    Heide Lukosch

    2018-03-01

Traditional teaching methods in the field of resuscitation training show some limitations, while teaching the right actions in critical situations could increase the number of people saved after a cardiac arrest. For our study, we developed a mobile game to support the transfer of theoretical knowledge on resuscitation. The game was tested at three schools of further education, and data were collected from 171 players. To analyze this large data set, drawn from different sources and of varying quality, different types of data modeling and analysis had to be applied. This approach showed its usefulness in analyzing the large set of data from different sources. It revealed some interesting findings, such as that female players outperformed male ones, and that the game, which fosters informal, self-directed learning, is as effective as the traditional formal learning method.

  5. An input feature selection method applied to fuzzy neural networks for signal estimation

    International Nuclear Information System (INIS)

    Na, Man Gyun; Sim, Young Rok

    2001-01-01

It is well known that the performance of a fuzzy neural network strongly depends on the input features selected for its training. In applications to sensor signal estimation, a large number of input variables are related to an output, and the required training time of a fuzzy neural network increases exponentially as the number of input variables increases. It is therefore essential to reduce the number of inputs and to select the optimum number of mutually independent inputs that are able to clearly define the input-output mapping. In this work, principal component analysis (PCA), genetic algorithms (GA) and probability theory are combined to select important input features. The proposed feature selection method is applied to signal estimation for the steam generator water level, hot-leg flowrate, pressurizer water level and pressurizer pressure sensors in pressurized water reactors, and is compared with other input feature selection methods.
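The PCA half of such a feature-screening pipeline can be sketched briefly. This is a simplified stand-in for the paper's combined PCA/GA/probability-theory procedure, ranking inputs by their eigenvalue-weighted squared loading on the leading principal component; all names are illustrative.

```python
import numpy as np

def pca_rank_inputs(X, n_keep):
    # Standardize inputs, diagonalize the covariance, and score each input by
    # its squared loading on the leading principal component weighted by that
    # component's eigenvalue (its share of explained variance).
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
    lead = np.argmax(eigvals)                 # eigh returns eigenvalues in ascending order
    scores = eigvals[lead] * eigvecs[:, lead] ** 2
    return np.argsort(scores)[::-1][:n_keep]  # indices of the highest-scoring inputs
```

In the paper's setting a GA would then search subsets of the highly ranked, mutually independent inputs; the PCA step above only prunes the redundant, low-information candidates.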

  6. Performance comparison of two efficient genomic selection methods (gsbay & MixP) applied in aquacultural organisms

    Science.gov (United States)

    Su, Hailin; Li, Hengde; Wang, Shi; Wang, Yangfan; Bao, Zhenmin

    2017-02-01

Genomic selection is more and more popular in animal and plant breeding industries all around the world, as it can be applied early in life without impacting selection candidates. The objective of this study was to bring the advantages of genomic selection to scallop breeding. Two genomic selection tools, MixP and gsbay, were applied to the genomic evaluation of simulated data and Zhikong scallop (Chlamys farreri) field data. The results were compared with the genomic best linear unbiased prediction (GBLUP) method, which has been applied widely. Our results showed that both MixP and gsbay could accurately estimate single-nucleotide polymorphism (SNP) marker effects, and thereby could be applied for the analysis of genomic estimated breeding values (GEBV). In simulated data from different scenarios, the accuracy of the GEBV acquired ranged from 0.20 to 0.78 with MixP, from 0.21 to 0.67 with gsbay, and from 0.21 to 0.61 with GBLUP. Estimations made by MixP and gsbay were expected to be more reliable than those estimated by GBLUP. Predictions made by gsbay were more robust, while with MixP the computation is much faster, especially in dealing with large-scale data. These results suggested that both algorithms implemented by MixP and gsbay are feasible for carrying out genomic selection in scallop breeding, and more genotype data will be necessary to produce genomic estimated breeding values with a higher accuracy for the industry.
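To make the GEBV concept concrete, here is a minimal ridge-regression BLUP sketch, a close relative of the GBLUP baseline rather than the MixP or gsbay algorithms themselves; the shrinkage parameter, marker coding and function names are assumptions for the example.

```python
import numpy as np

def rrblup_gebv(genotypes, phenotypes, lam=1.0):
    # Ridge-regression BLUP: shrink all SNP effects equally toward zero.
    # lam plays the role of sigma_e^2 / sigma_snp^2; under an appropriate
    # genomic relationship matrix this is equivalent to GBLUP.
    Z = genotypes - genotypes.mean(axis=0)          # center 0/1/2 marker codes
    y = phenotypes - phenotypes.mean()
    n_snp = Z.shape[1]
    effects = np.linalg.solve(Z.T @ Z + lam * np.eye(n_snp), Z.T @ y)
    return Z @ effects + phenotypes.mean()          # genomic estimated breeding values
```

Bayesian tools like the ones benchmarked in the abstract differ mainly in the prior placed on the SNP effects (and hence the amount and pattern of shrinkage), not in this overall marker-to-GEBV structure.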

  7. Developing digital technologies for university mathematics by applying participatory design methods

    DEFF Research Database (Denmark)

    Triantafyllou, Eva; Timcenko, Olga

    2013-01-01

    This paper presents our research efforts to develop digital technologies for undergraduate university mathematics. We employ participatory design methods in order to involve teachers and students in the design of such technologies. The results of the first round of our design are included...

  8. A modified Poisson-Boltmann model including charge regulation for the adsorption of ionizable polyelectrolytes to charged interfaces, applied to lysozyme adsorption on silica

    NARCIS (Netherlands)

    Biesheuvel, P.M.; Veen, van der M.; Norde, W.

    2005-01-01

    The equilibrium adsorption of polyelectrolytes with multiple types of ionizable groups is described using a modified Poisson-Boltzmann equation including charge regulation of both the polymer and the interface. A one-dimensional mean-field model is used in which the electrostatic potential is

  9. Applying the model of Goal-Directed Behavior, including descriptive norms, to physical activity intentions: A contribution to improving the Theory of Planned Behavior

    Science.gov (United States)

    The theory of planned behavior (TPB) has received its fair share of criticism lately, including calls for it to retire. We contributed to improving the theory by testing extensions such as the model of goal-directed behavior (MGDB, which adds desire and anticipated positive and negative emotions) ap...

  10. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Directory of Open Access Journals (Sweden)

    Nadia Said

Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  11. Labile soil phosphorus as influenced by methods of applying radioactive phosphorus

    International Nuclear Information System (INIS)

    Selvaratnam, V.V.; Andersen, A.J.; Thomsen, J.D.; Gissel-Nielsen, G.

    1980-03-01

The influence of different methods of applying radioactive phosphorus on the E- and L-values was studied in four soil types, using barley, buckwheat and rye grass for the L-value determination. The four soils differed greatly in their E- and L-values. The experiment was carried out both with and without carrier-P. The presence of carrier-P had no influence on the E-values, while carrier-P in some cases gave a lower L-value. Both E- and L-values depended on the method of application. When the ³²P was applied on a small soil or sand sample and dried before mixing with the total amount of soil, the E-values were higher than with direct application, most likely because of a stronger fixation to the soil/sand particles. This was not the case for the L-values, which are based on a much longer equilibration time. On the contrary, direct application of the ³²P solution to the whole amount of soil gave higher L-values because of a non-homogeneous distribution of the ³²P in the soil. (author)

  12. Assessing the impact of natural policy experiments on socioeconomic inequalities in health: how to apply commonly used quantitative analytical methods?

    Science.gov (United States)

    Hu, Yannan; van Lenthe, Frank J; Hoffmann, Rasmus; van Hedel, Karen; Mackenbach, Johan P

    2017-04-20

    The scientific evidence-base for policies to tackle health inequalities is limited. Natural policy experiments (NPE) have drawn increasing attention as a means to evaluating the effects of policies on health. Several analytical methods can be used to evaluate the outcomes of NPEs in terms of average population health, but it is unclear whether they can also be used to assess the outcomes of NPEs in terms of health inequalities. The aim of this study therefore was to assess whether, and to demonstrate how, a number of commonly used analytical methods for the evaluation of NPEs can be applied to quantify the effect of policies on health inequalities. We identified seven quantitative analytical methods for the evaluation of NPEs: regression adjustment, propensity score matching, difference-in-differences analysis, fixed effects analysis, instrumental variable analysis, regression discontinuity and interrupted time-series. We assessed whether these methods can be used to quantify the effect of policies on the magnitude of health inequalities either by conducting a stratified analysis or by including an interaction term, and illustrated both approaches in a fictitious numerical example. All seven methods can be used to quantify the equity impact of policies on absolute and relative inequalities in health by conducting an analysis stratified by socioeconomic position, and all but one (propensity score matching) can be used to quantify equity impacts by inclusion of an interaction term between socioeconomic position and policy exposure. Methods commonly used in economics and econometrics for the evaluation of NPEs can also be applied to assess the equity impact of policies, and our illustrations provide guidance on how to do this appropriately. The low external validity of results from instrumental variable analysis and regression discontinuity makes these methods less desirable for assessing policy effects on population-level health inequalities. Increased use of the
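The interaction-term approach described above can be illustrated with ordinary least squares on synthetic data. This is a hedged reconstruction of the idea, not the authors' code: a difference-in-differences design where a three-way treated×post×low-SES term captures the policy's effect on the health gap; all names and coefficients are made up for the example.

```python
import numpy as np

def ols(X, y):
    # Ordinary least squares via numpy's least-squares solver (fine for small designs).
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def did_equity_effect(health, treated, post, low_ses):
    # Difference-in-differences with a three-way interaction: the coefficient on
    # treated*post*low_ses estimates how much the policy effect differs for the
    # low socioeconomic-position group, i.e. its impact on the health gap.
    t, p, s = treated, post, low_ses
    X = np.column_stack([
        np.ones_like(t), t, p, s,
        t * p,            # standard DiD policy effect (reference SES group)
        t * s, p * s,     # group-specific baseline differences
        t * p * s,        # equity term: extra policy effect in the low-SES group
    ])
    beta = ols(X, health)
    return {"policy_effect": beta[4], "equity_effect": beta[7]}
```

The stratified-analysis alternative mentioned in the abstract amounts to fitting the plain DiD model separately within each SES stratum and comparing the two policy-effect estimates.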

  13. In silico toxicology: comprehensive benchmarking of multi-label classification methods applied to chemical toxicity data

    KAUST Repository

    Raies, Arwa B.

    2017-12-05

One goal of toxicity testing, among others, is identifying harmful effects of chemicals. Given the high demand for toxicity tests, it is necessary to conduct these tests for multiple toxicity endpoints for the same compound. Current computational toxicology methods aim at developing models mainly to predict a single toxicity endpoint. When chemicals cause several toxicity effects, one model is generated to predict toxicity for each endpoint, which can be labor and computationally intensive when the number of toxicity endpoints is large. Additionally, this approach does not take into consideration possible correlation between the endpoints. Therefore, there has been a recent shift in computational toxicity studies toward generating predictive models able to predict several toxicity endpoints by utilizing correlations between these endpoints. Applying such correlations jointly with compounds' features may improve model's performance and reduce the number of required models. This can be achieved through multi-label classification methods. These methods have not undergone comprehensive benchmarking in the domain of predictive toxicology. Therefore, we performed extensive benchmarking and analysis of over 19,000 multi-label classification models generated using combinations of the state-of-the-art methods. The methods have been evaluated from different perspectives using various metrics to assess their effectiveness. We were able to illustrate variability in the performance of the methods under several conditions. This review will help researchers to select the most suitable method for the problem at hand and provide a baseline for evaluating new approaches. Based on this analysis, we provided recommendations for potential future directions in this area.

  14. In silico toxicology: comprehensive benchmarking of multi-label classification methods applied to chemical toxicity data

    KAUST Repository

    Raies, Arwa B.; Bajic, Vladimir B.

    2017-01-01

    One goal of toxicity testing, among others, is identifying harmful effects of chemicals. Given the high demand for toxicity tests, it is necessary to conduct these tests for multiple toxicity endpoints for the same compound. Current computational toxicology methods aim at developing models mainly to predict a single toxicity endpoint. When chemicals cause several toxicity effects, one model is generated to predict toxicity for each endpoint, which can be labor and computationally intensive when the number of toxicity endpoints is large. Additionally, this approach does not take into consideration possible correlation between the endpoints. Therefore, there has been a recent shift in computational toxicity studies toward generating predictive models able to predict several toxicity endpoints by utilizing correlations between these endpoints. Applying such correlations jointly with compounds' features may improve model's performance and reduce the number of required models. This can be achieved through multi-label classification methods. These methods have not undergone comprehensive benchmarking in the domain of predictive toxicology. Therefore, we performed extensive benchmarking and analysis of over 19,000 multi-label classification models generated using combinations of the state-of-the-art methods. The methods have been evaluated from different perspectives using various metrics to assess their effectiveness. We were able to illustrate variability in the performance of the methods under several conditions. This review will help researchers to select the most suitable method for the problem at hand and provide a baseline for evaluating new approaches. Based on this analysis, we provided recommendations for potential future directions in this area.

  15. TRANSAT: a method for detecting the conserved helices of functional RNA structures, including transient, pseudo-knotted and alternative structures.

    Science.gov (United States)

    Wiebe, Nicholas J P; Meyer, Irmtraud M

    2010-06-24

    The prediction of functional RNA structures has attracted increased interest, as it allows us to study the potential functional roles of many genes. RNA structure prediction methods, however, assume that there is a unique functional RNA structure and also do not predict functional features required for in vivo folding. In order to understand how functional RNA structures form in vivo, we require sophisticated experiments or reliable prediction methods. So far, there exist only a few experimentally validated transient RNA structures. On the computational side, several computer programs aim to predict the co-transcriptional folding pathway in vivo, but these make a range of simplifying assumptions and do not capture all features known to influence RNA folding in vivo. We want to investigate whether evolutionarily related RNA genes fold in a similar way in vivo. To this end, we have developed a new computational method, Transat, which detects conserved helices of high statistical significance. We introduce the method, present a comprehensive performance evaluation and show that Transat is able to predict the structural features of known reference structures, including pseudo-knotted ones, as well as those of known alternative structural configurations. Transat can also identify unstructured sub-sequences bound by other molecules and provides evidence for new helices which may define folding pathways, supporting the notion that homologous RNA sequences not only assume a similar reference RNA structure, but also fold similarly. Finally, we show that the structural features predicted by Transat differ from those expected under thermodynamic equilibrium. Unlike the existing methods for predicting folding pathways, our method works in a comparative way. This has the disadvantage of not being able to predict features as a function of time, but has the considerable advantage of highlighting conserved features and of not requiring a detailed knowledge of the cellular

  16. Study on Feasibility of Applying Function Approximation Moment Method to Achieve Reliability-Based Design Optimization

    International Nuclear Information System (INIS)

    Huh, Jae Sung; Kwak, Byung Man

    2011-01-01

    Robust optimization and reliability-based design optimization are methodologies employed to take into account the uncertainties of a system at the design stage. To apply such methodologies to industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required; furthermore, the results of the sensitivity analysis needed to determine the search direction during the optimization process should also be accurate. The aim of this study is to incorporate the function approximation moment method into the sensitivity analysis formulation, which is expressed in integral form, to verify the accuracy of the sensitivity results, and to solve a typical reliability-based design optimization problem. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The sensitivity analysis formula in integral form is an efficient formulation for evaluating sensitivity, because no additional function calculations are needed once the failure probability or statistical moments have been calculated

  17. LOGICAL CONDITIONS ANALYSIS METHOD FOR DIAGNOSTIC TEST RESULTS DECODING APPLIED TO COMPETENCE ELEMENTS PROFICIENCY

    Directory of Open Access Journals (Sweden)

    V. I. Freyman

    2015-11-01

    Subject of Research. The paper analyzes how education results are represented in competence-based educational programs and shows the importance of decoding and proficiency estimation for the elements and components of discipline-level competences. The purpose and objectives of the research are formulated. Methods. The paper applies methods of mathematical logic, Boolean algebra, and parametric analysis to the results of complex diagnostic tests that control proficiency in selected discipline competence elements. Results. A method of logical conditions analysis is created. It makes it possible to formulate a logical condition for the proficiency of each discipline competence element controlled by a complex diagnostic test. The normalized test result is divided into non-crossing zones, and for each zone a logical condition about the proficiency of the controlled elements is formulated. Summarized characteristics of the test result zones are given. An example of forming logical conditions for a diagnostic test with preset features is provided. Practical Relevance. The proposed method of logical conditions analysis is applied in the decoding algorithm of proficiency test diagnosis for discipline competence elements. It makes it possible to automate the search for elements with insufficient proficiency, and it is also usable for estimating the education results of a discipline or a component of a competence-based educational program.
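The zone-based decoding described above can be sketched in a few lines: a normalized test result falls into one of several non-crossing zones, and each zone implies a logical condition on the proficiency of the controlled competence elements. The zone boundaries, element names and conditions below are invented for illustration, not taken from the paper.

```python
# Hypothetical zone table: (lower bound inclusive, upper bound exclusive,
# logical condition on competence elements E1 and E2).
ZONES = [
    (0.0, 0.4, "NOT E1 AND NOT E2"),   # neither element mastered
    (0.4, 0.7, "E1 XOR E2"),           # exactly one element mastered
    (0.7, 1.01, "E1 AND E2"),          # both elements mastered
]

def decode(result):
    """Return the logical proficiency condition for a normalized test result
    in [0, 1]; the zones are non-crossing, so exactly one condition applies."""
    for lo, hi, condition in ZONES:
        if lo <= result < hi:
            return condition
    raise ValueError("result must lie in [0, 1]")

print(decode(0.85))  # E1 AND E2
print(decode(0.55))  # E1 XOR E2
```

Automating the search for insufficiently mastered elements then amounts to scanning each examinee's decoded condition for negated elements.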

  18. An IMU-to-Body Alignment Method Applied to Human Gait Analysis

    Directory of Open Access Journals (Sweden)

    Laura Susana Vargas-Valencia

    2016-12-01

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.

  19. The Cn method applied to problems with an anisotropic diffusion law

    International Nuclear Information System (INIS)

    Grandjean, P.M.

    A two-dimensional Cn calculation has been applied to homogeneous media subjected to the Rayleigh impact law. Results obtained with collision probability and Chandrasekhar calculations are compared to those from the Cn method. Introducing into the transport equation an expansion, truncated on a polynomial basis, for the outgoing angular flux (or possibly the entrance flux) gives two Cn systems of algebraic linear equations for the expansion coefficients. The matrix elements of these equations are the moments of the Green function in an infinite medium. The Green function is obtained through the Fourier transformation of the integro-differential equation, and its moments are derived from their Fourier transforms through a numerical integration in the complex plane. The method has been used for calculating the albedo in semi-infinite media, the extrapolation length of the Milne problem, and the albedo and transmission factor of a slab (a concise study of convergence is presented). For the collision probability method, a system of integro-differential equations bearing on the moments of the angular flux inside the medium has been derived; it is solved numerically by approximating the bulk flux with step functions. The albedo in a semi-infinite medium has also been computed with the semi-analytical Chandrasekhar method, in which the outgoing flux is expressed as a function of the entrance flux by means of an integral whose kernel is derived numerically

  20. A statistical method for testing epidemiological results, as applied to the Hanford worker population

    International Nuclear Information System (INIS)

    Brodsky, A.

    1979-01-01

    Some recent reports by Mancuso, Stewart and Kneale claim findings of radiation-produced cancer in the Hanford worker population. These claims are based on statistical computations that use small differences in accumulated exposures between groups dying of cancer and groups dying of other causes; actual mortality and longevity were not reported. This paper presents a statistical method for evaluating actual mortality and longevity longitudinally over time, as applied in a primary analysis of the mortality experience of the Hanford worker population. Although available, this method was not utilized in the Mancuso-Stewart-Kneale paper. The author's preliminary longitudinal analysis shows that the gross mortality experience of persons employed at Hanford during the 1943-70 interval did not differ significantly from that of certain controls, when both employees and controls were selected from families with two or more offspring and comparisons were matched by age, sex, race and year of entry into employment. This result is consistent with findings reported by Sanders (Health Phys. vol. 35, 521-538, 1978). The method utilizes an approximate chi-square (1 D.F.) statistic for testing population subgroup comparisons, as well as the cumulation of chi-squares (1 D.F.) for testing the overall result of a particular type of comparison. The method is available for computer testing of the Hanford mortality data, and could also be adapted to morbidity or other population studies. (author)
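The cumulation idea in the abstract above can be sketched generically: each matched subgroup comparison contributes an approximate 1-d.f. chi-square statistic, and the statistics and degrees of freedom are summed for an overall test. This is a generic observed-vs-expected sketch, not Brodsky's exact formulation, and the subgroup counts are invented.

```python
# Generic sketch of cumulating 1-d.f. chi-square statistics across matched
# subgroup comparisons (illustrative data, not Hanford data).

def chi_square_1df(observed, expected):
    """Approximate 1-d.f. chi-square for one subgroup comparison."""
    return (observed - expected) ** 2 / expected

# (observed worker deaths, expected deaths from matched controls) per subgroup
subgroups = [(12, 10.0), (8, 9.5), (15, 14.2)]

per_subgroup = [chi_square_1df(o, e) for o, e in subgroups]
overall = sum(per_subgroup)  # chi-square with len(subgroups) degrees of freedom
print([round(c, 3) for c in per_subgroup])      # [0.4, 0.237, 0.045]
print(round(overall, 3), "on", len(subgroups), "d.f.")
```

Comparing `overall` against the chi-square distribution with the cumulated degrees of freedom then tests the overall result of that type of comparison.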

  1. Computational performance of Free Mesh Method applied to continuum mechanics problems

    Science.gov (United States)

    YAGAWA, Genki

    2011-01-01

    The free mesh method (FMM) is a meshless method intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, or a node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm. The aim of the present paper is to review some unique numerical solutions of fluid and solid mechanics obtained by employing FMM as well as the Enriched Free Mesh Method (EFMM), a newer version of FMM. The applications to fluid mechanics include compressible flow and the sounding mechanism in air-reed instruments; the applications to solid mechanics include automatic remeshing for slow crack growth, the dynamic behavior of solids, and large-scale eigenfrequency analysis of an engine block. PMID:21558753

  2. Review on applied foods and analyzed methods in identification testing of irradiated foods

    International Nuclear Information System (INIS)

    Kim, Kwang Hoon; Lee, Hoo Chul; Park, Sung Hyun; Kim, Soo Jin; Kim, Kwan Soo; Jeong, Il Yun; Lee, Ju Woon; Yook, Hong Sun

    2010-01-01

    Identification methods for irradiated foods have been adopted as official tests by the EU and Codex. The PSL, TL, ESR and GC/MS methods were registered in the Korean food code in 2009 and put into force as a control system for verifying food irradiation labelling. However, the most generally applicable methods, PSL and TL, specify the foods to which they apply according to domestically approved items. Unlike these specifications, the applicable items of the ESR and GC/MS methods include foods whose irradiation is not permitted in Korea. According to recent research data, numerous food groups could be brought under effective legal control by identification testing, and additional regulations permitting their irradiation are needed. In particular, the prohibition of irradiation for meats and seafoods is not harmonized with international standards and acts as a source of trade friction and industrial restriction owing to the unprepared domestic regulation. Hence, extending domestic legal permission for food irradiation can contribute to the development of the related industries, reduce trade friction and enhance international competitiveness

  3. Applying of whole-tree harvesting method; Kokopuujuontomenetelmaen soveltaminen aines- ja energiapuun hankintaan

    Energy Technology Data Exchange (ETDEWEB)

    Vesisenaho, T [VTT Energy, Jyvaeskylae (Finland); Liukkonen, S [VTT Manufacturing Technology, Espoo (Finland)

    1997-12-01

    The objective of this project is to apply the whole-tree harvesting method to Finnish timber harvesting conditions in order to lower the harvesting costs of energy wood and timber in spruce-dominated final cuttings. In Finnish conditions timber harvesting is normally based on the log-length method. Because of small landings and the high share of thinning cuttings, whole-tree skidding methods cannot be utilised extensively. The share of stands that could be harvested with the whole-tree skidding method turned out to be about 10 % of the total harvesting amount of 50 mill. m{sup 3}. The corresponding harvesting potential of energy wood is 0.25 Mtoe. The aim of the structural measurements made in this project was to obtain information about the effect of different hauling methods on the structural response of the tractor, and thus reveal the possible special requirements that whole-tree skidding places on forest tractor design. Altogether 7 strain-gauge-based sensors were mounted on the rear frame structures and drive shafts of the forest tractor. Five strain gauges measured local strains in critical details, and two sensors measured the torque moments of the front and rear bogie drive shafts. The revolution speed of the rear drive shaft was also recorded. Signal time histories, maximum peaks, time-at-level distributions and rainflow distributions were gathered in different hauling modes. From these, maximum values, average stress levels and fatigue life estimates were calculated for each mode, and the different methods were compared from the structural point of view

  4. Brucellosis Prevention Program: Applying “Child to Family Health Education” Method

    Directory of Open Access Journals (Sweden)

    H. Allahverdipour

    2010-04-01

    Introduction & Objective: Pupils have an efficient potential to increase community awareness and promote community health by participating in health education programs. The child-to-family health education program is one of the communicative strategies applied in this field trial study. Because of the high prevalence of Brucellosis in Hamadan province, Iran, the aim of this study was to promote families' knowledge and preventive behaviors regarding Brucellosis in rural areas by using the child-to-family health education method. Materials & Methods: In this nonequivalent control group design study, three rural schools were chosen (one as intervention, two as control). First, the families' knowledge and behavior regarding Brucellosis were determined using a designed questionnaire. The families were then educated through the child-to-family procedure: the students gained the information and were instructed to teach their parents what they had learned. Three months after the last education session, changes in the families' knowledge and behavior regarding Brucellosis were determined and analyzed by paired t-test. Results: The results showed a significant improvement in the mothers' knowledge. Knowledge of the signs of Brucellosis in humans increased from 1.81 to 3.79 (t = -21.64, p < 0.001), and knowledge of the signs of Brucellosis in animals increased from 1.48 to 2.82 (t = -10.60, p < 0.001). Conclusion: The child-to-family health education program is an effective and available method that would be useful in most communities; the students' potential can also be applied effectively in health promotion programs.

  5. Evaluation of cleaning methods applied in home environments after renovation and remodeling activities

    International Nuclear Information System (INIS)

    Yiin, L.-M.; Lu, S.-E.; Sannoh, Sulaiman; Lim, B.S.; Rhoads, G.G.

    2004-01-01

    We conducted a cleaning trial in 40 northern New Jersey homes where home renovation and remodeling (R and R) activities were undertaken. Two cleaning protocols were used in the study: a specific method recommended by the US Department of Housing and Urban Development (HUD) in the 1995 'Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing', using a high-efficiency particulate air (HEPA)-filtered vacuum cleaner and a tri-sodium phosphate solution (TSP); and an alternative method using a household vacuum cleaner and a household detergent. Eligible homes were built before the 1970s with potential lead-based paint and had recent R and R activities without thorough cleaning. The two cleaning protocols were randomly assigned to the participants' homes and followed the HUD-recommended three-step procedure: vacuuming, wet washing, and repeat vacuuming. Wipe sampling was conducted on floor surfaces or windowsills before and after cleaning to evaluate the efficacy. All floor and windowsill data indicated that both methods (TSP/HEPA and non-TSP/non-HEPA) were effective in reducing lead loading on the surfaces (P<0.001). When cleaning was applied to surfaces with initial lead loading above the clearance standards, the reductions were even greater, above 95% for either cleaning method. The mixed-effect model analysis showed no significant difference between the two methods. Baseline lead loading was found to be associated with lead loading reduction significantly on floors (P<0.001) and marginally on windowsills (P=0.077). Such relations differed between the two cleaning methods significantly on floors (P<0.001) and marginally on windowsills (P=0.066), with the TSP/HEPA method being favored for higher baseline levels and the non-TSP/non-HEPA method for lower baseline levels. For the 10 homes with lead abatement, almost all post-cleaning lead loadings were below the standards using either cleaning method. Based on our results, we recommend that

  6. Method for pulse to pulse dose reproducibility applied to electron linear accelerators

    International Nuclear Information System (INIS)

    Ighigeanu, D.; Martin, D.; Oproiu, C.; Cirstea, E.; Craciun, G.

    2002-01-01

    An original method for obtaining programmed single beam shots and pulse trains with programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level and of the irradiation process, as well as in pulse radiolysis studies, single-pulse dose measurement, and research experiments where pulse-to-pulse dose reproducibility is required. The method is applied to the electron linear accelerators ALIN-10 (6.23 MeV, 82 W) and ALID-7 (5.5 MeV, 670 W), built in NILPRP. To implement the method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A, 4 ms) and the magnetron (45 kV, 100 A, 4 ms). An accelerated electron beam exists only where the electron gun and magnetron pulses overlap. The method consists of controlling this pulse overlap so that the beam is delivered in the desired sequence; this control is implemented by a discrete pulse position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator's 'beam start' command, the ATS brings the electron gun and magnetron pulses into coincidence and the linac beam is generated. The pulse-to-pulse absorbed dose variation is thus considerably reduced. Programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in the electron linear accelerators' development for obtaining pulse-to-pulse dose reproducibility: the method
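The overlap principle described above lends itself to a small timing sketch: a beam pulse is produced only when a gun trigger and a magnetron trigger coincide, so shifting (position-modulating) the gun pulse out of coincidence suppresses individual beam pulses without stopping either subsystem. The timing values and the simple coincidence model below are assumptions for illustration, not the accelerator's actual control logic.

```python
# Hypothetical coincidence model for gun/magnetron trigger pulses.
PULSE_US = 4.0  # assumed pulse duration, microseconds

def overlaps(t_gun, t_magnetron, width=PULSE_US):
    """True if the two trigger pulses overlap, i.e. a beam pulse is produced."""
    return abs(t_gun - t_magnetron) < width

# Magnetron fires every 4000 us (250 pulses/s); the gun trigger is shifted
# out of coincidence for the pulses we want to suppress.
magnetron = [0.0, 4000.0, 8000.0, 12000.0]
wanted    = [True, False, True, True]       # desired beam sequence
gun = [t if keep else t + 10.0 for t, keep in zip(magnetron, wanted)]

beam = [overlaps(g, m) for g, m in zip(gun, magnetron)]
print(beam)  # [True, False, True, True]
```

A 10 us shift, larger than the assumed 4 us pulse width, is enough to break the coincidence, which is the essence of the discrete pulse position modulation described in the abstract.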

  7. Current Methods Applied to Biomaterials - Characterization Approaches, Safety Assessment and Biological International Standards.

    Science.gov (United States)

    Oliveira, Justine P R; Ortiz, H Ivan Melendez; Bucio, Emilio; Alves, Patricia Terra; Lima, Mayara Ingrid Sousa; Goulart, Luiz Ricardo; Mathor, Monica B; Varca, Gustavo H C; Lugao, Ademar B

    2018-04-10

    Safety and biocompatibility assessment of biomaterials are themes of constant concern as advanced materials enter the market and products manufactured by new techniques emerge. Within this context, this review provides an up-to-date approach to current methods for the characterization and safety assessment of biomaterials and biomedical devices from a physical-chemical to a biological perspective, including a description of the alternative methods in accordance with current and established international standards.

  8. State Token Petri Net modeling method for formal verification of computerized procedure including operator's interruptions of procedure execution flow

    International Nuclear Information System (INIS)

    Kim, Yun Goo; Seong, Poong Hyun

    2012-01-01

    The Computerized Procedure System (CPS) is one of the primary operating support systems in the digital Main Control Room. The CPS displays the procedure on the computer screen in the form of a flow chart, together with plant operating information alongside the procedure instructions. It also supports operator decision making by providing a system decision. A procedure flow should be correct and reliable, as an error could lead to operator misjudgment and inadequate control. In this paper we present a Petri-net-based model of the CPS that enables formal verification. The proposed State Token Petri Nets (STPN) also support modeling of a procedure flow that is subject to various operator interruptions, according to the plant condition. STPN modeling is compared with Coloured Petri nets when both are applied to an emergency operating computerized procedure. A program for converting a Computerized Procedure (CP) to STPN has also been developed. The formal verification and validation of CPs with STPN increases the safety of a nuclear power plant and provides the digital quality assurance means that are needed as the role and function of the CPS increase.
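For readers unfamiliar with the formalism underlying STPN, the basic Petri-net firing rule can be shown in a few lines: places hold tokens, a transition is enabled when every input place holds a token, and firing moves tokens from inputs to outputs. This is a minimal generic sketch, not the authors' STPN; the place and transition names are invented stand-ins for a procedure step.

```python
# Minimal Petri-net firing sketch. One hypothetical procedure step,
# "start feedwater after the step is checked and the operator acknowledges",
# is modeled as a single transition.
marking = {"step_checked": 1, "operator_ack": 1, "feedwater_started": 0}

transitions = {
    "start_feedwater": {
        "inputs": ["step_checked", "operator_ack"],
        "outputs": ["feedwater_started"],
    }
}

def enabled(t, m):
    """A transition is enabled when every input place holds a token."""
    return all(m[p] > 0 for p in transitions[t]["inputs"])

def fire(t, m):
    """Fire transition t: consume one token per input, produce one per output."""
    if not enabled(t, m):
        raise RuntimeError(f"transition {t} is not enabled")
    for p in transitions[t]["inputs"]:
        m[p] -= 1
    for p in transitions[t]["outputs"]:
        m[p] += 1
    return m

fire("start_feedwater", marking)
print(marking)  # {'step_checked': 0, 'operator_ack': 0, 'feedwater_started': 1}
```

Formal verification of a procedure flow then amounts to exploring which markings are reachable from the initial one, including markings produced by operator interruptions.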

  9. Postgraduate Education in Quality Improvement Methods: Initial Results of the Fellows' Applied Quality Training (FAQT) Curriculum.

    Science.gov (United States)

    Winchester, David E; Burkart, Thomas A; Choi, Calvin Y; McKillop, Matthew S; Beyth, Rebecca J; Dahm, Phillipp

    2016-06-01

    Training in quality improvement (QI) is a pillar of the next accreditation system of the Accreditation Council for Graduate Medical Education and a growing expectation of physicians for maintenance of certification. Despite this, many postgraduate medical trainees are not receiving training in QI methods. We created the Fellows' Applied Quality Training (FAQT) curriculum for cardiology fellows, using both didactic and applied components, with the goal of increasing confidence to participate in future QI projects. Fellows completed didactic training from the Institute for Healthcare Improvement's Open School and then designed and completed a project to improve quality of care or patient safety. Self-assessments were completed by the fellows before, during, and after the first year of the curriculum. The primary outcome for our curriculum was the median score reported by the fellows regarding their self-confidence to complete QI activities. Self-assessments were completed by 23 fellows. The majority of fellows (15 of 23, 65.2%) reported no prior formal QI training. The median score on the baseline self-assessment was 3.0 (range, 1.85-4), which significantly increased to 3.27 (range, 2.23-4; P = 0.004) on the final assessment. The distribution of scores reported by the fellows indicates that 30% were only slightly confident at conducting QI activities on their own, which was reduced to 5% after completing the FAQT curriculum. An interim assessment conducted after the fellows had completed only the didactic training showed median scores that did not differ from baseline (median, 3.0; P = 0.51). After completion of the FAQT, cardiology fellows reported higher self-confidence to complete QI activities. The increase in self-confidence seemed to be limited to the applied component of the curriculum, with no significant change after the didactic component.

  10. Applying system engineering methods to site characterization research for nuclear waste repositories

    International Nuclear Information System (INIS)

    Woods, T.W.

    1985-01-01

    Nuclear research and engineering projects can benefit from the use of system engineering methods. This paper is a brief overview illustrating how system engineering methods could be applied in structuring a site characterization effort for a candidate nuclear waste repository. System engineering is simply an orderly process that has been widely used to transform a recognized need into a fully defined system. Such a system may be physical or abstract, natural or man-made, hardware or procedural, as is appropriate to the system's need or objective. It is a way of mentally visualizing all the constituent elements and their relationships necessary to fulfill a need, and doing so in compliance with all constraining requirements attendant to that need. Such a system approach provides completeness, order, clarity, and direction. Admittedly, system engineering can be burdensome and inappropriate for project objectives having simple and familiar solutions that are easily held and controlled mentally. However, some type of documented and structured approach is needed for objectives that dictate extensive, unique, or complex programs, and/or the creation of state-of-the-art machines and facilities. System engineering methods have been used extensively and successfully in these cases. The scientific method has served well in ordering countless technical undertakings that address a specific question. Similarly, conventional construction and engineering job methods will continue to be quite adequate for organizing routine building projects. Nuclear waste repository site characterization projects, however, involve multiple complex research questions and regulatory requirements that interface with each other and with advanced engineering and subsurface construction techniques. There is little doubt that system engineering is an appropriate orchestrating process to structure such diverse elements into a cohesive, well-defined project

  11. A Precise Method for Cloth Configuration Parsing Applied to Single-Arm Flattening

    Directory of Open Access Journals (Sweden)

    Li Sun

    2016-04-01

    In this paper, we investigate the contribution that visual perception affords to a robotic manipulation task in which a crumpled garment is flattened by eliminating visually detected wrinkles. In order to explore and validate visually guided clothing manipulation in a repeatable and controlled environment, we have developed a hand-eye interactive virtual robot manipulation system that incorporates a clothing simulator to close the effector-garment-visual sensing interaction loop. We present the technical details and compare the performance of two different methods for detecting, representing and interpreting wrinkles within clothing surfaces captured in high-resolution depth maps. The first method relies upon a clustering-based approach for localizing and parametrizing wrinkles, while the second adopts a more advanced geometry-based approach in which shape-topology analysis underpins the identification of the cloth configuration (i.e., maps wrinkles). Having interpreted the state of the cloth configuration by means of either of these methods, a heuristic-based flattening strategy is then executed to infer the appropriate forces, their directions and the gripper contact locations that must be applied to the cloth in order to flatten the perceived wrinkles. A greedy approach, which attempts to flatten the largest detected wrinkle in each perception-iteration cycle, has been successfully adopted in this work. We present the results of our heuristic-based flattening methodology relying upon clustering-based and geometry-based features, respectively. Our experiments indicate that geometry-based features have the potential to provide a greater degree of clothing configuration understanding and, as a consequence, improve flattening performance. The results of experiments using a real robot (as opposed to a simulated robot) also confirm our proposition that a more effective visual perception system can advance the performance of cloth
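The greedy strategy in the abstract above — flatten the largest detected wrinkle in each perception-action cycle — reduces to a simple selection loop once wrinkles have been parametrized. The sketch below is an assumption-laden toy: wrinkles are plain `(id, size)` records and each flattening action is assumed to succeed, whereas the real system derives wrinkles from depth-map clustering or shape-topology analysis and re-perceives the cloth after every action.

```python
# Toy greedy flattening loop over parametrized wrinkles.
def greedy_flatten(wrinkles, max_cycles=10):
    """Repeatedly select and remove the largest wrinkle; return the order
    in which wrinkles were flattened."""
    remaining = list(wrinkles)
    order = []
    for _ in range(max_cycles):
        if not remaining:
            break
        largest = max(remaining, key=lambda w: w["size"])
        order.append(largest["id"])
        remaining.remove(largest)  # assume the flattening action succeeds
    return order

wrinkles = [
    {"id": "w1", "size": 3.2},
    {"id": "w2", "size": 7.5},
    {"id": "w3", "size": 5.1},
]
print(greedy_flatten(wrinkles))  # ['w2', 'w3', 'w1']
```

In the actual pipeline the `wrinkles` list would be rebuilt from a fresh depth map after each cycle, since flattening one wrinkle can change the others.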

  12. Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly

    Directory of Open Access Journals (Sweden)

    Liana Chaves Mendes-Santos

    The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. The instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways, and scoring procedures also vary. OBJECTIVE: The aims of this study were to analyze the performance of elderly subjects on the CDT and to evaluate the inter-rater reliability of the CDT scored using a specific algorithm method adapted from Sunderland et al. (1989). METHODS: We analyzed the CDT of 100 cognitively normal elderly subjects aged 60 years or older. The CDT ("free-drawn") and Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. RESULTS AND CONCLUSION: A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The specific CDT algorithm method used had high inter-rater reliability (p<0.01), and the mean score ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria that are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging.

  13. CO2 (carbon dioxide) fixation by applying new chemical absorption-precipitation methods

    International Nuclear Information System (INIS)

    Park, Sangwon; Lee, Min-Gu; Park, Jinwon

    2013-01-01

    CO₂ (carbon dioxide) is the most common greenhouse gas and most of it is emitted by human activities. The methods for CO₂ emission reduction can be divided into physical, chemical, and biochemical methods. Among the physical and chemical methods, CCS (carbon capture and storage) is a well-known reduction technology. However, this method has many disadvantages, including the required storage area. In general, CCS requires capture and storage processes. In this study, we propose a method for reusing the absorbed CO₂ either in nature or in industry. The emitted CO₂ was converted into CO₃²⁻ using a conversion solution, and then made into a carbonate by combining the conversion solution with metal ions at normal temperature and pressure. The resulting carbonate was analyzed using FT-IR (Fourier transform infrared spectroscopy) and XRD (X-ray diffraction). We verified the formation of a solid consisting of calcite and vaterite. In addition, the conversion solution that was used could be reused in the same process of CCS technology. Our study demonstrates a successful method of reducing and reusing emitted CO₂, thereby making CO₂ a potential future resource. - Highlights: • This study focused on a new CO₂ fixation process method. • In CCS technology, the desorption process requires high thermal energy consumption. • This new method does not require a desorption process because CO₂ fixation is accomplished through CaCO₃ crystallization. • A new absorption method is possible instead of the conventional absorption-desorption process. • This is not only a rapid reaction for fixing CO₂, but also economically feasible.

  14. An acceleration technique for the Gauss-Seidel method applied to symmetric linear systems

    Directory of Open Access Journals (Sweden)

    Jesús Cajigas

    2014-06-01

    Full Text Available A preconditioning technique to improve the convergence of the Gauss-Seidel method applied to symmetric linear systems while preserving symmetry is proposed. The preconditioner is of the form I + K and can be applied an arbitrary number of times. It is shown that under certain conditions the application of the preconditioner for a finite number of steps reduces the matrix to a diagonal. A series of numerical experiments using matrices from spatial discretizations of partial differential equations demonstrates that both versions of the preconditioner, point and block, exhibit lower iteration counts than their non-symmetric counterparts.
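
    The baseline iteration being accelerated can be sketched as follows. This is a minimal plain Gauss-Seidel solver in Python on a small symmetric positive definite test system; the paper's I + K preconditioner itself is not reproduced here, and the matrix and tolerances are illustrative only.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Plain Gauss-Seidel: sweep through the rows, using already-updated
    entries of x within the current sweep."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

# symmetric positive definite (tridiagonal, diagonally dominant) test system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = gauss_seidel(A, b)
```

    For symmetric positive definite matrices like this one, the sweep is guaranteed to converge; the preconditioner described in the record aims to reduce the number of sweeps required.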

  15. A method of applying two-pump system in automatic transmissions for energy conservation

    Directory of Open Access Journals (Sweden)

    Peng Dong

    2015-06-01

    Full Text Available In order to improve hydraulic efficiency, modern automatic transmissions tend to apply an electric oil pump in their hydraulic system. The electric oil pump can support the mechanical oil pump for cooling, lubrication, and maintaining the line pressure at low engine speeds. In addition, the start-stop function can be realized by means of the electric oil pump; thus, fuel consumption can be further reduced. This article proposes a method of applying a two-pump system (one electric oil pump and one mechanical oil pump) in automatic transmissions based on forward driving simulation. A mathematical model for calculating the transmission power loss is developed. The power loss transfers to heat, which requires oil flow for cooling and lubrication. A leakage model is developed to calculate the leakage of the hydraulic system. In order to satisfy the flow requirement, a flow-based control strategy for the electric oil pump is developed. Simulation results for different driving cycles show that there is an optimal combination of electric oil pump size and mechanical oil pump size with respect to energy conservation. In addition, the two-pump system can also satisfy the requirement of the start-stop function. This research is extremely valuable for the forward design of a two-pump system in automatic transmissions with respect to energy conservation and the start-stop function.
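
    The flow-based control idea described above can be sketched in a few lines. This is a hypothetical simplification, not the paper's actual controller: the electric oil pump (EOP) is commanded to supply whatever part of the required flow (cooling/lubrication demand plus hydraulic leakage) the engine-driven mechanical pump cannot deliver. All flow values and units are illustrative.

```python
def eop_flow_command(q_cooling, q_leakage, q_mech_pump):
    """Flow-based control sketch: the EOP covers the shortfall between
    the required flow and what the mechanical pump delivers."""
    q_required = q_cooling + q_leakage
    return max(0.0, q_required - q_mech_pump)

# at idle the mechanical pump delivers little flow, so the EOP runs
idle_cmd = eop_flow_command(q_cooling=6.0, q_leakage=2.0, q_mech_pump=3.0)    # L/min
# at cruise the mechanical pump alone meets demand, so the EOP is off
cruise_cmd = eop_flow_command(q_cooling=6.0, q_leakage=2.0, q_mech_pump=12.0)
```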

  16. IAEA-ASSET's root cause analysis method applied to sodium leakage incident at Monju

    International Nuclear Information System (INIS)

    Watanabe, Norio; Hirano, Masashi

    1997-08-01

    The present study applied the ASSET (Analysis and Screening of Safety Events Team) methodology (this method identifies occurrences such as component failures and operator errors, identifies their respective direct/root causes and determines corrective actions) to the analysis of the sodium leakage incident at Monju, based mainly on the reports published by the Science and Technology Agency, aiming at systematic identification of direct/root causes and corrective actions, and discussed the effectiveness and problems of the ASSET methodology. The results revealed the following seven occurrences and showed the direct/root causes and contributing factors for the individual occurrences: failure of the thermometer well tube, delayed reactor manual trip, inadequate continuous monitoring of leakage, misjudgment of the leak rate, non-required operator action (turbine trip), retarded emergency sodium drainage, and retarded securing of the ventilation system. Most of the occurrences stemmed from deficiencies in emergency operating procedures (EOPs), which were mainly caused by defects in the EOP preparation process and operator training programs. The corrective actions already proposed in the published reports were reviewed, identifying issues to be further studied. Possible corrective actions were discussed for these issues. The present study also demonstrated the effectiveness of the ASSET methodology and pointed out some problems, for example in delineating causal relations among occurrences, when applying it to the detailed and systematic analysis of event direct/root causes and the determination of concrete measures. (J.P.N.)

  17. The Application of Intensive Longitudinal Methods to Investigate Change: Stimulating the Field of Applied Family Research.

    Science.gov (United States)

    Bamberger, Katharine T

    2016-03-01

    The use of intensive longitudinal methods (ILM), rapid in situ assessment at micro timescales, can be overlaid on RCTs and other study designs in applied family research. Particularly when done as part of a multiple timescale design, in bursts over macro timescales, ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM allows opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members beginning with the first intervention session. This paper discusses the need and rationale for applying ILM to family intervention evaluation, new research questions that can be addressed with ILM, and example research using ILM in the related fields of basic family research and the evaluation of individual-based interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for the application of ILM.

  18. IAEA-ASSET's root cause analysis method applied to sodium leakage incident at Monju

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Norio; Hirano, Masashi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-08-01

    The present study applied the ASSET (Analysis and Screening of Safety Events Team) methodology (this method identifies occurrences such as component failures and operator errors, identifies their respective direct/root causes and determines corrective actions) to the analysis of the sodium leakage incident at Monju, based mainly on the reports published by the Science and Technology Agency, aiming at systematic identification of direct/root causes and corrective actions, and discussed the effectiveness and problems of the ASSET methodology. The results revealed the following seven occurrences and showed the direct/root causes and contributing factors for the individual occurrences: failure of the thermometer well tube, delayed reactor manual trip, inadequate continuous monitoring of leakage, misjudgment of the leak rate, non-required operator action (turbine trip), retarded emergency sodium drainage, and retarded securing of the ventilation system. Most of the occurrences stemmed from deficiencies in emergency operating procedures (EOPs), which were mainly caused by defects in the EOP preparation process and operator training programs. The corrective actions already proposed in the published reports were reviewed, identifying issues to be further studied. Possible corrective actions were discussed for these issues. The present study also demonstrated the effectiveness of the ASSET methodology and pointed out some problems, for example in delineating causal relations among occurrences, when applying it to the detailed and systematic analysis of event direct/root causes and the determination of concrete measures. (J.P.N.)

  19. A Study on the quantification of hydration and the strength development mechanism of cementitious materials including amorphous phases by using XRD/Rietveld method

    International Nuclear Information System (INIS)

    Yamada, Kazuo; Hoshino, Seiichi; Hirao, Hiroshi; Yamashita, Hiroki

    2008-01-01

    The X-ray diffraction (XRD)/Rietveld method was applied to measure the phase composition of cement. Quantitative analysis of the progress of hydration was accomplished within a maximum error of about 2-3%, in spite of the presence of amorphous materials such as blast furnace slag, fly ash, silica fume and C-S-H. The influence of limestone fine powder admixture on compressive strength was studied through hydration analysis by the Rietveld method. Two stages were observed in the strength development mechanism of cement: promotion of C₃S hydration in the early stage, and filling of cavities by carbonate hydrate over the longer term. The use of various admixture materials is valuable for the formation of a resource-recycling society and for improving the durability of concrete. (author)

  20. [Influence of Sex and Age on Contrast Sensitivity Subject to the Applied Method].

    Science.gov (United States)

    Darius, Sabine; Bergmann, Lisa; Blaschke, Saskia; Böckelmann, Irina

    2018-02-01

    The aim of the study was to detect gender and age differences in both photopic and mesopic contrast sensitivity with different methods in relation to the German driver's license regulations (Fahrerlaubnisverordnung; FeV). We examined 134 healthy volunteers (53 men, 81 women) aged between 18 and 76 years, divided into two groups (AG I Mars charts under standardized illumination were applied for photopic contrast sensitivity. We could not find any gender differences. When evaluating age, there were no differences between the two groups either for the Mars charts or in the Rodatest; in all other tests, the younger volunteers achieved significantly better results. For contrast vision, there exist age-adapted cut-off values. Concerning the driving safety of traffic participants, sufficient photopic and mesopic contrast vision should be the focus, independent of age. There is therefore a need to reconsider the age-adapted cut-off values.

  1. Study of different ultrasonic focusing methods applied to non destructive testing

    International Nuclear Information System (INIS)

    El Amrani, M.

    1995-01-01

    The work presented in this thesis concerns the study of different ultrasonic focusing techniques applied to Nondestructive Testing (mechanical focusing and electronic focusing) and compares their capabilities. We have developed a model to predict the ultrasonic field radiated into a solid by water-coupled transducers. The model is based upon the Rayleigh integral formulation, modified to take into account the refraction at the liquid-solid interface. The model has been validated by numerous experiments in various configurations. Running this model and the associated software, we have developed new methods to optimize focused transducers and studied the characteristics of the beam generated by transducers using various focusing techniques. (author). 120 refs., 95 figs., 4 appends

  2. Numerical method of applying shadow theory to all regions of multilayered dielectric gratings in conical mounting.

    Science.gov (United States)

    Wakabayashi, Hideaki; Asai, Masamitsu; Matsumoto, Keiji; Yamakita, Jiro

    2016-11-01

    Nakayama's shadow theory first discussed the diffraction by a perfectly conducting grating in a planar mounting. In the theory, a new formulation by use of a scattering factor was proposed. This paper focuses on the middle regions of a multilayered dielectric grating placed in conical mounting. Applying the shadow theory to the matrix eigenvalues method, we compose new transformation and improved propagation matrices of the shadow theory for conical mounting. Using these matrices and scattering factors, being the basic quantity of diffraction amplitudes, we formulate a new description of three-dimensional scattering fields which is available even for cases where the eigenvalues are degenerate in any region. Some numerical examples are given for cases where the eigenvalues are degenerate in the middle regions.

  3. Applying RP-FDM Technology to Produce Prototype Castings Using the Investment Casting Method

    Directory of Open Access Journals (Sweden)

    M. Macků

    2012-09-01

    Full Text Available The research focused on the production of prototype castings, which is mapped out starting from the drawing documentation up to the production of the casting itself. The FDM method was applied for the production of the 3D pattern. Its main objective was to find out what dimensional changes happened during individual production stages, starting from the 3D pattern printing through a silicon mould production, wax patterns casting, making shells, melting out wax from shells and drying, up to the production of the final casting itself. Five measurements of determined dimensions were made during the production, which were processed and evaluated mathematically. A determination of shrinkage and a proposal of measures to maintain the dimensional stability of the final casting so as to meet requirements specified by a customer were the results.

  4. Simulation by the method of inverse cumulative distribution function applied in optimising of foundry plant production

    Directory of Open Access Journals (Sweden)

    J. Szymszal

    2009-01-01

    Full Text Available The study discusses application of computer simulation based on the method of inverse cumulative distribution function. The simulation refers to an elementary static case, which can also be solved by physical experiment, consisting mainly in observations of foundry production in a selected foundry plant. For the simulation and forecasting of foundry production quality in a selected cast iron grade, a random number generator of an Excel calculation sheet was chosen. Very wide potentials of this type of simulation when applied to the evaluation of foundry production quality were demonstrated, using a number generator of even distribution for generation of a variable of an arbitrary distribution, especially of a preset empirical distribution, without any need of adjusting to this variable the smooth theoretical distributions.
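
    The inverse-CDF idea the record describes, drawing uniform numbers and mapping them through the cumulative distribution of a preset empirical distribution, can be sketched as follows. The defect-class distribution below is hypothetical, not the paper's data; NumPy stands in for the Excel random number generator.

```python
import numpy as np

def sample_empirical(values, probs, n, seed=None):
    """Inverse-CDF sampling of a discrete empirical distribution:
    draw u ~ U(0,1) and pick the first value whose cumulative
    probability exceeds u."""
    rng = np.random.default_rng(seed)
    cdf = np.cumsum(probs)          # cumulative distribution function
    u = rng.random(n)               # uniform ("even") random numbers
    idx = np.searchsorted(cdf, u, side="right")
    return np.asarray(values)[idx]

# hypothetical defect counts per casting observed in a foundry
values = [0, 1, 2, 3]
probs = [0.70, 0.20, 0.08, 0.02]
sample = sample_empirical(values, probs, 100_000, seed=42)
```

    With enough draws, the empirical frequencies of the sample reproduce the preset probabilities, which is exactly the property the simulation relies on.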

  5. Comparison of gradient methods for gain tuning of a PD controller applied on a quadrotor system

    Science.gov (United States)

    Kim, Jinho; Wilkerson, Stephen A.; Gadsden, S. Andrew

    2016-05-01

    Many mechanical and electrical systems have utilized the proportional-integral-derivative (PID) control strategy. The concept of PID control is a classical approach but it is easy to implement and yields a very good tracking performance. Unmanned aerial vehicles (UAVs) are currently experiencing a significant growth in popularity. Due to the advantages of PID controllers, UAVs are implementing PID controllers for improved stability and performance. An important consideration for the system is the selection of PID gain values in order to achieve a safe flight and successful mission. There are a number of different algorithms that can be used for real-time tuning of gains. This paper presents two algorithms for gain tuning, and are based on the method of steepest descent and Newton's minimization of an objective function. This paper compares the results of applying these two gain tuning algorithms in conjunction with a PD controller on a quadrotor system.
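
    The steepest-descent variant of the gain tuning described above can be sketched as follows. The plant model (a double integrator as a crude stand-in for quadrotor altitude dynamics), the integrated-squared-error objective, the initial gains, and the step sizes are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

def tracking_cost(gains, dt=0.01, t_end=5.0, target=1.0):
    """Integrated squared tracking error of a PD-controlled double
    integrator under explicit Euler integration."""
    kp, kd = gains
    x = v = cost = 0.0
    for _ in range(int(t_end / dt)):
        u = kp * (target - x) - kd * v   # PD control law
        v += u * dt
        x += v * dt
        cost += (target - x) ** 2 * dt
    return cost

def steepest_descent(J, g0, lr=0.02, eps=1e-4, steps=60):
    """Tune the gain vector by finite-difference steepest descent
    on the objective J."""
    g = np.asarray(g0, dtype=float)
    for _ in range(steps):
        grad = np.array([(J(g + eps * e) - J(g - eps * e)) / (2 * eps)
                         for e in np.eye(len(g))])
        g -= lr * grad
    return g

g0 = np.array([2.0, 1.0])            # initial (kp, kd), illustrative
g_opt = steepest_descent(tracking_cost, g0)
```

    The Newton variant the paper compares against would replace the gradient step with a step scaled by the inverse Hessian of the same objective.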

  6. Adding randomness controlling parameters in GRASP method applied in school timetabling problem

    Directory of Open Access Journals (Sweden)

    Renato Santos Pereira

    2017-09-01

    Full Text Available This paper studies the influence of randomness controlling parameters (RCP) in the first stage of the GRASP method applied to the graph coloring problem, specifically school timetabling problems in a public high school. The algorithm (with the inclusion of RCP) was based on critical variables identified through focus groups, whose weights can be adjusted by the user in order to meet the institutional needs. The results of the computational experiment, with eleven years of data (66 observations) from the same high school, show that the inclusion of RCP leads to significantly lowering the distance between initial solutions and local minima. The acceptance and the use of the solutions found allow us to conclude that the modified GRASP, as constructed, can make a positive contribution to the timetabling problem of the school in question.
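
    A standard way a randomness-controlling parameter enters the first (construction) stage of GRASP is through a restricted candidate list (RCL): alpha = 0 gives a purely greedy choice and alpha = 1 a purely random one. The sketch below applies this to graph coloring of a toy conflict graph; the greedy criterion, graph, and parameter value are illustrative assumptions, not the paper's weighted-variable scheme.

```python
import random

def grasp_coloring(adj, alpha=0.3, seed=0):
    """One GRASP construction phase for graph coloring.
    adj maps each vertex to the set of conflicting vertices;
    alpha in [0, 1] controls the randomness of each choice."""
    rng = random.Random(seed)
    colors = {}
    uncolored = set(adj)
    while uncolored:
        # greedy criterion: number of still-uncolored neighbours
        score = {v: sum(u in uncolored for u in adj[v]) for v in uncolored}
        s_max, s_min = max(score.values()), min(score.values())
        threshold = s_max - alpha * (s_max - s_min)
        rcl = [v for v in uncolored if score[v] >= threshold]
        v = rng.choice(rcl)                       # randomized selection
        used = {colors[u] for u in adj[v] if u in colors}
        colors[v] = min(c for c in range(len(adj)) if c not in used)
        uncolored.remove(v)
    return colors

# toy timetabling conflict graph: an edge means two classes clash
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
coloring = grasp_coloring(adj)
```

    The local-search stage of GRASP would then try to improve each constructed coloring before the next restart.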

  7. Applied methods and techniques for mechatronic systems modelling, identification and control

    CERN Document Server

    Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya

    2014-01-01

    Applied Methods and Techniques for Mechatronic Systems brings together the relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. The clear structure of each paper, background - motivation - quantitative development (equations) - case studies/illustration/tutorial (curve, table, etc.) is also helpful. It is mainly aimed at graduate students, professors and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...

  8. Applied methods for mitigation of damage by stress corrosion in BWR type reactors

    International Nuclear Information System (INIS)

    Hernandez C, R.; Diaz S, A.; Gachuz M, M.; Arganis J, C.

    1998-01-01

    Boiling Water Reactors (BWRs) have presented stress corrosion problems, mainly in components and pipes of the primary system, with negative impacts on the performance of power generation plants as well as increased radiation exposure of the personnel involved. This problem has driven the development of research programs aimed at finding alternative solutions to control the phenomenon. Among the most relevant results, control of the reactor water chemistry stands out, particularly of impurity concentrations and oxidizing radiolysis products, as well as care in materials selection and the reduction of stress levels. The present work presents the methods that can be applied to diminish stress corrosion problems in BWR reactors. (Author)

  9. An implicit LU scheme for the Euler equations applied to arbitrary cascades. [new method of factoring

    Science.gov (United States)

    Buratynski, E. K.; Caughey, D. A.

    1984-01-01

    An implicit scheme for solving the Euler equations is derived and demonstrated. The alternating-direction implicit (ADI) technique is modified, using two implicit-operator factors corresponding to lower-block-diagonal (L) or upper-block-diagonal (U) algebraic systems which can be easily inverted. The resulting LU scheme is implemented in finite-volume mode and applied to 2D subsonic and transonic cascade flows with differing degrees of geometric complexity. The results are presented graphically and found to be in good agreement with those of other numerical and analytical approaches. The LU method is also 2.0-3.4 times faster than ADI, suggesting its value in calculating 3D problems.

  10. Structural characterization of complex systems by applying a combination of scattering and spectroscopic methods

    International Nuclear Information System (INIS)

    Klose, G.

    1999-01-01

    Lyotropic mesophases possess lattice dimensions of the order of magnitude of the length of their molecules. Consequently, the first Bragg reflections of such systems appear at small scattering angles (small angle scattering). A combination of scattering and NMR methods was applied to study structural properties of POPC/C₁₂Eₙ mixtures. Generally, the ranges of existence of the liquid crystalline lamellar phase, the dimension of the unit cell of the lamellae and important structural parameters of the lipid and surfactant molecules in the mixed bilayers were determined. With that, the POPC/C₁₂E₄ bilayer represents one of the best structurally characterized mixed model membranes. It is a good starting system for studying the interrelation with other, e.g. dynamic or thermodynamic, properties. (K.A.)

  11. Applying RP-FDM Technology to Produce Prototype Castings Using the Investment Casting Method

    Directory of Open Access Journals (Sweden)

    Macků M.

    2012-09-01

    Full Text Available The research focused on the production of prototype castings, which is mapped out starting from the drawing documentation up to the production of the casting itself. The FDM method was applied for the production of the 3D pattern. Its main objective was to find out what dimensional changes happened during individual production stages, starting from the 3D pattern printing through a silicon mould production, wax patterns casting, making shells, melting out wax from shells and drying, up to the production of the final casting itself. Five measurements of determined dimensions were made during the production, which were processed and evaluated mathematically. A determination of shrinkage and a proposal of measures to maintain the dimensional stability of the final casting so as to meet requirements specified by a customer were the results.

  12. Assessment of Pansharpening Methods Applied to WorldView-2 Imagery Fusion

    Directory of Open Access Journals (Sweden)

    Hui Li

    2017-01-01

    Full Text Available Since WorldView-2 (WV-2) images are widely used in various fields, there is a high demand for high-quality pansharpened WV-2 images for different application purposes. With respect to the novelty of the WV-2 multispectral (MS) and panchromatic (PAN) bands, the performances of eight state-of-the-art pansharpening methods for WV-2 imagery, including six datasets from three WV-2 scenes, were assessed in this study using both quality indices and information indices, along with visual inspection. The normalized difference vegetation index, normalized difference water index, and morphological building index, which are widely used in applications related to land cover classification and the extraction of vegetation areas, buildings, and water bodies, were employed in this work to evaluate the performance of different pansharpening methods in terms of information presentation ability. The experimental results show that the Haze- and Ratio-based method, adaptive Gram-Schmidt, and the Generalized Laplacian pyramid (GLP) methods using the enhanced spectral distortion minimal model and the enhanced context-based decision model are good choices for producing fused WV-2 images used for image interpretation and the extraction of urban buildings. The two GLP-based methods are better choices than the other methods if the fused images will be used for applications related to vegetation and water bodies.
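
    The two band-ratio indices used as information indices in the assessment are simple normalized differences. A minimal sketch, with toy reflectance values standing in for actual WV-2 band data:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index from NIR and red bands."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

def ndwi(green, nir, eps=1e-12):
    """Normalized difference water index (McFeeters) from green and NIR."""
    green, nir = np.asarray(green, float), np.asarray(nir, float)
    return (green - nir) / (green + nir + eps)

# toy reflectance pixels: one vegetated, one water
veg = ndvi(nir=np.array([0.50]), red=np.array([0.08]))
water = ndwi(green=np.array([0.30]), nir=np.array([0.05]))
```

    A pansharpening method that distorts the MS bands spectrally will shift these index maps, which is why they are useful for judging how well each fused product preserves information.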

  13. Evaluation and Comparison of Multiple Test Methods, Including Real-time PCR, for Legionella Detection in Clinical Specimens

    Science.gov (United States)

    Peci, Adriana; Winter, Anne-Luise; Gubbay, Jonathan B.

    2016-01-01

    Legionella is a Gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires' disease, a more severe illness. We aimed to compare the performance of urine antigen, culture, and polymerase chain reaction (PCR) test methods and to determine if sputum is an acceptable alternative to the use of more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at Public Health Ontario Laboratories from 1st January, 2010 to 30th April, 2014, as part of routine clinical testing. We found the sensitivity of the urinary antigen test (UAT) compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8%, and negative predictive value (NPV) 98.5%. Sensitivity of UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7%, and NPV 98.1%. Of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yield similar results regardless of testing method (Fisher exact p-values = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine when BAL submission is not practical from patients being tested for Legionella. PMID:27630979
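
    The four accuracy measures reported above all come from the same 2x2 contingency table. A minimal sketch of the definitions, with hypothetical counts (NOT the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy measures from true/false
    positive and negative counts against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),   # positives correctly detected
        "specificity": tn / (tn + fp),   # negatives correctly cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# hypothetical counts for illustration only
m = diagnostic_metrics(tp=66, fp=10, fn=14, tn=400)
```

    Note that PPV and NPV, unlike sensitivity and specificity, depend on how common the disease is in the tested population, which is one reason the UAT's PPV against culture (63.8%) is much lower than its NPV (98.5%).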

  14. Evaluation and comparison of multiple test methods, including real-time PCR, for Legionella detection in clinical specimens.

    Directory of Open Access Journals (Sweden)

    Adriana Peci

    2016-08-01

    Full Text Available Legionella is a Gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires' disease, a more severe illness. We aimed to compare the performance of urine antigen, culture and PCR test methods and to determine if sputum is an alternative to the use of more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at PHOL from January 1, 2010 to April 30, 2014, as part of routine clinical testing. We found sensitivity of UAT compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8% and negative predictive value (NPV) 98.5%. Sensitivity of UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7% and NPV 98.1%. Of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yield similar results regardless of testing method (Fisher exact p-values = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine when BAL submission is not practical from patients being tested for Legionella.

  15. Exact traveling wave solutions of fractional order Boussinesq-like equations by applying Exp-function method

    Science.gov (United States)

    Rahmatullah; Ellahi, Rahmat; Mohyud-Din, Syed Tauseef; Khan, Umar

    2018-03-01

    We have computed new exact traveling wave solutions, including complex solutions, of fractional order Boussinesq-like equations occurring in the physical sciences and engineering, by applying the Exp-function method. The method is blended with fractional complex transformation and the modified Riemann-Liouville fractional order operator. Our obtained solutions are verified by substituting them back into their corresponding equations. To the best of our knowledge, no other technique has been reported to cope with the said fractional order nonlinear problems combined with such a variety of exact solutions. Graphically, the fractional order solution curves are shown to be strongly related to each other and, most importantly, tend to fixate on their integer order solution curve. Our solutions comprise high frequencies and very small amplitude of the wave responses.

  16. Non perturbative method for radiative corrections applied to lepton-proton scattering

    International Nuclear Information System (INIS)

    Chahine, C.

    1979-01-01

    We present a new, non perturbative method to effect radiative corrections in lepton (electron or muon)-nucleon scattering, useful for existing or planned experiments. This method relies on a spectral function derived in a previous paper, which takes into account both real soft photons and virtual ones and hence is free from infrared divergence. Hard effects are computed perturbatively and then included in the form of 'hard factors' in the non perturbative soft formulas. Practical computations are effected using the Gauss-Jacobi integration method, which reduces the relevant integrals to a rapidly converging sequence. For the simple problem of the radiative quasi-elastic peak, we get an exponentiated form conjectured by Schwinger and found by Yennie, Frautschi and Suura. We also compare our results with the peaking approximation, which we derive independently, and with the exact one-photon emission formula of Mo and Tsai. Applications of our method to the continuous spectrum include the radiative tail of the Δ₃₃ resonance in e-p scattering and radiative corrections to the Feynman scale invariant F₂ structure function for the kinematics of two recent high energy muon experiments.
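
    Gauss-Jacobi quadrature replaces an integral by a short weighted sum over optimally placed nodes; for zero Jacobi exponents (α = β = 0) the rule reduces to Gauss-Legendre, which NumPy provides directly. A minimal sketch of that special case, with an illustrative smooth integrand (not one of the paper's radiative-correction integrals):

```python
import numpy as np

def gauss_quadrature(f, n):
    """Approximate the integral of f over [-1, 1] with an n-point
    Gauss-Legendre rule (the alpha = beta = 0 member of the
    Gauss-Jacobi family)."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return float(weights @ f(nodes))

# a smooth integrand converges with very few nodes
approx = gauss_quadrature(np.cos, 8)
exact = 2.0 * np.sin(1.0)   # integral of cos over [-1, 1]
```

    The rapid convergence for smooth integrands is the "rapidly converging sequence" property the abstract refers to: each added node roughly doubles the polynomial degree integrated exactly.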

  17. Infrared thermography inspection methods applied to the target elements of W7-X Divertor

    International Nuclear Information System (INIS)

    Missirlian, M.; Durocher, A.; Schlosser, J.; Farjon, J.-L.; Vignal, N.; Traxler, H.; Schedler, B.; Boscary, J.

    2006-01-01

    As the heat-exhaust capability and lifetime of a plasma-facing component (PFC) during in-situ operation are linked to manufacturing quality, a set of non-destructive tests must be performed during the R&D and manufacturing phases. Within this framework, advanced non-destructive examination (NDE) methods are one of the key issues in achieving a high level of quality and reliability of joining techniques in the production of high-heat-flux components, but also in successfully developing and building PFCs for the next generation of fusion devices. This paper discusses two NDE infrared thermography approaches that have recently been applied to the qualification of CFC target elements of the W7-X divertor during the first series production. The first one, developed by CEA (SATIR facility) and used successfully for the control of the mass-produced actively cooled PFCs on Tore Supra, is based on transient thermography, in which the testing protocol consists in inducing a thermal transient within the heat-sink structure by an alternating hot/cold water flow. The second one, recently developed by PLANSEE (ARGUS facility), is based on pulsed thermography, in which the component is heated externally by a single powerful flash of light. Results obtained in qualification experiments performed during the first series production of W7-X divertor components, covering about thirty mock-ups with artificial and manufacturing defects, demonstrated the capabilities of these two methods and raised the efficiency of inspection to a level appropriate for industrial application. This comparative study, combined with a cross-check against the high-heat-flux performance tests, showed good reproducibility and allowed a detection limit specific to each method to be set. Finally, the detectability of relevant defects showed excellent coincidence with thermal images obtained from high heat flux

  18. Assessment of Atmospheric Correction Methods for Sentinel-2 MSI Images Applied to Amazon Floodplain Lakes

    Directory of Open Access Journals (Sweden)

    Vitor Souza Martins

    2017-03-01

    Full Text Available Satellite data provide the only viable means for extensive monitoring of remote and large freshwater systems, such as the Amazon floodplain lakes. However, an accurate atmospheric correction is required to retrieve water constituents based on surface water reflectance (R_W). In this paper, we assessed three atmospheric correction methods (Second Simulation of a Satellite Signal in the Solar Spectrum (6SV), ACOLITE and Sen2Cor) applied to an image acquired by the MultiSpectral Instrument (MSI) on board the European Space Agency's Sentinel-2A platform, using concurrent in-situ measurements over four Amazon floodplain lakes in Brazil. In addition, we evaluated the correction of forest adjacency effects based on the linear spectral unmixing model, and performed a temporal evaluation of atmospheric constituents from Multi-Angle Implementation of Atmospheric Correction (MAIAC) products. The validation of MAIAC aerosol optical depth (AOD) indicated satisfactory retrievals over the Amazon region, with a correlation coefficient (R) of ~0.7 and 0.85 for Terra and Aqua products, respectively. The seasonal distribution of the cloud cover and AOD revealed a contrast between the first and second half of the year in the study area. Furthermore, simulation of top-of-atmosphere (TOA) reflectance showed a critical contribution of atmospheric effects (>50%) to all spectral bands, especially the deep blue (92%-96%) and blue (84%-92%) bands. The atmospheric correction results for the visible bands illustrate the limitation of the methods over dark lakes (R_W < 1%), and a better match of the R_W shape with in-situ measurements over turbid lakes, although the accuracy varied depending on the spectral bands and methods. Particularly above 705 nm, R_W was highly affected by Amazon forest adjacency, and the proposed adjacency effect correction minimized the spectral distortions in R_W (RMSE < 0.006). Finally, an extensive validation of the methods is required for

  19. Simulation methods to estimate design power: an overview for applied research.

    Science.gov (United States)

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
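The simulation approach the authors describe can be sketched for the simplest case, a two-arm individually randomized trial with a continuous outcome (illustrative parameters; the paper's child-growth examples and its R and Stata code are not reproduced here):

```python
# Power by simulation: repeatedly simulate the trial under the assumed
# effect size, test each simulated dataset, and report the fraction of
# simulations in which the null hypothesis is rejected.
import random
from statistics import NormalDist, mean, stdev

def simulated_power(n_per_arm, effect, sd=1.0, alpha=0.05,
                    n_sims=2000, seed=1):
    random.seed(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sims):
        control = [random.gauss(0.0, sd) for _ in range(n_per_arm)]
        treated = [random.gauss(effect, sd) for _ in range(n_per_arm)]
        se = (stdev(control) ** 2 / n_per_arm
              + stdev(treated) ** 2 / n_per_arm) ** 0.5
        z = (mean(treated) - mean(control)) / se
        if abs(z) > z_crit:
            rejections += 1
    return rejections / n_sims
```

With 64 subjects per arm and a 0.5-SD effect, the simulated power is close to the ~0.80 given by the conventional two-sample power formula, which is the consistency check the article uses to introduce the method before extending it to complex designs.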

  20. Optimal Control as a method for Diesel engine efficiency assessment including pressure and NO_x constraints

    International Nuclear Information System (INIS)

    Guardiola, Carlos; Climent, Héctor; Pla, Benjamín; Reig, Alberto

    2017-01-01

    Highlights: • Optimal Control is applied to heat release shaping in internal combustion engines. • Optimal Control allows the engine performance to be assessed against a realistic reference. • The proposed method gives a target heat release law from which control strategies can be defined. - Abstract: The present paper studies the optimal heat release law in a Diesel engine to maximise the indicated efficiency subject to different constraints, namely: maximum cylinder pressure, maximum cylinder pressure derivative, and NO_x emission restrictions. With this objective, a simple but representative model of the combustion process has been implemented. The model consists of a 0D energy balance model aimed at providing the pressure and temperature evolutions in the high-pressure loop of the engine thermodynamic cycle from the gas conditions at intake valve closing and the heat release law. The gas pressure and temperature evolutions allow the engine efficiency and NO_x emissions to be computed. The comparison between model and experimental results shows that, despite its simplicity, the model is able to reproduce the engine efficiency and NO_x emissions. After the model identification and validation, the optimal control problem is posed and solved by means of Dynamic Programming (DP). In addition, if only pressure constraints are considered, the paper proposes a solution that reduces the computational cost of the DP strategy by two orders of magnitude for the case analysed. The solution provides a target heat release law to define injection strategies, as well as a more realistic maximum-efficiency boundary than the ideal thermodynamic cycles usually employed to estimate the maximum engine efficiency.
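As a toy illustration of the backward-induction Dynamic Programming used here (not the paper's engine model), one can maximise a stage-weighted heat-release schedule under a total-energy budget, with a per-step cap standing in for the pressure-derivative constraint; all weights and sizes below are illustrative:

```python
# Backward-induction DP: state = heat already released, control = heat
# released at this crank-angle step, reward = stage weight times release.
def optimal_release(weights, budget, max_step):
    n = len(weights)
    NEG = float("-inf")
    # value[s] = best reward achievable from the current stage on,
    # having already released s units of heat
    value = [0.0] * (budget + 1)
    policy = []
    for k in reversed(range(n)):
        new_value = [NEG] * (budget + 1)
        stage_policy = [0] * (budget + 1)
        for s in range(budget + 1):
            for u in range(min(max_step, budget - s) + 1):
                v = weights[k] * u + value[s + u]
                if v > new_value[s]:
                    new_value[s], stage_policy[s] = v, u
        value = new_value
        policy.append(stage_policy)
    policy.reverse()
    # roll the optimal policy forward from an empty state
    s, schedule = 0, []
    for k in range(n):
        u = policy[k][s]
        schedule.append(u)
        s += u
    return schedule, value[0]
```

With weights peaked near "top dead centre" (e.g. [1, 2, 4, 2, 1]), a budget of 6 units and a per-step cap of 2, the DP concentrates the release on the highest-weight steps, which is the shaping behaviour the paper's far richer formulation optimises.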

  1. Method for contamination control and barrier apparatus with filter for containing waste materials that include dangerous particulate matter

    Science.gov (United States)

    Pinson, Paul A.

    1998-01-01

    A container for hazardous waste materials that includes air or other gas carrying dangerous particulate matter has incorporated in barrier material, preferably in the form of a flexible sheet, one or more filters for the dangerous particulate matter sealably attached to such barrier material. The filter is preferably a HEPA type filter and is preferably chemically bonded to the barrier materials. The filter or filters are preferably flexibly bonded to the barrier material marginally and peripherally of the filter or marginally and peripherally of air or other gas outlet openings in the barrier material, which may be a plastic bag. The filter may be provided with a backing panel of barrier material having an opening or openings for the passage of air or other gas into the filter or filters. Such backing panel is bonded marginally and peripherally thereof to the barrier material or to both it and the filter or filters. A coupling or couplings for deflating and inflating the container may be incorporated. Confining a hazardous waste material in such a container, rapidly deflating the container and disposing of the container, constitutes one aspect of the method of the invention. The chemical bonding procedure for producing the container constitutes another aspect of the method of the invention.

  2. Contributors to Frequent Telehealth Alerts Including False Alerts for Patients with Heart Failure: A Mixed Methods Exploration

    Science.gov (United States)

    Radhakrishna, K.; Bowles, K.; Zettek-Sumner, A.

    2013-01-01

    Summary Background Telehealth data overload through high alert generation is a significant barrier to sustained adoption of telehealth for managing HF patients. Objective To explore the factors contributing to frequent telehealth alerts, including false alerts, for Medicare heart failure (HF) patients admitted to a home health agency. Materials and Methods A mixed-methods design was employed that combined quantitative correlation analysis of patient characteristic data with the number of telehealth alerts and qualitative analysis of telehealth and visiting nurses' notes on follow-up actions to patients' telehealth alerts. All quantitative and qualitative data were collected through retrospective review of electronic records of the home health agency. Results Subjects in the study had a mean age of 83 (SD = 7.6); 56% were female. Patient co-morbidities (p …) … patient characteristics, along with establishing patient-centered telehealth outcome goals, may allow meaningful generation of telehealth alerts. Reducing avoidable telehealth alerts could vastly improve the efficiency and sustainability of telehealth programs for HF management. PMID:24454576

  3. Method for contamination control and barrier apparatus with filter for containing waste materials that include dangerous particulate matter

    International Nuclear Information System (INIS)

    Pinson, P.A.

    1998-01-01

    A container for hazardous waste materials that includes air or other gas carrying dangerous particulate matter has incorporated barrier material, preferably in the form of a flexible sheet, and one or more filters for the dangerous particulate matter sealably attached to such barrier material. The filter is preferably a HEPA type filter and is preferably chemically bonded to the barrier materials. The filter or filters are preferably flexibly bonded to the barrier material marginally and peripherally of the filter or marginally and peripherally of air or other gas outlet openings in the barrier material, which may be a plastic bag. The filter may be provided with a backing panel of barrier material having an opening or openings for the passage of air or other gas into the filter or filters. Such backing panel is bonded marginally and peripherally thereof to the barrier material or to both it and the filter or filters. A coupling or couplings for deflating and inflating the container may be incorporated. Confining a hazardous waste material in such a container, rapidly deflating the container and disposing of the container, constitutes one aspect of the method of the invention. The chemical bonding procedure for producing the container constitutes another aspect of the method of the invention. 3 figs

  4. Multivariate least-squares methods applied to the quantitative spectral analysis of multicomponent samples

    International Nuclear Information System (INIS)

    Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.

    1985-01-01

    In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure-component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure-component spectra to determine which vibrations exhibit nonlinear behavior. In the prediction phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentrations was <1%.
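The calibration-then-prediction scheme described above can be sketched with synthetic data (a NumPy stand-in; the numbers below are not the paper's FT-IR or ester data, and the intercept handling is a simplified illustration):

```python
# Classical least-squares (CLS) calibration with an intercept term:
# estimate pure-component "spectra" plus a baseline from mixture
# standards, then predict the concentrations of an unknown sample.
import numpy as np

rng = np.random.default_rng(0)

# True pure-component spectra (2 components x 50 wavelengths) and baseline
K_true = np.abs(rng.normal(size=(2, 50)))
baseline = 0.05 * np.ones(50)

# Calibration standards: known concentrations -> measured mixture spectra
C_cal = rng.uniform(0.1, 1.0, size=(8, 2))
A_cal = C_cal @ K_true + baseline + rng.normal(scale=1e-3, size=(8, 50))

# Calibration phase: augment concentrations with a column of ones so the
# least-squares fit absorbs the baseline as an intercept row of K_est
C_aug = np.hstack([C_cal, np.ones((8, 1))])
K_est, *_ = np.linalg.lstsq(C_aug, A_cal, rcond=None)   # shape (3, 50)

# Prediction phase: fit an unknown spectrum to the estimated spectra
c_true = np.array([0.3, 0.7])
a_unknown = c_true @ K_true + baseline
c_est, *_ = np.linalg.lstsq(K_est.T, a_unknown, rcond=None)
```

The first two entries of `c_est` recover the component concentrations; the third is the coefficient of the intercept row, close to one when the baseline model holds.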

  5. Applying the Weighted Horizontal Magnetic Gradient Method to a Simulated Flaring Active Region

    Science.gov (United States)

    Korsós, M. B.; Chatterjee, P.; Erdélyi, R.

    2018-04-01

    Here, we test the weighted horizontal magnetic gradient (WG_M) as a flare precursor, introduced by Korsós et al., by applying it to a magnetohydrodynamic (MHD) simulation of solar-like flares. The preflare evolution of the WG_M and the behavior of the distance parameter between the area-weighted barycenters of opposite-polarity sunspots at various heights is investigated in the simulated δ-type sunspot. Four flares emanated from this sunspot. We found the optimum heights above the photosphere where the flare precursors of the WG_M method are identifiable prior to each flare. These optimum heights agree reasonably well with the heights of the occurrence of flares identified from the analysis of their thermal and ohmic heating signatures in the simulation. We also estimated the expected time of the flare onsets from the duration of the approaching-receding motion of the barycenters of opposite polarities before each single flare. The estimated onset time and the actual time of occurrence of each flare are in good agreement at the corresponding optimum heights. This numerical experiment further supports the use of flare precursors based on the WG_M method.

  6. Study on safety of crystallization method applied to dissolver solution in fast breeder reactor reprocessing

    International Nuclear Information System (INIS)

    Okuno, Hiroshi; Fujine, Yukio; Asakura, Toshihide; Murazaki, Minoru; Koyama, Tomozo; Sakakibara, Tetsuro; Shibata, Atsuhiro

    1999-03-01

    The crystallization method is proposed for the recovery of uranium from the dissolver solution, making it possible to reduce the amount of material handled in the later stages of reprocessing used fast breeder reactor (FBR) fuels. This report studies possible safety problems associated with the proposed method. The crystallization process was first defined within the whole reprocessing process, and the quantity and kind of treated fuel were specified. Possible problems, such as criticality, shielding, fire/explosion, and confinement, were then investigated, and the events that might induce accidents were discussed. Criticality, above all of these, was further studied by considering an example of criticality control for the crystallization process. For the crystallization equipment in particular, evaluation models were set up for normal and accidental operating conditions. Related data were selected from the nuclear criticality safety handbooks. The theoretical densities of plutonium nitrates, which give basic and important information, were estimated in this report based on crystal structure data. The criticality limit of the crystallization equipment was calculated based on the above information. (author)

  7. Method of moments as applied to arbitrarily shaped bounded nonlinear scatterers

    Science.gov (United States)

    Caorsi, Salvatore; Massa, Andrea; Pastorino, Matteo

    1994-01-01

    In this paper, we explore the possibility of applying the moment method to determine the electromagnetic field distributions inside three-dimensional bounded nonlinear dielectric objects of arbitrary shapes. The moment method has usually been employed to solve linear scattering problems. We start with an integral equation formulation and derive a nonlinear system of algebraic equations that allows us to obtain an approximate solution for the harmonic vector components of the electric field. Preliminary results of some numerical simulations are reported.

  8. [An experimental assessment of methods for applying intestinal sutures in intestinal obstruction].

    Science.gov (United States)

    Akhmadudinov, M G

    1992-04-01

    The results of various methods used in applying intestinal sutures in obturation were studied. Three series of experiments were conducted on 30 dogs: resection of the intestine after obstruction with the formation of anastomoses by means of a double-row suture (Albert-Shmiden-Lambert) in the first series (10 dogs), by a single-row suture after V. M. Mateshchuk [correction of Mateshuku] in the second series, and by a single-row stretching suture suggested by the author in the third series. The postoperative complications and the parameters of physical airtightness of the intestinal anastomosis were studied over time in the experimental animals. The results of the study: incompetence of the anastomosis sutures occurred in 6 animals in the first series, 4 in the second, and one in the third. Adhesions occurred in all animals of the first and second series and in 2 of the third series. Six dogs of the first series died, 4 of the second, and one of the third. Study of the dynamics of the results showed a direct connection between the complications and the parameters of the physical airtightness of the anastomosis, and between the latter and the method of intestinal suture. Relatively better results were noted when the anastomosis was formed by means of our suggested stretching continuous suture passed through the serous, muscular, and submucous coats of the intestine.

  9. A sensitive multi-residue method for the determination of 35 micropollutants including pharmaceuticals, iodinated contrast media and pesticides in water.

    Science.gov (United States)

    Valls-Cantenys, Carme; Scheurer, Marco; Iglesias, Mònica; Sacher, Frank; Brauch, Heinz-Jürgen; Salvadó, Victoria

    2016-09-01

    A sensitive, multi-residue method using solid-phase extraction followed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) was developed to determine a representative group of 35 analytes, including corrosion inhibitors, pesticides and pharmaceuticals such as analgesic and anti-inflammatory drugs, five iodinated contrast media, β-blockers and some of their metabolites and transformation products in water samples. Few other methods are capable of determining such a broad range of contrast media together with other analytes. We studied the parameters affecting the extraction of the target analytes, including sorbent selection and extraction conditions, their chromatographic separation (mobile phase composition and column) and detection conditions using two ionisation sources: electrospray ionisation (ESI) and atmospheric pressure chemical ionisation (APCI). In order to correct matrix effects, a total of 20 surrogate/internal standards were used. ESI was found to have better sensitivity than APCI. Recoveries ranging from 79 to 134% for tap water and 66 to 144% for surface water were obtained. Intra-day precision, calculated as relative standard deviation, was below 34% for tap water and below 21% for surface water, groundwater and effluent wastewater. Method quantification limits (MQL) were in the low ng L⁻¹ range, except for the contrast agents iomeprol, amidotrizoic acid and iohexol (22, 25.5 and 17.9 ng L⁻¹, respectively). Finally, the method was applied to the analysis of 56 real water samples as part of the validation procedure. All of the compounds were detected in at least some of the water samples analysed. Graphical Abstract Multi-residue method for the determination of micropollutants including pharmaceuticals, iodinated contrast media and pesticides in waters by LC-MS/MS.

  10. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    Science.gov (United States)

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to
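The core computation, fitting a simple polynomial to median better-ear thresholds by age and differencing against age 20, can be sketched as follows; the threshold values below are hypothetical placeholders, NOT the NHANES medians:

```python
# Fit median hearing thresholds vs. age with a simple polynomial, then
# tabulate the age correction as the fitted threshold minus the fitted
# value at age 20 (the reference age in the OSHA tables).
import numpy as np

ages = np.array([20, 30, 40, 50, 60, 70, 75])
median_threshold_dB = np.array([3, 4, 6, 10, 16, 25, 30])  # hypothetical

coeffs = np.polyfit(ages, median_threshold_dB, deg=2)
fitted = np.poly1d(coeffs)

def age_correction(age):
    """Correction in dB relative to a 20-year-old, rounded to whole dB."""
    return round(float(fitted(age) - fitted(20)))
```

By construction the correction is zero at age 20 and grows with age, which is the behaviour the updated tables extend out to age 75.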

  11. Difference in target definition using three different methods to include respiratory motion in radiotherapy of lung cancer.

    Science.gov (United States)

    Sloth Møller, Ditte; Knap, Marianne Marquard; Nyeng, Tine Bisballe; Khalil, Azza Ahmed; Holt, Marianne Ingerslev; Kandi, Maria; Hoffmann, Lone

    2017-11-01

    Minimizing the planning target volume (PTV) while ensuring sufficient target coverage during the entire respiratory cycle is essential for free-breathing radiotherapy of lung cancer. Different methods are used to incorporate the respiratory motion into the PTV. Fifteen patients were analyzed. Respiration can be included in the target delineation process, creating a respiratory GTV, denoted iGTV. Alternatively, the respiratory amplitude (A) can be measured based on the 4D-CT and incorporated in the margin expansion. The GTV expanded by A yielded GTV+resp, which was compared to iGTV in terms of overlap. Three methods for PTV generation were compared: PTV_del (delineated iGTV expanded to CTV plus PTV margin), PTV_σ (GTV expanded to CTV, with A included as a random uncertainty in the CTV-to-PTV margin) and PTV_Σ (GTV expanded to CTV, succeeded by linear expansion of the CTV by A to CTV+resp, which was finally expanded to PTV_Σ). Deformation of tumor and lymph nodes during respiration resulted in volume changes between the respiratory phases. The overlap between iGTV and GTV+resp showed that on average 7% of iGTV was outside GTV+resp, implying that GTV+resp did not capture the tumor during the full deformable respiration cycle. A comparison of the PTV volumes showed that PTV_σ was smallest and PTV_Σ largest for all patients. PTV_σ was on average 14% (31 cm³) smaller than PTV_del, while PTV_del was 7% (20 cm³) smaller than PTV_Σ. PTV_σ yields the smallest volumes but does not ensure coverage of the tumor during the full respiratory motion due to tumor deformation. Incorporating the respiratory motion in the delineation (PTV_del) takes the entire respiratory cycle, including deformation, into account, but at the cost of larger treatment volumes. PTV_Σ should not be used, since it incorporates the disadvantages of both PTV_del and PTV_σ.

  12. Chemometric methods and near-infrared spectroscopy applied to bioenergy production

    International Nuclear Information System (INIS)

    Liebmann, B.

    2010-01-01

    data analysis (i) successfully determine the concentrations of moisture, protein, and starch in the feedstock material as well as glucose, ethanol, glycerol, lactic acid, and acetic acid in the processed bioethanol broths; and (ii) allow quantification of complex biofuel properties such as the heating value. At the third stage, this thesis focuses on new chemometric methods that improve the mathematical analysis of multivariate data such as NIR spectra. The newly developed method 'repeated double cross validation' (rdCV) separates optimization of regression models from tests of model performance; furthermore, rdCV estimates the variability of the model performance based on a large number of prediction errors from test samples. The rdCV procedure has been applied to both classical PLS regression and the robust 'partial robust M' regression method, which can handle erroneous data. The peculiar and relatively little-known 'random projection' method is tested for its potential for dimensionality reduction of data from chemometrics and chemoinformatics. The main findings are: (i) rdCV fosters a realistic assessment of model performance, (ii) robust regression has outstanding performance for data containing outliers and is thus strongly recommendable, and (iii) random projection is a useful niche application for high-dimensional data combined with possible restrictions on data storage and computing time. The three chemometric methods described are available as functions for the free software R. (author) [de

  13. Applying a weighted random forests method to extract karst sinkholes from LiDAR data

    Science.gov (United States)

    Zhu, Junfeng; Pierskalla, William P.

    2016-02-01

    Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve the location and delineation of sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied the random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% for the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success, with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and make it more tractable to map sinkholes over large areas using LiDAR data. However, the random forests method cannot totally replace manual procedures, such as visual inspection and field verification.
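A minimal sketch of a class-weighted random forest on a synthetic imbalanced two-class problem (scikit-learn; synthetic features stand in for the study's 11 LiDAR-derived predictors, and the data are not the Kentucky dataset):

```python
# Class weighting makes errors on the rare (sinkhole-like) class cost
# more, countering the imbalance in the training data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# ~90/10 imbalance, 11 features as in the study (values are synthetic)
X, y = make_classification(n_samples=2000, n_features=11, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
score = balanced_accuracy_score(y_te, clf.predict(X_te))
```

Balanced accuracy is reported instead of raw accuracy because, under a 90/10 split, a classifier that never predicts the minority class would still score 90% raw accuracy.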

  14. A new sub-equation method applied to obtain exact travelling wave solutions of some complex nonlinear equations

    International Nuclear Information System (INIS)

    Zhang Huiqun

    2009-01-01

    By using new coupled Riccati equations, a direct algebraic method, previously applied to obtain exact travelling wave solutions of some complex nonlinear equations, is improved. The exact travelling wave solutions of the complex KdV equation, Boussinesq equation and Klein-Gordon equation are then investigated using the improved method. The method presented in this paper can also be applied to construct exact travelling wave solutions for other nonlinear complex equations.
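The sub-equation idea can be illustrated with the single Riccati equation (a generic sketch; the paper's coupled system is not reproduced here). For $\varphi'(\xi) = \sigma + \varphi^2(\xi)$, the standard building-block solutions are

```latex
\varphi(\xi) =
\begin{cases}
-\sqrt{-\sigma}\,\tanh\!\left(\sqrt{-\sigma}\,\xi\right)
  \ \text{or}\ -\sqrt{-\sigma}\,\coth\!\left(\sqrt{-\sigma}\,\xi\right), & \sigma < 0,\\[2pt]
\sqrt{\sigma}\,\tan\!\left(\sqrt{\sigma}\,\xi\right)
  \ \text{or}\ -\sqrt{\sigma}\,\cot\!\left(\sqrt{\sigma}\,\xi\right), & \sigma > 0,\\[2pt]
-\,1/\xi, & \sigma = 0,
\end{cases}
```

and a travelling-wave solution $u(x,t) = U(\xi)$ with $\xi = x - ct$ is sought as a finite polynomial in $\varphi$, whose degree is fixed by homogeneous balance of the highest-order derivative against the strongest nonlinearity.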

  15. Method developments approaches in supercritical fluid chromatography applied to the analysis of cosmetics.

    Science.gov (United States)

    Lesellier, E; Mith, D; Dubrulle, I

    2015-12-04

    necessary, two-step gradient elution. The developed methods were then applied to real cosmetic samples to assess method specificity with regard to matrix interferences, and calibration curves were plotted to evaluate quantification. Moreover, depending on the matrix and on the studied compounds, the importance of the detector type, UV or ELSD (evaporative light-scattering detection), and of the particle size of the stationary phase is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Methods for characterization of wafer-level encapsulation applied on silicon to LTCC anodic bonding

    International Nuclear Information System (INIS)

    Khan, M F; Ghavanini, F A; Enoksson, P; Haasl, S; Löfgren, L; Persson, K; Rusu, C; Schjølberg-Henriksen, K

    2010-01-01

    This paper presents initial results on generic characterization methods for wafer-level encapsulation. The methods, developed specifically to evaluate anodic bonding of low-temperature cofired ceramics (LTCC) to Si, are generally applicable to wafer-level encapsulation. Different microelectromechanical system (MEMS) structures positioned over the whole wafer provide local information about the bond quality. The structures include (i) resonating cantilevers as pressure sensors for bond hermeticity, (ii) resonating bridges as stress sensors for measuring the stress induced by the bonding and (iii) frames/mesas for pull tests. These MEMS structures have been designed, fabricated and characterized indicating that local information can easily be obtained. Buried electrodes to enable localized bonding have been implemented and their effectiveness is indicated from first results of the novel Si to LTCC anodic bonding.

  17. Studying the properties of Variational Data Assimilation Methods by Applying a Set of Test-Examples

    DEFF Research Database (Denmark)

    Thomsen, Per Grove; Zlatev, Zahari

    2007-01-01

    The variational data assimilation methods can successfully be used in different fields of science and engineering. An attempt to utilize available sets of observations in the efforts to improve (i) the models used to study different phenomena and/or (ii) the model results is systematically carried out when … forward and backward computations are carried out by using the model under consideration and its adjoint equations (both the model and its adjoint are defined by systems of differential equations). The major difficulty is caused by the huge increase of the computational load (normally by a factor of more than 100) … The different components of a variational data assimilation method (numerical algorithms for solving differential equations, splitting procedures and optimization algorithms) have been studied by using these tests. The presentation will include results from the testing carried out in the study.

  18. Applying Critical Race Theory to Group Model Building Methods to Address Community Violence.

    Science.gov (United States)

    Frerichs, Leah; Lich, Kristen Hassmiller; Funchess, Melanie; Burrell, Marcus; Cerulli, Catherine; Bedell, Precious; White, Ann Marie

    2016-01-01

    Group model building (GMB) is an approach to building qualitative and quantitative models with stakeholders to learn about the interrelationships among multilevel factors causing complex public health problems over time. Scant literature exists on adapting this method to address public health issues that involve racial dynamics. This study's objectives are to (1) introduce GMB methods, (2) present a framework for adapting GMB to enhance cultural responsiveness, and (3) describe outcomes of adapting GMB to incorporate differences in racial socialization during a community project seeking to understand key determinants of community violence transmission. An academic-community partnership planned a 1-day session with diverse stakeholders to explore the issue of violence using GMB. We documented key questions inspired by critical race theory (CRT) and adaptations to established GMB "scripts" (i.e., published facilitation instructions). The theory's emphasis on experiential knowledge led to a narrative-based facilitation guide from which participants created causal loop diagrams. These early diagrams depict how violence is transmitted and how communities respond, based on participants' lived experiences and mental models of causation that grew to include factors associated with race. Participants found these methods useful for advancing difficult discussion. The resulting diagrams can be tested and expanded in future research, and will form the foundation for collaborative identification of solutions to build community resilience. GMB is a promising strategy that community partnerships should consider when addressing complex health issues; our experience adapting methods based on CRT is promising in its acceptability and early system insights.

  19. Automatic and efficient methods applied to the binarization of a subway map

    Science.gov (United States)

    Durand, Philippe; Ghorbanzadeh, Dariush; Jaupi, Luan

    2015-12-01

    The purpose of this paper is the study of efficient methods for image binarization, applied here to metro maps. The goal is to binarize the maps while preventing noise from disturbing the reading of subway stations. Different methods have been tested; among them, Otsu's method gives particularly interesting results. The difficulty of binarization lies in choosing the threshold so that the reconstructed image stays as close as possible to reality. Vectorization is a step subsequent to binarization: it retrieves the coordinates of the points containing information and stores them in two matrices, X and Y. These matrices can then be exported to a CSV (comma-separated values) file, which can be processed in a variety of software, including Excel. The algorithm requires considerable computation time in Matlab because it is composed of two nested "for" loops, and "for" loops are poorly supported by Matlab, especially when nested. This penalizes the computation time, but it seems to be the only way to perform this step.
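
    Otsu's thresholding is straightforward to sketch; a minimal pure-Python version follows (the pixel values are synthetic, standing in for the dark map lines and light background of a scanned metro map):

```python
# Otsu's method: pick the threshold maximizing between-class variance
# of the grayscale histogram (dark "ink" class vs light "paper" class).
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0.0
    for t in range(levels):
        w_b += hist[t]                 # background weight up to t
        if w_b == 0:
            continue
        w_f = total - w_b              # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b              # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        between = w_b * w_f * (m_b - m_f) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Two well-separated intensity clusters: dark lines vs light background.
pixels = [10, 12, 11, 10, 13] * 20 + [200, 210, 205, 198, 202] * 20
t = otsu_threshold(pixels)         # threshold lands in the gap
binary = [1 if p > t else 0 for p in pixels]
```

    Note that the loops here run over the 256 histogram bins, not over individual pixels; working on the histogram (or vectorizing the pixel pass) is exactly how the nested-loop penalty described above is usually avoided.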

  20. Applying the decision moving window to risky choice: Comparison of eye-tracking and mousetracing methods

    Directory of Open Access Journals (Sweden)

    Ana M. Franco-Watkins

    2011-12-01

    Full Text Available Currently, a disparity exists between the process-level models decision researchers use to describe and predict decision behavior and the methods implemented and metrics collected to test these models. The current work seeks to remedy this disparity by combining the advantages of work in decision research (mouse-tracing paradigms with contingent information display and cognitive psychology (eye-tracking paradigms from reading and scene perception. In particular, we introduce a new decision moving-window paradigm that presents stimulus information contingent on eye fixations. We provide data from the first application of this method to risky decision making, and show how it compares to basic eye-tracking and mouse-tracing methods. We also enumerate the practical, theoretical, and analytic advantages this method offers above and beyond both mouse-tracing with occlusion and basic eye tracking of information without occlusion. We include the use of new metrics that offer more precision than those typically calculated on mouse-tracing data as well as those not possible or feasible within the mouse-tracing paradigm.

  1. On Numerical Methods for Including the Effect of Capillary Pressure Forces on Two-phase, Immiscible Flow in a Layered Porous Medium

    Energy Technology Data Exchange (ETDEWEB)

    Ersland, B.G.

    1996-05-01

    This mathematical doctoral thesis contains the theory, algorithms and numerical simulations for a heterogeneous oil reservoir. It presents the equations that apply to immiscible and incompressible two-phase fluid flow in the reservoir, including the effect of capillary pressure forces, and emphasises in particular the interior boundary conditions at the interface between two sediments. Two different approaches are discussed. The first approach is to decompose the computational domain along the interior boundary and iterate between the subdomains until mass balance is achieved. The second approach accounts for the interior boundary conditions in the basis in which the solution is expanded, the basis being discontinuous over the interior boundaries. An overview of the construction of iterative solvers for partial differential equations by means of Schwarz methods is given, and the algorithm for local refinement with Schwarz iterations as the iterative solver is described. The theory is then applied to a core plug problem in one and two space dimensions and the results of different methods are compared. A general description is given of the computer simulation model, which is implemented in C++. 64 refs., 49 figs., 7 tabs.
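
    The Schwarz-type iteration between overlapping subdomains can be illustrated on a toy 1D Poisson problem, a drastic simplification of the thesis's two-phase setting; each subdomain is solved exactly, so the iteration acts only on the interface values:

```python
# Alternating Schwarz iteration for -u'' = 1 on [0,1], u(0) = u(1) = 0,
# split into overlapping subdomains [0, 0.6] and [0.4, 1].

def subdomain_solution(xl, xr, ul, ur):
    """Exact solution of -u'' = 1 on [xl, xr] with u(xl)=ul, u(xr)=ur."""
    def u(x):
        lin = ul + (ur - ul) * (x - xl) / (xr - xl)
        return lin + 0.5 * (x - xl) * (xr - x)
    return u

alpha, beta = 0.4, 0.6   # overlap region [alpha, beta]
g_beta = 0.0             # initial guess for u at x = beta
for _ in range(50):
    u1 = subdomain_solution(0.0, beta, 0.0, g_beta)      # left solve
    u2 = subdomain_solution(alpha, 1.0, u1(alpha), 0.0)  # right solve
    g_beta = u2(beta)    # update the interface value

# Exact global solution is u(x) = x(1-x)/2, so u(0.6) = 0.12.
print(abs(g_beta - 0.12) < 1e-10)  # True: interface value has converged
```

    For this overlap the interface error contracts by a factor 4/9 per sweep; a larger overlap contracts faster, which is the standard trade-off in Schwarz methods.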

  2. Can spatial autocorrelation method be applied to arbitrary array shape; Kukan jiko sokanho no nin`i array eno tekiyo kanosei

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, H; Iwamoto, K; Saito, T; Tachibana, M [Iwate University, Iwate (Japan). Faculty of Engineering

    1997-05-27

    Methods to learn underground structures by utilizing the dispersion phenomenon of surface waves contained in microtremors include the frequency-wave number analysis method (the F-K method) and the spatial autocorrelation method (the SAC method). Despite the fact that the SAC method is capable of exploring structures at greater depths, the method is underutilized because of its stringent restriction on the arrangement of seismometers during observation: they must be arranged evenly on the same circumference. In order to eliminate this restriction in the SAC method, a research group at Hokuriku University has proposed an expanded spatial autocorrelation (ESAC) method. Using the concept of the ESAC method as its base, a method was realized to improve phase velocity estimation by running a simulation on an array shifted in the radial direction. As a result of the discussion, it was found that the proposed improvement method can be applied to places where waves come from a number of directions, such as urban areas. If the improvement method can be applied, the spatial autocorrelation function need not be uniform in the circumferential direction. In other words, the SAC method can be applied to arbitrary arrays. 1 ref., 7 figs.

  3. A 3D GIS METHOD APPLIED TO CATALOGING AND RESTORING: THE CASE OF AURELIAN WALLS AT ROME

    Directory of Open Access Journals (Sweden)

    M. Canciani

    2013-07-01

    Full Text Available The project involves architecture, archaeology, restoration, graphic documentation and computer imaging. The objective is development of a method for documentation of an architectural feature, based on a three-dimensional model obtained through laser scanning technologies, linked to a database developed in GIS environment. The case study concerns a short section of Rome's Aurelian walls, including the Porta Latina. The city walls are Rome's largest single architectural monument, subject to continuous deterioration, modification and maintenance since their original construction beginning in 271 AD. The documentation system provides a flexible, precise and easily-applied instrument for recording the full appearance, materials, stratification palimpsest and conservation status, in order to identify restoration criteria and intervention priorities, and to monitor and control the use and conservation of the walls over time. The project began with an analysis and documentation campaign integrating direct, traditional recording methods with indirect, topographic instrument and 3D laser scanning recording. These recording systems permitted development of a geographic information system based on three-dimensional modelling of separate, individual elements, linked to a database and related to the various stratigraphic horizons, the construction techniques, the component materials and their state of degradation. The investigations of the extant wall fabric were further compared to historic documentation, from both graphic and descriptive sources. The resulting model constitutes the core of the GIS system for this specific monument. The methodology is notable for its low cost, precision, practicality and thoroughness, and can be applied to the entire Aurelian wall and to other monuments.

  4. Advances in Spectral Nodal Methods applied to SN Nuclear Reactor Global calculations in Cartesian Geometry

    International Nuclear Information System (INIS)

    Barros, R.C.; Filho, H.A.; Oliveira, F.B.S.; Silva, F.C. da

    2004-01-01

    Presented here are the advances in spectral nodal methods for discrete ordinates (SN) eigenvalue problems in Cartesian geometry. These coarse-mesh methods are based on three ingredients: (i) the use of the standard discretized spatial balance SN equations; (ii) the use of the non-standard spectral diamond (SD) auxiliary equations in the multiplying regions of the domain, e.g. fuel assemblies; and (iii) the use of the non-standard spectral Green's function (SGF) auxiliary equations in the non-multiplying regions of the domain, e.g., the reflector. In slab-geometry the hybrid SD-SGF method generates numerical results that are completely free of spatial truncation errors. In X,Y-geometry, we obtain a system of two 'slab-geometry' SN equations for the node-edge average angular fluxes by transverse-integrating the X,Y-geometry SN equations separately in the y- and then in the x-directions within an arbitrary node of the spatial grid set up on the domain. In this paper, we approximate the transverse leakage terms by constants. These are the only approximations considered in the SD-SGF-constant nodal method, as the source terms, that include scattering and eventually fission events, are treated exactly. Moreover, we describe in this paper the progress of the approximate SN albedo boundary conditions for substituting the non-multiplying regions around the nuclear reactor core. We show numerical results to typical model problems to illustrate the accuracy of spectral nodal methods for coarse-mesh SN criticality calculations. (Author)
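
    The diamond auxiliary relation at the heart of such schemes can be sketched for a single ordinate in a purely absorbing slab, where the exact answer is exponential attenuation; this uses the standard diamond-difference relation as a simple stand-in, not the spectral SD-SGF equations themselves:

```python
import math

# Diamond-difference sweep for one ordinate (mu = 1) in a purely
# absorbing slab: cell balance  mu*(psi_out - psi_in)/h + sigma_t*psi_avg = 0
# closed with the diamond auxiliary relation psi_avg = (psi_in + psi_out)/2.
sigma_t, length, cells = 1.0, 5.0, 200
h = length / cells
psi = 1.0                      # incoming angular flux at x = 0
for _ in range(cells):
    # solve the cell balance for the outgoing edge flux
    psi = psi * (1 - 0.5 * sigma_t * h) / (1 + 0.5 * sigma_t * h)

exact = math.exp(-sigma_t * length)        # analytic attenuation
print(abs(psi - exact) / exact < 1e-3)     # True on this mesh
```

    The spatial truncation error of this plain diamond scheme shrinks with the mesh; the spectral (SD/SGF) auxiliary equations described above are designed so that, in slab geometry, that truncation error vanishes entirely.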

  5. Non-parametric order statistics method applied to uncertainty propagation in fuel rod calculations

    International Nuclear Information System (INIS)

    Arimescu, V.E.; Heins, L.

    2001-01-01

    method, which is computationally efficient, is presented for the evaluation of the global statement. It is proved that the expected fraction r of fuel rods exceeding a certain limit is equal to the (1-r)-quantile of the overall distribution of all possible values from all fuel rods. In this way, the problem is reduced to that of estimating a certain quantile of the overall distribution, and the same techniques used for a single-rod distribution can be applied again. A simplified test case was devised to verify and validate the methodology. The fuel code was replaced by a transfer function dependent on two input parameters. The function was chosen so that analytic results could be obtained for the distribution of the output. This offers a direct validation of the statistical procedure. A sensitivity study was also performed to analyze the effect of the sampling procedure, simple Monte Carlo versus Latin Hypercube Sampling, on the final outcome. The effect of sample size on the accuracy and bias of the statistical results was studied as well, and the conclusion was reached that the results of the statistical methodology are typically conservative. In the end, an example of applying these statistical techniques to a PWR reload is presented, together with the improvements and new insights the statistical methodology brings to fuel rod design calculations. (author)
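
    The non-parametric order-statistics approach to quantile estimation can be illustrated with the classic one-sided Wilks criterion; a minimal sketch (the 95%/95% figures are the conventional choice, not necessarily the exact values used in the paper):

```python
import math

def wilks_sample_size(p=0.95, conf=0.95):
    """Smallest n such that the sample maximum exceeds the p-quantile
    of the underlying distribution with probability >= conf:
    P(max > q_p) = 1 - p**n >= conf, distribution-free."""
    return math.ceil(math.log(1.0 - conf) / math.log(p))

print(wilks_sample_size())            # 59: the classic 95%/95% criterion
print(wilks_sample_size(0.99, 0.95))  # a higher quantile needs more runs
```

    This is the order-statistics machinery reused above: once the global statement is reduced to a (1-r)-quantile of the overall distribution, the same distribution-free bound applies to the pooled fuel-rod values.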

  6. Applying the scientific method to small catchment studies: A review of the Panola Mountain experience

    Science.gov (United States)

    Hooper, R.P.

    2001-01-01

    A hallmark of the scientific method is its iterative application to a problem to increase and refine the understanding of the underlying processes controlling it. A successful iterative application of the scientific method to catchment science (including the fields of hillslope hydrology and biogeochemistry) has been hindered by two factors. First, the scale at which controlled experiments can be performed is much smaller than the scale of the phenomenon of interest. Second, computer simulation models generally have not been used as hypothesis-testing tools as rigorously as they might have been. Model evaluation often has gone only so far as evaluation of goodness of fit, rather than a full structural analysis, which is more useful when treating the model as a hypothesis. An iterative application of a simple mixing model to the Panola Mountain Research Watershed is reviewed to illustrate the increase in understanding gained by this approach and to discern general principles that may be applicable to other studies. The lessons learned include the need for an explicitly stated conceptual model of the catchment, the definition of objective measures of its applicability, and a clear linkage between the scale of observations and the scale of predictions. Published in 2001 by John Wiley & Sons. Ltd.
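
    A simple mixing model of the kind reviewed here can be written in a few lines; the following two-component hydrograph-separation sketch uses hypothetical tracer concentrations, not Panola Mountain data:

```python
# Two-component mixing model: streamflow is a mixture of "old"
# (pre-event) and "new" (event) water, traced by a conservative solute.
c_stream, c_old, c_new = 45.0, 60.0, 20.0     # tracer concentrations
f_new = (c_stream - c_old) / (c_new - c_old)  # mass balance for the mix
f_old = 1.0 - f_new
print(round(f_new, 3))  # 0.375: about 38% event water in this example
```

    Treated as a hypothesis, such a model makes testable structural predictions (e.g. stream chemistry must lie between the end members), which is the kind of evaluation beyond goodness of fit that the review argues for.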

  7. Analytical Methods INAA and PIXE Applied to Characterization of Airborne Particulate Matter in Bandung, Indonesia

    Directory of Open Access Journals (Sweden)

    D.D. Lestiani

    2011-08-01

    Full Text Available Urbanization and industrial growth have deteriorated air quality and are major causes of air pollution. Air pollution through fine and ultra-fine particles is a serious threat to human health. The sources of air pollution must be known quantitatively through elemental characterization in order to design appropriate air quality management. Suitable methods for analysis of airborne particulate matter, such as nuclear analytical techniques, are badly needed to solve the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess the accuracy, and to ensure the reliability of the analytical results through a comparison of instrumental neutron activation analysis (INAA) and particle-induced X-ray emission (PIXE). Particle samples in the PM2.5 and PM2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed that there was generally a systematic difference between INAA and PIXE results, with the values obtained by PIXE lower than those determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, so INAA data are preferred, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA is still a prospective method, while PIXE, with its special capabilities, is a promising tool that could complement NAA's limitations in the determination of lead, sulphur and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment.

  8. New methods applied to the analysis and treatment of ovarian cancer

    International Nuclear Information System (INIS)

    Order, S.E.; Rosenshein, N.B.; Klein, J.L.; Lichter, A.S.; Ettinger, D.S.; Dillon, M.B.; Leibel, S.A.

    1979-01-01

    The development of rigorous staging methods, appreciation of new knowledge concerning ovarian cancer dissemination, and administration of new treatment techniques have been applied to ovarian cancer. The method of staging consists of peritoneal cytology, total abdominal hysterectomy-bilateral salpingo-oophorectomy (TAH-BSO), omentectomy, nodal biopsy and diaphragmatic inspection, and is coupled with maximal surgical resection. An additional examination being evaluated for usefulness in future staging is intraperitoneal 99mTc sulfur colloid scans. Nineteen patients have entered the pilot studies. Sixteen patients (5 Stage 2, 10 Stage 3 micrometastatic, and 1 Stage 4) have been treated with colloidal 32 P i.p., followed 2 weeks later by split abdominal irradiation (200 rad fractions pelvis-2 hr rest-150 rad upper abdomen) to a total abdominal dose of 3000 rad with a pelvic cone down to 4000 rad. Five of these patients received phenylalanine mustard (L-PAM) (7 mg/m 2 ) maintenance therapy. The 3-year actuarial survival was 78% and the 3-year disease-free actuarial survival 68%. Seven patients were treated with intraperitoneal tumor antisera and 4/7 remain in complete remission as of this writing. The specificity of the antiserum has been demonstrated by immunoelectrophoresis in 4/4 patients, and by live-cell fluorescence in 1 patient. Rabbit IgG levels revealed significantly increasing titers in 4/6 patients following i.p. antiovarian antiserum. Radiolabeled IgG derived from the antiserum demonstrated tumor localization and correlation with conventional radiography and computerized axial tomography (CAT) scans in the 2 patients studied to date. Biomarker analysis reveals that free secretory protein (6/6), alpha globulin (5/6), and CEA (carcinoembryonic antigen) (3/6) were elevated in the 6 patients studied. Two patients whose disease progressed demonstrated elevated levels of all three biomarkers.

  9. Analytical Methods INAA and PIXE Applied to Characterization of Airborne Particulate Matter in Bandung, Indonesia

    International Nuclear Information System (INIS)

    Lestiani, D.D.; Santoso, M.

    2011-01-01

    Urbanization and industrial growth have deteriorated air quality and are major causes of air pollution. Air pollution through fine and ultra-fine particles is a serious threat to human health. The sources of air pollution must be known quantitatively through elemental characterization in order to design appropriate air quality management. Suitable methods for analysis of airborne particulate matter, such as nuclear analytical techniques, are badly needed to solve the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess the accuracy, and to ensure the reliability of the analytical results through a comparison of instrumental neutron activation analysis (INAA) and particle-induced X-ray emission (PIXE). Particle samples in the PM 2.5 and PM 2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed that there was generally a systematic difference between INAA and PIXE results, with the values obtained by PIXE lower than those determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, so INAA data are preferred, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA is still a prospective method, while PIXE, with its special capabilities, is a promising tool that could complement NAA's limitations in the determination of lead, sulphur and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment. (author)

  10. Stochastic Methods Applied to Power System Operations with Renewable Energy: A Review

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z. [Argonne National Lab. (ANL), Argonne, IL (United States); Liu, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Electric Reliability Council of Texas (ERCOT), Austin, TX (United States); Botterud, A. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-08-01

    Renewable energy resources have been rapidly integrated into power systems in many parts of the world, contributing to a cleaner and more sustainable supply of electricity. Wind and solar resources also introduce new challenges for system operations and planning in terms of economics and reliability because of their variability and uncertainty. Operational strategies based on stochastic optimization have been developed recently to address these challenges. In general terms, these stochastic strategies either embed uncertainties into the scheduling formulations (e.g., the unit commitment [UC] problem) in probabilistic forms or develop more appropriate operating reserve strategies to take advantage of advanced forecasting techniques. Other approaches to address uncertainty are also proposed, where operational feasibility is ensured within an uncertainty set of forecasting intervals. In this report, a comprehensive review is conducted to present the state of the art through Spring 2015 in the area of stochastic methods applied to power system operations with high penetration of renewable energy. Chapters 1 and 2 give a brief introduction and overview of power system and electricity market operations, as well as the impact of renewable energy and how this impact is typically considered in modeling tools. Chapter 3 reviews relevant literature on operating reserves and specifically probabilistic methods to estimate the need for system reserve requirements. Chapter 4 looks at stochastic programming formulations of the UC and economic dispatch (ED) problems, highlighting benefits reported in the literature as well as recent industry developments. Chapter 5 briefly introduces alternative formulations of UC under uncertainty, such as robust, chance-constrained, and interval programming. Finally, in Chapter 6, we conclude with the main observations from our review and important directions for future work.
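
    The two-stage structure of stochastic UC (commit units before the wind outcome is known, dispatch after it is revealed) can be sketched with a two-unit, two-scenario toy solved by enumeration; all numbers below are hypothetical, and real formulations use mixed-integer programming rather than brute force:

```python
from itertools import chain, combinations

units = {                # name: (capacity MW, marginal cost, no-load cost)
    "base":   (100.0, 20.0, 100.0),
    "peaker": (50.0, 80.0, 200.0),
}
demand = 160.0
scenarios = [(30.0, 0.5), (70.0, 0.5)]   # (wind MW, probability)
SHED_COST = 1000.0                       # penalty per MW of unserved load

def dispatch_cost(committed, wind):
    """Greedy merit-order dispatch of the committed units in one scenario."""
    net = max(demand - wind, 0.0)
    cost = 0.0
    for name in sorted(committed, key=lambda u: units[u][1]):
        cap, mc, _ = units[name]
        gen = min(cap, net)
        cost += gen * mc
        net -= gen
    return cost + net * SHED_COST        # shed whatever load remains

names = list(units)
all_commitments = chain.from_iterable(
    combinations(names, r) for r in range(len(names) + 1))
best_cost, best_commit = min(
    (sum(units[u][2] for u in c)                         # no-load costs
     + sum(p * dispatch_cost(c, w) for w, p in scenarios), c)
    for c in all_commitments)
print(best_commit)  # both units: the peaker hedges the low-wind scenario
```

    Committing only the base unit looks cheaper per scenario but exposes the system to load shedding when wind is low, which is exactly the trade-off that probabilistic reserve and stochastic UC formulations capture.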

  11. Holographic method coupled with an optoelectronic interface applied in the ionizing radiation dosimetry

    International Nuclear Information System (INIS)

    Nicolau-Rebigan, S.; Sporea, D.; Niculescu, V.I.R.

    2000-01-01

    The paper presents a holographic method applied in ionizing radiation dosimetry. It is possible to use two types of holographic interferometry: double-exposure holographic interferometry or fast real-time holographic interferometry. In this paper the applications of holographic interferometry to ionizing radiation dosimetry are presented. The determination of the accurate value of the dose delivered by an ionizing radiation source (released energy per mass unit) is a complex problem which imposes different solutions depending on experimental parameters; here it is solved with a double-exposure holographic interferometric method associated with an optoelectronic interface and a Z80 microprocessor. The method can determine the absorbed integral dose as well as the three-dimensional distribution of dose in a given volume. The paper presents some results obtained in radiation dosimetry. Original mathematical relations for the integral absorbed dose in irreversibly radiolyzing liquids were derived. Irradiation effects can be estimated from the holographic fringe displacement and density. To measure these parameters, the obtained holographic interferograms were picked up by a closed-circuit TV system in such a way that a selected TV line explores the picture along the direction of interest; using a specially designed interface, our Z80 microprocessor system captures data along the selected TV line. When the integral dose is to be measured, the microprocessor computes it from the information contained in the fringe distribution, according to the proposed formulae. Integral absorbed dose and spatial dose distribution can be estimated with an accuracy better than 4%. Some advantages of this method are outlined in comparison with conventional methods in radiation dosimetry. The paper also presents an original holographic set-up with an electronic interface, assisted by a Z80 microprocessor and used for nondestructive testing of transparent objects at the laser wavelength.

  12. A new method of identifying target groups for pronatalist policy applied to Australia.

    Directory of Open Access Journals (Sweden)

    Mengni Chen

    Full Text Available A country's total fertility rate (TFR) depends on many factors. Attributing changes in TFR to changes in policy is difficult, as they could easily be correlated with changes in the unmeasured drivers of TFR. A case in point is Australia, where both pronatalist effort and TFR increased in lockstep from 2001 to 2008 and then decreased. The global financial crisis or other unobserved confounders might explain both the decline in TFR and in pronatalist incentives after 2008. It is therefore difficult to estimate causal effects of policy using econometric techniques. The aim of this study is instead to look at the structure of the population to identify which subgroups most influence TFR. Specifically, we build a stochastic model relating TFR to the fertility rates of various subgroups and calculate the elasticity of TFR with respect to each rate. For each subgroup, the ratio of its elasticity to its group size is used to evaluate the subgroup's potential cost effectiveness as a pronatalist target. In addition, we measure the historical stability of group fertility rates, which indicates their propensity to change. Groups with a high effectiveness ratio and a high propensity to change are natural policy targets. We applied this new method to Australian data on fertility rates broken down by parity, age and marital status. The results show that targeting parity 3+ is more cost-effective than targeting lower parities. This study contributes to the literature on pronatalist policies by investigating the targeting of policies, and generates important implications for formulating cost-effective policies.
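
    The elasticity-to-size ranking can be sketched as follows, assuming for illustration that TFR is a share-weighted sum of subgroup fertility rates; the shares and rates below are hypothetical, not the Australian estimates:

```python
# Hypothetical subgroups: (share of women, fertility rate per woman).
groups = {
    "parity 0-1": (0.55, 1.2),
    "parity 2":   (0.30, 2.1),
    "parity 3+":  (0.15, 3.4),
}
tfr = sum(share * rate for share, rate in groups.values())
ratios = {}
for name, (share, rate) in groups.items():
    elasticity = share * rate / tfr    # d(log TFR) / d(log rate_g)
    ratios[name] = elasticity / share  # effectiveness per unit group size
best_target = max(ratios, key=ratios.get)
print(best_target)  # "parity 3+": highest rate gives the best ratio
```

    Under this simple weighted-sum assumption the ratio reduces to rate/TFR, so high-parity groups with above-average fertility rates naturally rank as the most cost-effective targets, consistent with the study's conclusion.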

  13. Multicriterial Hierarchy Methods Applied in Consumption Demand Analysis. The Case of Romania

    Directory of Open Access Journals (Sweden)

    Constantin Bob

    2008-03-01

    Full Text Available The basic information for computing the quantitative statistical indicators that characterize the demand for industrial products and services is collected by the national statistics organizations through a series of statistical surveys (most of them periodical and partial). The source of the data used in the present paper is a statistical investigation organized by the National Institute of Statistics, the "Family budgets survey", which collects information regarding household composition, income, expenditure, consumption and other aspects of the population's living standard. In 2005, in Romania, a person spent on average 391.2 RON monthly, about 115.1 Euros, on consumed food products and beverages, non-food products, services, investments and other taxes. 23% of this sum was spent on food products and beverages, 21.6% on non-food goods and 18.1% on payment for different services. There is a discrepancy between the different development regions in Romania regarding the composition of total household expenditure. For this reason, in the present paper we applied statistical methods for ranking the various development regions in Romania, using the shares of households' expenditure on categories of products and services as ranking criteria.
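
    A simple rank-aggregation scheme of the kind used in such multicriteria orderings can be sketched as follows; the regions and expenditure shares below are hypothetical, and the paper's actual hierarchy method may weight criteria differently:

```python
# Sum-of-ranks multicriteria ordering: rank regions on each criterion,
# then order them by total rank (smallest total = consistently highest).
regions = {
    "North-East": {"food": 0.27, "non_food": 0.20, "services": 0.15},
    "West":       {"food": 0.23, "non_food": 0.22, "services": 0.19},
    "Bucharest":  {"food": 0.20, "non_food": 0.23, "services": 0.22},
}
criteria = ["food", "non_food", "services"]
totals = {r: 0 for r in regions}
for c in criteria:
    # rank 1 = largest share on this criterion
    ordered = sorted(regions, key=lambda r: -regions[r][c])
    for rank, r in enumerate(ordered, 1):
        totals[r] += rank
ranking = sorted(totals, key=totals.get)  # smallest rank sum first
print(ranking[0])
```

    With these illustrative shares, Bucharest leads on two of the three criteria and therefore heads the aggregate ranking.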

  14. Bending stress modeling of dismountable furniture joints applied with a use of finite element method

    Directory of Open Access Journals (Sweden)

    Milan Šimek

    2009-01-01

    Full Text Available The presented work focuses on bending moment stress modeling of dismountable furniture joints with the use of the Finite Element Method. The joints are created from Minifix and Rondorfix cams combined with non-glued wooden dowels. Laminated particleboard of 18 mm thickness is used as the connected material. The chosen connectors are among the most widely applied in the furniture industry for case furniture. All results were compared to each other and to experimental testing by means of stiffness. The non-linear numerical model of the chosen joints was successfully created using the software Ansys Workbench. A detailed analysis of stress distribution in the joint was achieved with non-linear numerical simulation. The relationship between numerical simulation and experimental testing was shown by comparing stiffness tangents. The numerical simulation of RTA joint loads also demonstrated the important role of non-glued dowels in the tested joints. The low strength of particleboard in tension parallel to the surface (internal bond) is most likely the cause of joint failure. The results are applicable to the strength design of furniture with the aid of Computer Aided Engineering.

  15. Commissioning methods applied to the Hunterston 'B' AGR operator training simulator

    International Nuclear Information System (INIS)

    Hacking, D.

    1985-01-01

    The Hunterston 'B' full scope AGR Simulator, built for the South of Scotland Electricity Board by Marconi Instruments, encompasses all systems under direct and indirect control of the Hunterston central control room operators. The resulting breadth and depth of simulation together with the specification for the real time implementation of a large number of highly interactive detailed plant models leads to the classic problem of identifying acceptance and acceptability criteria. For example, whilst the ultimate criterion for acceptability must clearly be that within the context of the training requirement the simulator should be indistinguishable from the actual plant, far more measurable (i.e. less subjective) statements are required if a formal contractual acceptance condition is to be achieved. Within the framework, individual models and processes can have radically different acceptance requirements which therefore reflect on the commissioning approach applied. This paper discusses the application of a combination of quality assurance methods, design code results, plant data, theoretical analysis and operator 'feel' in the commissioning of the Hunterston 'B' AGR Operator Training Simulator. (author)

  16. AO–MW–PLS method applied to rapid quantification of teicoplanin with near-infrared spectroscopy

    Directory of Open Access Journals (Sweden)

    Jiemei Chen

    2017-01-01

    Full Text Available Teicoplanin (TCP) is an important lipoglycopeptide antibiotic produced by fermenting Actinoplanes teichomyceticus. The change in TCP concentration is important to measure in the fermentation process. In this study, a reagent-free and rapid quantification method for TCP in TCP–Tris–HCl mixture samples was developed using near-infrared (NIR) spectroscopy, focusing on the fermentation process for TCP. Absorbance optimization (AO) partial least squares (PLS) was proposed and integrated with moving-window (MW) PLS, called the AO–MW–PLS method, to select appropriate wavebands. A model set including various wavebands equivalent to the optimal AO–MW–PLS waveband was proposed based on statistical considerations. The public region of all equivalent wavebands was itself one of the equivalent wavebands. The obtained public regions were 1540–1868 nm for TCP and 1114–1310 nm for Tris. The root-mean-square error and correlation coefficient for leave-one-out cross validation were 0.046 mg mL−1 and 0.9998 for TCP, and 0.235 mg mL−1 and 0.9986 for Tris, respectively. All the models achieved highly accurate predictions, and the selected wavebands provide valuable references for designing specialized spectrometers. This study provides a valuable reference for further application of the proposed methods to TCP fermentation broth and to other fields of spectroscopic analysis.
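The moving-window idea behind waveband selection can be sketched with a toy simulation. This is not the AO–MW–PLS algorithm itself: ordinary least squares stands in for PLS, and the synthetic "informative" band at 1540–1640 nm is invented for the demonstration. The sketch slides a fixed-width window over the wavelength grid and keeps the window whose local model has the lowest leave-one-out cross-validated error:

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.arange(1100, 1900, 10)        # hypothetical NIR grid, nm
n_samples, n_wl = 40, wavelengths.size
conc = rng.uniform(0.5, 5.0, n_samples)        # hypothetical analyte concentration
spectra = rng.normal(0.0, 0.01, (n_samples, n_wl))
band = (wavelengths >= 1540) & (wavelengths <= 1640)   # assumed informative band
spectra[:, band] += np.outer(conc, np.full(band.sum(), 0.02))

def loo_rmse(X, y):
    """Leave-one-out RMSE of an ordinary least-squares fit (PLS stand-in)."""
    errs = []
    for i in range(len(y)):
        m = np.ones(len(y), bool)
        m[i] = False
        A = np.c_[X[m], np.ones(m.sum())]
        coef, *_ = np.linalg.lstsq(A, y[m], rcond=None)
        errs.append(np.r_[X[i], 1.0] @ coef - y[i])
    return float(np.sqrt(np.mean(np.square(errs))))

# Moving-window search: the window with the lowest cross-validated error
# should land on (or overlap) the informative band.
width = 10
best = min(range(n_wl - width), key=lambda s: loo_rmse(spectra[:, s:s + width], conc))
print(wavelengths[best], wavelengths[best + width - 1])
```

Real MW-PLS additionally varies the window width and number of latent variables; this sketch only conveys the windowed cross-validation loop.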

  17. Applying quantitative benefit-risk analysis to aid regulatory decision making in diagnostic imaging: methods, challenges, and opportunities.

    Science.gov (United States)

    Agapova, Maria; Devine, Emily Beth; Bresnahan, Brian W; Higashi, Mitchell K; Garrison, Louis P

    2014-09-01

    Health agencies making regulatory marketing-authorization decisions use qualitative and quantitative approaches to assess the expected benefits and expected risks associated with medical interventions. There is, however, no universal standard approach that regulatory agencies consistently use to conduct benefit-risk assessment (BRA) for pharmaceuticals or medical devices, including imaging technologies. Economics, health services research, and health outcomes research use quantitative approaches to elicit stakeholder preferences, identify priorities, and model health conditions and health intervention effects. Challenges to BRA of medical devices are outlined, highlighting additional barriers in radiology. Three quantitative methods (multi-criteria decision analysis, health outcomes modeling, and stated-choice survey) are assessed using criteria that are important in balancing the benefits and risks of medical devices and imaging technologies. To be useful in regulatory BRA, quantitative methods need to aggregate multiple benefits and risks, incorporate qualitative considerations, account for uncertainty, and make clear whose preferences/priorities are being used. Each quantitative method performs differently across these criteria, and little is known about how BRA estimates and conclusions vary by approach. While no single quantitative method is likely to be the strongest in all of the important areas, quantitative methods may have a place in BRA of medical devices and radiology. Quantitative BRA approaches have been more widely applied to medicines, with fewer BRAs for devices. Despite substantial differences in the characteristics of pharmaceuticals and devices, BRA methods may be as applicable to medical devices and imaging technologies as they are to pharmaceuticals. Further research to guide the development and selection of quantitative BRA methods for medical devices and imaging technologies is needed. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
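Of the three quantitative methods named in the abstract, multi-criteria decision analysis is the simplest to sketch. The weights and criterion scores below are invented placeholders for two hypothetical imaging devices; risks enter with negative scores:

```python
# Weights and criterion scores are invented placeholders, not from the paper.
criteria = {  # criterion: (weight, score device A, score device B)
    "diagnostic benefit": (0.5, 0.8, 0.6),
    "procedural risk":    (0.3, -0.4, -0.2),
    "radiation dose":     (0.2, -0.3, -0.5),
}

def weighted_score(device_index):
    """Additive multi-criteria score: sum of weight * criterion score."""
    return sum(w * scores[device_index] for w, *scores in criteria.values())

score_a, score_b = weighted_score(0), weighted_score(1)
print(score_a, score_b)
```

The additive form also makes explicit "whose preferences are being used": the weights encode them directly.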

  18. Life Expectancies Applied to Specific Statuses: a History of the Indicators and the Methods of Calculation (Population, 3, 1998)

    OpenAIRE

    N. Brouard; J.-M. Robine; E. Cambois

    1999-01-01

    Cambois (Emmanuelle), Robine (Jean-Marie), Brouard (Nicolas).- Life Expectancies Applied to Specific Statuses: A History of the Indicators and the Methods of Calculation Indicators of life expectancy applied to specific statuses, such as the state of health or professional status, were introduced at the end of the 1930s and are currently the object of renewed interest. Because they relate mortality to different domains (health, professional activity) applied life expectancies reflect simultan...

  19. Applying polarity rapid assessment method and ultrafiltration to characterize NDMA precursors in wastewater effluents.

    Science.gov (United States)

    Chen, Chao; Leavey, Shannon; Krasner, Stuart W; Mel Suffet, I H

    2014-06-15

    Certain nitrosamines in water are disinfection byproducts that are probable human carcinogens. Nitrosamines have diverse and complex precursors that include effluent organic matter, some anthropogenic chemicals, and natural (likely non-humic) substances. An easy and selective tool was first developed to characterize nitrosamine precursors in treated wastewaters, including different process effluents. This tool takes advantage of the polarity rapid assessment method (PRAM) and ultrafiltration (UF) (molecular weight distribution) to locate the fractions with the strongest contributions to the nitrosamine precursor pool in the effluent organic matter. Strong cation exchange (SCX) and C18 solid-phase extraction cartridges were used for their high selectivity for nitrosamine precursors. The details of PRAM operation, such as cartridge clean-up, capacity, pH influence, and quality control, were included in this paper, as well as the main parameters of UF operation. Preliminary testing of the PRAM/UF method with effluents from one wastewater treatment plant gave very informative results. SCX retained 45-90% of the N-nitrosodimethylamine (NDMA) formation potential (FP), a measure of the precursors, in secondary and tertiary wastewater effluents. These results are consistent with NDMA precursors likely having a positively charged amine group. C18 adsorbed 30-45% of the NDMAFP, which indicates that a substantial portion of these precursors were non-polar. The small molecular weight (MW) (10 kDa) fractions obtained from UF were the primary contributors to NDMAFP. The combination of PRAM and UF provides important information on the characteristics of nitrosamine precursors in water with easy operation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. New Multigrid Method Including Elimination Algorithm Based on High-Order Vector Finite Elements in Three Dimensional Magnetostatic Field Analysis

    Science.gov (United States)

    Hano, Mitsuo; Hotta, Masashi

    A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. It is often found, however, that multigrid solutions do not converge to the ICCG solutions. An elimination algorithm for the constant term, using a null space of the coefficient matrix, is also described. In a three-dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.
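The smoother/coarse-solve structure described above can be illustrated on a much simpler problem. The sketch below is a toy two-level cycle for the 1D Poisson equation -u'' = f with zero boundary values, using Gauss-Seidel as the smoother; the coarse system is solved directly, standing in for the ICCG solve used in the paper (which works on vector finite elements, not this scalar model problem):

```python
import numpy as np

n = 63                                  # fine-grid interior points (odd)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * x)        # manufactured so u = sin(pi x)

def gauss_seidel(u, f, sweeps):
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = 0.5 * (left + right + h * h * f[i])
    return u

def residual(u, f):
    au = (2 * u - np.r_[0.0, u[:-1]] - np.r_[u[1:], 0.0]) / h**2
    return f - au

nc = (n - 1) // 2                       # coarse-grid interior points
H = 2 * h
Ac = (np.diag(np.full(nc, 2.0)) - np.diag(np.ones(nc - 1), 1)
      - np.diag(np.ones(nc - 1), -1)) / H**2

u = np.zeros(n)
for _ in range(10):                     # ten two-grid V(2,2) cycles
    u = gauss_seidel(u, f, 2)           # pre-smoothing
    r = residual(u, f)
    rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])   # full weighting
    ec = np.linalg.solve(Ac, rc)        # coarse-grid correction (direct)
    e = np.zeros(n)
    e[1::2] = ec                        # coincident fine points
    pad = np.r_[0.0, ec, 0.0]
    e[0::2] = 0.5 * (pad[:-1] + pad[1:])  # linear interpolation between them
    u += e
    u = gauss_seidel(u, f, 2)           # post-smoothing

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)
```

After a few cycles the algebraic error is driven well below the O(h²) discretization error, which is the behavior a multigrid cycle is designed to deliver.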

  1. A Study of the Efficiency of Spatial Indexing Methods Applied to Large Astronomical Databases

    Science.gov (United States)

    Donaldson, Tom; Berriman, G. Bruce; Good, John; Shiao, Bernie

    2018-01-01

    Spatial indexing of astronomical databases generally uses quadrature methods, which partition the sky into cells used to create an index (usually a B-tree) written as a database column. We report the results of a study comparing the performance of two common indexing methods, HTM and HEALPix, on Solaris and Windows database servers installed with a PostgreSQL database, and on a Windows server installed with MS SQL Server. The indexing was applied to the 2MASS All-Sky Catalog and to the Hubble Source Catalog. On each server, the study compared indexing performance by submitting 1 million queries at each index level, with random sky positions and random cone-search radii computed on a logarithmic scale between 1 arcsec and 1 degree, and measuring the time to complete the query and write the output. These simulated queries, intended to model realistic use patterns, were run in a uniform way on many combinations of indexing method and indexing level. The query times in all simulations are strongly I/O-bound and are linear with the number of records returned for large numbers of sources. There are, however, considerable differences between simulations, which reveal that hardware I/O throughput is a more important factor in managing the performance of a DBMS than the choice of indexing scheme. The choice of index itself is relatively unimportant: for comparable index levels, the performance is consistent within the scatter of the timings. At small index levels (large cells; e.g. level 4, cell size 3.7 deg), there is large scatter in the timings because of wide variations in the number of sources found in the cells. At larger index levels, performance improves and scatter decreases, but the improvement at level 8 (cell size 14 arcmin) and higher is masked to some extent by the timing scatter caused by the range of query sizes. At very high levels (20; 0.0004 arcsec), the granularity of the cells becomes so high that a large number of extraneous empty cells begin to degrade performance.
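The cell-based cone search that both HTM and HEALPix support can be sketched with plain RA/Dec binning. This toy stand-in (not either real scheme: RA wrap-around at 0/360 deg and polar distortion are ignored for brevity, and the catalog is invented) shows why cell size matters: the index narrows the search to candidate cells, and an exact angular-distance test then filters the extraneous sources that coarse cells let through:

```python
import math
from collections import defaultdict

CELL = 1.0  # cell size in degrees; coarser cells -> more extraneous sources

def cell_of(ra, dec):
    return (int(ra // CELL), int((dec + 90.0) // CELL))

def ang_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (haversine formula)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

catalog = [(10.0, 5.0), (10.4, 5.2), (12.0, 5.0), (10.1, 4.9)]  # toy (RA, Dec)
index = defaultdict(list)
for ra, dec in catalog:
    index[cell_of(ra, dec)].append((ra, dec))

def cone_search(ra0, dec0, radius):
    cx, cy = cell_of(ra0, dec0)
    reach = int(radius // CELL) + 1     # how many cells the cone can span
    return [(ra, dec)
            for i in range(cx - reach, cx + reach + 1)
            for j in range(cy - reach, cy + reach + 1)
            for ra, dec in index.get((i, j), ())
            if ang_sep(ra0, dec0, ra, dec) <= radius]

hits = cone_search(10.0, 5.0, 0.5)
print(sorted(hits))
```

In a DBMS the cell id plays the role of the indexed column, and the candidate-cell scan becomes an index-range query.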

  2. Applying social science and public health methods to community-based pandemic planning.

    Science.gov (United States)

    Danforth, Elizabeth J; Doying, Annette; Merceron, Georges; Kennedy, Laura

    2010-11-01

    Pandemic influenza is a unique threat to communities, affecting schools, businesses, health facilities and individuals in ways not seen in other emergency events. This paper aims to outline a local government project which utilised public health and social science research methods to facilitate the creation of an emergency response plan for pandemic influenza coincidental to the early stages of the 2009 H1N1 ('swine flu') outbreak. A multi-disciplinary team coordinated the creation of a pandemic influenza emergency response plan which utilised emergency planning structure and concepts and encompassed a diverse array of county entities including schools, businesses, community organisations, government agencies and healthcare facilities. Lessons learned from this project focus on the need for (1) maintaining relationships forged during the planning process, (2) targeted public health messaging, (3) continual evolution of emergency plans, (4) mutual understanding of emergency management concepts by business and community leaders, and (5) regional coordination with entities outside county boundaries.

  3. Analysis of flow boiling heat transfer in narrow annular gaps applying the design of experiments method

    Directory of Open Access Journals (Sweden)

    Gunar Boye

    2015-06-01

    Full Text Available The axial heat transfer coefficient during flow boiling of n-hexane was measured using infrared thermography to determine the axial wall temperature in three geometrically similar annular gaps with different widths (s = 1.5 mm, s = 1 mm, s = 0.5 mm). During the design and evaluation process, the methods of statistical experimental design were applied. The following factors/parameters were varied: the heat flux q̇ = 30–190 kW/m², the mass flux ṁ = 30–700 kg/(m² s), the vapor quality ẋ = 0.2–0.7, and the subcooled inlet temperature T_U = 20–60 K. The test sections with gap widths of s = 1.5 mm and s = 1 mm had very similar heat transfer characteristics. The heat transfer coefficient increases significantly in the range of subcooled boiling, and after reaching a maximum at the transition to saturated flow boiling, it drops almost monotonically with increasing vapor quality. With a gap width of 0.5 mm, however, the heat transfer coefficient in the range of saturated flow boiling first has a downward trend and then increases at higher vapor qualities. For each test section, two correlations between the heat transfer coefficient and the operating parameters were created. The comparison also shows a clear trend of an increasing heat transfer coefficient with increasing heat flux for the test sections with s = 1.5 mm and s = 1.0 mm, but with increasing vapor quality this trend is reversed for the 0.5 mm test section.

  4. A new method of identifying target groups for pronatalist policy applied to Australia

    Science.gov (United States)

    Chen, Mengni; Lloyd, Chris J.

    2018-01-01

    A country’s total fertility rate (TFR) depends on many factors. Attributing changes in TFR to changes of policy is difficult, as they could easily be correlated with changes in the unmeasured drivers of TFR. A case in point is Australia where both pronatalist effort and TFR increased in lock step from 2001 to 2008 and then decreased. The global financial crisis or other unobserved confounders might explain both the reducing TFR and pronatalist incentives after 2008. Therefore, it is difficult to estimate causal effects of policy using econometric techniques. The aim of this study is to instead look at the structure of the population to identify which subgroups most influence TFR. Specifically, we build a stochastic model relating TFR to the fertility rates of various subgroups and calculate elasticity of TFR with respect to each rate. For each subgroup, the ratio of its elasticity to its group size is used to evaluate the subgroup’s potential cost effectiveness as a pronatalist target. In addition, we measure the historical stability of group fertility rates, which measures propensity to change. Groups with a high effectiveness ratio and also high propensity to change are natural policy targets. We applied this new method to Australian data on fertility rates broken down by parity, age and marital status. The results show that targeting parity 3+ is more cost-effective than lower parities. This study contributes to the literature on pronatalist policies by investigating the targeting of policies, and generates important implications for formulating cost-effective policies. PMID:29425220
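If TFR is modeled as a share-weighted sum of subgroup fertility rates, TFR = Σ_g w_g·f_g, then the elasticity of TFR with respect to each group rate f_g is w_g·f_g / TFR, and dividing by the group share w_g gives the per-capita "effectiveness ratio" used to compare targets. A minimal sketch with invented numbers (the paper's actual subgroups combine parity, age and marital status):

```python
# Subgroup shares and fertility rates below are invented placeholders.
groups = {  # subgroup: (population share w, fertility rate f)
    "parity 0":  (0.40, 0.9),
    "parity 1":  (0.25, 1.1),
    "parity 2":  (0.20, 0.7),
    "parity 3+": (0.15, 0.5),
}
tfr = sum(w * f for w, f in groups.values())

# Elasticity of TFR with respect to each group's rate, and the
# effectiveness ratio (elasticity divided by group size).
elasticity = {g: w * f / tfr for g, (w, f) in groups.items()}
effectiveness = {g: elasticity[g] / w for g, (w, f) in groups.items()}
print(tfr, effectiveness)
```

Because TFR is linear in the group rates, the elasticities sum to one; the effectiveness ratio reduces to f_g / TFR, so under this simple model low-share, moderate-rate groups can still be attractive targets.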

  5. Proposal of inspection method of radiation protection applied to nuclear medicine establishments

    International Nuclear Information System (INIS)

    Mendes, Leopoldino da Cruz Gouveia

    2003-01-01

    The principal objective of this work is to implement an impartial and efficient inspection method ensuring correct and safe use of ionizing radiation in the field of nuclear medicine. The radiological protection model was tested in 113 Nuclear Medicine Services (NMSs) all over the country, on a biannual analysis frequency (1996, 1998, 2000 and 2002). The data sheet comprised general information about the structure of the NMS and a technical approach. In the analytical process, a methodology of assigning different importance levels to each of the 82 features was adopted, based on the risk factors stated in the CNEN NE standards and in the IAEA recommendations. Whenever a feature does not meet one of the rules above, it corresponds to a radioprotection fault and is assigned a grade. The sum of these grades classifies the NMS into one of three ranges: operating without restriction (100 points and below); operating with restriction (between 100 and 300 points); temporary shutdown (300 points and above). Allowing the second group to continue operating should be tied to a defined and restricted period of time (six to twelve months), considered long enough for the NMS to solve its problems, with a new evaluation proceeding then. The NMSs classified in the third group are supposed to go back into operation only when they fulfil all the pending radioprotection requirements. Meanwhile, until the next regular evaluation, a multiplication factor of 2^n was applied to recalcitrant NMSs, where n is the number of unresolved occurrences. The previous establishment of these radioprotection items, with their respective grades, excludes subjective and personal values from the judgement and technical evaluation of the institutions. (author)
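The classification and penalty arithmetic described above can be sketched directly (the thresholds and the 2^n multiplier follow the abstract; the per-feature grading of the 82 items is not reproduced):

```python
# Grade thresholds follow the abstract; per-feature grades are not modeled.
def classify(total_points):
    if total_points <= 100:
        return "operating without restriction"
    if total_points < 300:
        return "operating with restriction"
    return "temporary shutdown"

def reevaluation_points(total_points, n_unresolved):
    """Recalcitrant services have their score multiplied by 2**n."""
    return total_points * 2 ** n_unresolved

print(classify(80))
print(classify(reevaluation_points(120, 2)))   # 120 * 4 = 480
```

The exponential multiplier means a service that keeps ignoring the same faults is pushed into the shutdown range after very few re-evaluations.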

  6. Feasibility study of applying reactor oscillator phase method at the RB reactor; Razmatranje mogucnosti primene fazne metode reaktorskog oscilatora na reaktoru RB

    Energy Technology Data Exchange (ETDEWEB)

    Petrovic, M; Kocic, A; Markovic, V [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1965-11-15

    This paper describes the principles of the amplitude and phase methods for applying a reactor oscillator; the experimental procedure and the choice of optimum parameters for using the reactor oscillator at the RB reactor, depending on the values of the absorption properties of the moderator and construction materials. A short description of the oscillator and the electronic equipment is included.

  7. A semi-automatic calibration method for seismic arrays applied to an Alaskan array

    Science.gov (United States)

    Lindquist, K. G.; Tibuleac, I. M.; Hansen, R. A.

    2001-12-01

    Well-calibrated, small (less than 22 km) aperture seismic arrays are of great importance for event location and characterization. We have implemented the crosscorrelation method of Tibuleac and Herrin (Seis. Res. Lett. 1997) as a semi-automatic procedure applicable to any seismic array. With this we are able to process thousands of phases in several days of computer time on a Sun Blade 1000 workstation. Complicated geology beneath the array elements and elevation differences amongst the array stations made station corrections necessary. 328 core phases (including PcP, PKiKP, PKP, PKKP) were used to determine the static corrections. To demonstrate this application and method, we have analyzed P and PcP arrivals at the ILAR array (Eielson, Alaska) between the years 1995-2000. The arrivals were picked by the PIDC for events (mb>4.0) well located by the USGS. We calculated backazimuth and horizontal velocity residuals for all events. We observed large backazimuth residuals for regional and near-regional phases. We discuss the possibility of a dipping Moho (strike E-W, dip N) beneath the array versus other local structure that would produce the residuals.

  8. The GMD Method for Inductance Calculation Applied to Conductors with Skin Effect

    Directory of Open Access Journals (Sweden)

    H. A. Aebischer

    2017-09-01

    Full Text Available The GMD method (geometric mean distance) for calculating inductance offers undoubted advantages over other methods. So far, however, it seemed to be limited to the case where the current is uniformly distributed over the cross section of the conductor, i.e. to DC (direct current). In this paper, the definition of the GMD is extended to include the cases of nonuniform distribution observed at higher frequencies as a result of skin effect. An exact relation between the GMD and the internal inductance per unit length for infinitely long conductors of circularly symmetric cross section is derived. It enables much simpler derivations of Maxwell's analytical expressions for the GMD of circular and annular disks than were known before. Its salient application, however, is the derivation of exact expressions for the GMD of infinitely long round wires and tubular conductors with skin effect. These expressions are then used to verify the consistency of the extended definition of the GMD. Further, approximate formulae for the GMD of round wires with skin effect, based on elementary functions, are discussed. Total inductances calculated with the help of the derived formulae for the GMD, with and without skin effect, are compared to measurement results from the literature. For conductors of square cross section, an analytical approximation for the GMD with skin effect based on elementary functions is presented. It is shown that it allows the total inductance of such conductors to be calculated for frequencies from DC up to 25 GHz to a precision of better than 1 %.
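Maxwell's classical DC result mentioned above can be checked numerically: the self-GMD of a circular disk of radius R (uniform current) is R·e^(−1/4), where ln(GMD) is defined as the average of ln|r₁ − r₂| over pairs of points drawn uniformly from the disk. A Monte Carlo sketch of that average:

```python
import numpy as np

rng = np.random.default_rng(42)
R, N = 1.0, 200_000

def disk_points(n):
    """Points uniform over a disk of radius R."""
    r = R * np.sqrt(rng.uniform(size=n))   # sqrt => uniform area density
    t = rng.uniform(0.0, 2.0 * np.pi, n)
    return r * np.cos(t), r * np.sin(t)

# ln(GMD) = mean of ln of pairwise distances, estimated from N random pairs.
x1, y1 = disk_points(N)
x2, y2 = disk_points(N)
gmd = np.exp(np.mean(np.log(np.hypot(x1 - x2, y1 - y2))))
print(gmd, R * np.exp(-0.25))   # both ~0.7788 for R = 1
```

The skin-effect extension in the paper replaces the uniform area weighting with the actual current density, which is what shifts the GMD at high frequency.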

  9. Risk-based analysis methods applied to nuclear power plant technical specifications

    International Nuclear Information System (INIS)

    Wagner, D.P.; Minton, L.A.; Gaertner, J.P.

    1989-01-01

    A computer-aided methodology and practical applications of risk-based evaluation of technical specifications are described. The methodology, developed for use by the utility industry, is part of the overall process of improving nuclear power plant technical specifications. The SOCRATES computer program uses the results of a probabilistic risk assessment or a system-level risk analysis to calculate changes in risk due to changes in the surveillance test interval and/or the allowed outage time stated in the technical specification. The computer program can accommodate various testing strategies (such as staggered or simultaneous testing) to allow modeling of component testing as it is carried out at the plant. The methods and computer program are an integral part of a larger decision process aimed at determining the benefits of technical specification changes. These benefits can include cost savings to the utilities by reducing forced shutdowns and decreasing labor requirements for test and maintenance activities, with no adverse impact on risk. The methodology and the SOCRATES computer program have been used extensively to evaluate several actual technical specifications in case studies demonstrating the methods. Summaries of these applications show the types of results achieved and the usefulness of risk-based evaluation in improving technical specifications.
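A hedged, textbook-style illustration of the quantity such evaluations trade off (not the SOCRATES program's actual model): a standby component with failure rate λ, tested every T hours, has time-average unavailability q ≈ λT/2, so lengthening the surveillance test interval raises q and, through it, the plant risk:

```python
# All numerical inputs below are assumed for illustration.
lam = 1.0e-5                 # failures per hour (assumed)
risk_weight = 2.0e-4         # risk contribution per unit unavailability (assumed)

def unavailability(test_interval_h):
    """Time-average unavailability of a periodically tested standby component."""
    return lam * test_interval_h / 2.0

monthly, quarterly = unavailability(720.0), unavailability(2190.0)
print(monthly, quarterly, (quarterly - monthly) * risk_weight)
```

The decision process described in the abstract weighs this risk increase against the cost savings of less frequent testing.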

  10. Estimation Methods for Infinite-Dimensional Systems Applied to the Hemodynamic Response in the Brain

    KAUST Repository

    Belkhatir, Zehor

    2018-05-01

    Infinite-Dimensional Systems (IDSs) which have been made possible by recent advances in mathematical and computational tools can be used to model complex real phenomena. However, due to physical, economic, or stringent non-invasive constraints on real systems, the underlying characteristics for mathematical models in general (and IDSs in particular) are often missing or subject to uncertainty. Therefore, developing efficient estimation techniques to extract missing pieces of information from available measurements is essential. The human brain is an example of IDSs with severe constraints on information collection from controlled experiments and invasive sensors. Investigating the intriguing modeling potential of the brain is, in fact, the main motivation for this work. Here, we will characterize the hemodynamic behavior of the brain using functional magnetic resonance imaging data. In this regard, we propose efficient estimation methods for two classes of IDSs, namely Partial Differential Equations (PDEs) and Fractional Differential Equations (FDEs). This work is divided into two parts. The first part addresses the joint estimation problem of the state, parameters, and input for a coupled second-order hyperbolic PDE and an infinite-dimensional ordinary differential equation using sampled-in-space measurements. Two estimation techniques are proposed: a Kalman-based algorithm that relies on a reduced finite-dimensional model of the IDS, and an infinite-dimensional adaptive estimator whose convergence proof is based on the Lyapunov approach. We study and discuss the identifiability of the unknown variables for both cases. The second part contributes to the development of estimation methods for FDEs where major challenges arise in estimating fractional differentiation orders and non-smooth pointwise inputs. First, we propose a fractional high-order sliding mode observer to jointly estimate the pseudo-state and input of commensurate FDEs. Second, we propose a

  11. Difficulties in applying pure Kohn-Sham density functional theory electronic structure methods to protein molecules

    Science.gov (United States)

    Rudberg, Elias

    2012-02-01

    Self-consistency-based Kohn-Sham density functional theory (KS-DFT) electronic structure calculations with Gaussian basis sets are reported for a set of 17 protein-like molecules with geometries obtained from the Protein Data Bank. It is found that in many cases such calculations do not converge due to vanishing HOMO-LUMO gaps. A sequence of polyproline I helix molecules is also studied and it is found that self-consistency calculations using pure functionals fail to converge for helices longer than six proline units. Since the computed gap is strongly correlated to the fraction of Hartree-Fock exchange, test calculations using both pure and hybrid density functionals are reported. The tested methods include the pure functionals BLYP, PBE and LDA, as well as Hartree-Fock and the hybrid functionals BHandHLYP, B3LYP and PBE0. The effect of including solvent molecules in the calculations is studied, and it is found that the inclusion of explicit solvent molecules around the protein fragment in many cases gives a larger gap, but that convergence problems due to vanishing gaps still occur in calculations with pure functionals. In order to achieve converged results, some modeling of the charge distribution of solvent water molecules outside the electronic structure calculation is needed. Representing solvent water molecules by a simple point charge distribution is found to give non-vanishing HOMO-LUMO gaps for the tested protein-like systems also for pure functionals.

  12. Difficulties in applying pure Kohn-Sham density functional theory electronic structure methods to protein molecules

    International Nuclear Information System (INIS)

    Rudberg, Elias

    2012-01-01

    Self-consistency-based Kohn-Sham density functional theory (KS-DFT) electronic structure calculations with Gaussian basis sets are reported for a set of 17 protein-like molecules with geometries obtained from the Protein Data Bank. It is found that in many cases such calculations do not converge due to vanishing HOMO-LUMO gaps. A sequence of polyproline I helix molecules is also studied and it is found that self-consistency calculations using pure functionals fail to converge for helices longer than six proline units. Since the computed gap is strongly correlated to the fraction of Hartree-Fock exchange, test calculations using both pure and hybrid density functionals are reported. The tested methods include the pure functionals BLYP, PBE and LDA, as well as Hartree-Fock and the hybrid functionals BHandHLYP, B3LYP and PBE0. The effect of including solvent molecules in the calculations is studied, and it is found that the inclusion of explicit solvent molecules around the protein fragment in many cases gives a larger gap, but that convergence problems due to vanishing gaps still occur in calculations with pure functionals. In order to achieve converged results, some modeling of the charge distribution of solvent water molecules outside the electronic structure calculation is needed. Representing solvent water molecules by a simple point charge distribution is found to give non-vanishing HOMO-LUMO gaps for the tested protein-like systems also for pure functionals. (fast track communication)

  13. Costs of Rabies Control: An Economic Calculation Method Applied to Flores Island

    Science.gov (United States)

    Wera, Ewaldus; Velthuis, Annet G. J.; Geong, Maria; Hogeveen, Henk

    2013-01-01

    Background Rabies is a zoonotic disease that, in most human cases, is fatal once clinical signs appear. The disease transmits to humans through an animal bite. Dogs are the main vector of rabies in humans on Flores Island, Indonesia, resulting in about 19 human deaths each year. Currently, rabies control measures on Flores Island include mass vaccination and culling of dogs, laboratory diagnostics of suspected rabid dogs, putting imported dogs in quarantine, and pre- and post-exposure treatment (PET) of humans. The objective of this study was to estimate the costs of the applied rabies control measures on Flores Island. Methodology/principal findings A deterministic economic model was developed to calculate the costs of the rabies control measures and their individual cost components from 2000 to 2011. The inputs for the economic model were obtained from (i) relevant literature, (ii) available data on Flores Island, and (iii) experts such as responsible policy makers and veterinarians involved in rabies control measures in the past. As a result, the total costs of rabies control measures were estimated to be US$1.12 million (range: US$0.60–1.47 million) per year. The costs of culling roaming dogs were the highest portion, about 39 percent of the total costs, followed by PET (35 percent), mass vaccination (24 percent), pre-exposure treatment (1.4 percent), and others (1.3 percent) (dog-bite investigation, diagnostic of suspected rabid dogs, trace-back investigation of human contact with rabid dogs, and quarantine of imported dogs). Conclusions/significance This study demonstrates that rabies has a large economic impact on the government and dog owners. Control of rabies by culling dogs is relatively costly for the dog owners in comparison with other measures. Providing PET for humans is an effective way to prevent rabies, but is costly for government and does not provide a permanent solution to rabies in the future. PMID:24386244
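The deterministic cost model described above amounts to summing, over control measures, units per year times unit cost, and reporting each measure's share. A sketch with invented placeholder numbers (the study's own estimate of about US$1.12 million per year came from far more detailed inputs for 2000-2011):

```python
# Unit counts and unit costs are invented placeholders, not the study's data.
measures = {  # measure: (units per year, unit cost in US$)
    "culling of roaming dogs": (20000, 22.0),
    "PET of humans":           (2600, 150.0),
    "mass dog vaccination":    (180000, 1.5),
}
total = sum(n * c for n, c in measures.values())
shares = {m: n * c / total for m, (n, c) in measures.items()}
print(total, shares)
```

Reporting shares rather than absolute figures is what lets the authors conclude that culling dominates the cost of control.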

  14. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    OpenAIRE

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved...
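The age-correction option works by subtracting the expected ageing contribution from the raw threshold shift. The sketch below only illustrates that arithmetic; the table is an invented placeholder, NOT OSHA's actual age-correction table:

```python
# Invented placeholder table: age -> expected age-related shift (dB).
AGE_CORRECTION_DB = {
    25: 5, 30: 7, 35: 9, 40: 12, 45: 15, 50: 19, 55: 23, 60: 27,
}

def corrected_shift(current_hl, baseline_hl, current_age, baseline_age):
    """Threshold shift minus the portion attributed to ageing."""
    ageing = AGE_CORRECTION_DB[current_age] - AGE_CORRECTION_DB[baseline_age]
    return (current_hl - baseline_hl) - ageing

# A raw 25 dB shift between ages 30 and 55 leaves 9 dB after removing
# the 16 dB this toy table attributes to ageing.
print(corrected_shift(40, 15, 55, 30))
```

Extending the real tables beyond age 60, as the paper proposes, just means adding rows so older workers' ageing contribution is not truncated.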

  15. Solution and study of nodal neutron transport equation applying the LTS{sub N}-DiagExp method

    Energy Technology Data Exchange (ETDEWEB)

    Hauser, Eliete Biasotto; Pazos, Ruben Panta [Pontificia Univ. Catolica do Rio Grande do Sul, Porto Alegre, RS (Brazil). Faculdade de Matematica]. E-mail: eliete@pucrs.br; rpp@mat.pucrs.br; Vilhena, Marco Tullio de [Pontificia Univ. Catolica do Rio Grande do Sul, Porto Alegre, RS (Brazil). Instituto de Matematica]. E-mail: vilhena@mat.ufrgs.br; Barros, Ricardo Carvalho de [Universidade do Estado, Nova Friburgo, RJ (Brazil). Instituto Politecnico]. E-mail: ricardo@iprj.uerj.br

    2003-07-01

    In this paper we report advances in the three-dimensional nodal discrete-ordinates approximations of the neutron transport equation for Cartesian geometry. We use the combined collocation method for the angular variables and a nodal approach for the spatial variables. By nodal approach we mean the iterated transverse integration of the S{sub N} equations. This procedure leads to a set of one-dimensional averaged angular fluxes in each spatial variable. The resulting system of equations is solved with the LTS{sub N} method, first applying the Laplace transform to the set of nodal S{sub N} equations and then obtaining the solution by symbolic computation. We include the LTS{sub N} method by diagonalization to solve the nodal neutron transport equation and then we outline the convergence of these nodal-LTS{sub N} approximations with the help of a norm associated with the quadrature formula used to approximate the integral term of the neutron transport equation. (author)

  16. Assessing the Applicability of Currently Available Methods for Attributing Foodborne Disease to Sources, Including Food and Food Commodities

    DEFF Research Database (Denmark)

    Pires, Sara Monteiro

    2013-01-01

    on the public health question being addressed, on the data requirements, on advantages and limitations of the method, and on the data availability of the country or region in question. Previous articles have described available methods for source attribution, but have focused only on foodborne microbiological...

  17. Higher Order, Hybrid BEM/FEM Methods Applied to Antenna Modeling

    Science.gov (United States)

    Fink, P. W.; Wilton, D. R.; Dobbins, J. A.

    2002-01-01

    In this presentation, the authors address topics relevant to higher order modeling using hybrid BEM/FEM formulations. The first of these is the limitation on convergence rates imposed by geometric modeling errors in the analysis of scattering by a dielectric sphere. The second topic is the application of an Incomplete LU Threshold (ILUT) preconditioner to solve the linear system resulting from the BEM/FEM formulation. The final topic is the application of the higher order BEM/FEM formulation to antenna modeling problems. The authors have previously presented work on the benefits of higher order modeling. To achieve these benefits, special attention is required in the integration of singular and near-singular terms arising in the surface integral equation. Several methods for handling these terms have been presented. It is also well known that achieving the high rates of convergence afforded by higher order bases may also require the employment of higher order geometry models. A number of publications have described the use of quadratic elements to model curved surfaces. The authors have shown in an EFIE formulation, applied to scattering by a PEC sphere, that quadratic order elements may be insufficient to prevent the domination of modeling errors. In fact, on a PEC sphere with radius r = 0.58 Lambda(sub 0), a quartic order geometry representation was required to obtain a convergence benefit from quadratic bases when compared to the convergence rate achieved with linear bases. Initial trials indicate that, for a dielectric sphere of the same radius, requirements on the geometry model are not as severe as for the PEC sphere. The authors will present convergence results for higher order bases as a function of the geometry model order in the hybrid BEM/FEM formulation applied to dielectric spheres. It is well known that the system matrix resulting from the hybrid BEM/FEM formulation is ill-conditioned. For many real applications, a good preconditioner is required.

  18. Four Methods for Completing the Conceptual Development Phase of Applied Theory Building Research in HRD

    Science.gov (United States)

    Storberg-Walker, Julia; Chermack, Thomas J.

    2007-01-01

    The purpose of this article is to describe four methods for completing the conceptual development phase of theory building research for single or multiparadigm research. The four methods selected for this review are (1) Weick's method of "theorizing as disciplined imagination" (1989); (2) Whetten's method of "modeling as theorizing" (2002); (3)…

  19. Mofettes - Investigation of Natural CO2 Springs - Insights and Methods applied

    Science.gov (United States)

    Lübben, A.; Leven, C.

    2014-12-01

    The quantification of carbon dioxide concentrations and fluxes leaking from the subsurface into the atmosphere is highly relevant in several research fields such as climate change, CCS, volcanic activity, or earthquake monitoring. Many of the areas with elevated carbon dioxide degassing pose the problem that, under the given conditions, a systematic investigation of the relevant processes is only possible to a limited extent (e.g. in terms of spatial extent, accessibility, or hazardous conditions). The upper Neckar valley in Southwest Germany is a region of enhanced natural subsurface CO2 concentrations and mass fluxes of Tertiary volcanic origin. At the beginning of the twentieth century several companies started industrial mining of CO2. The decreasing productivity of the CO2 springs led to the complete shutdown of the industry in 1995, and the existing boreholes were sealed. However, there is evidence that the reservoir, located in the deposits of the Lower Triassic, started to refill during the last 20 years. The CO2 springs replenished, and a variety of phenomena (e.g. mofettes and perished flora and fauna) indicate an active process of large-scale CO2 exhalation. This easy-to-access site serves as a perfect example of a natural analog to a leaky CCS site, including abandoned boreholes and a suitable porous rock reservoir in the subsurface. During extensive field campaigns we applied several monitoring techniques, including measurements of soil gas concentrations, mass fluxes, electrical resistivity, and soil and atmospheric parameters. The aim was to investigate and quantify mass fluxes and the effect of variations in, e.g., temperature and soil moisture on mass flux intensity. Furthermore, we investigated the effect of proximity to a mofette on soil parameters such as electrical conductivity and soil CO2 concentrations. In times of a changing climate due to greenhouse gases, regions featuring natural CO2 springs demand intensive investigation.

  20. Compositions of graphene materials with metal nanostructures and microstructures and methods of making and using including pressure sensors

    KAUST Repository

    Chen, Ye; Khashab, Niveen M.; Tao, Jing

    2017-01-01

    Composition comprising at least one graphene material and at least one metal. The metal can be in the form of nanoparticles as well as microflakes, including single crystal microflakes. The metal can be intercalated in the graphene sheets

  1. Applied Ecosystem Analysis - Background EDT - The Ecosystem Diagnosis and Treatment Method

    International Nuclear Information System (INIS)

    Mobrand, L.E.; Lichatowich, J.A.; Howard, D.A.; Vogel, T.S.

    1996-05-01

    This volume consists of eight separate reports. We present them as background to the Ecosystem Diagnosis and Treatment (EDT) methodology. They are a selection from publications, white papers, and presentations prepared over the past two years. Some of the papers are previously published, others are currently being prepared for publication. In the early to mid-1980s the concern for failure of both natural and hatchery production of Columbia River salmon populations was widespread. The concept of supplementation was proposed as an alternative solution that would integrate artificial propagation with natural production. In response to the growing expectations placed upon the supplementation tool, a project called the Regional Assessment of Supplementation Project (RASP) was initiated in 1990. The charge of RASP was to define supplementation and to develop guidelines for when, where and how it would be the appropriate solution to salmon enhancement in the Columbia basin. The RASP developed a definition of supplementation and a set of guidelines for planning salmon enhancement efforts which required consideration of all factors affecting salmon populations, including environmental, genetic, and ecological variables. The results of RASP led to a conclusion that salmon issues needed to be addressed in a manner that was consistent with an ecosystem approach. If the limitations and potentials of supplementation or any other management tool were to be fully understood it would have to be within the context of a broadly integrated approach - thus the Ecosystem Diagnosis and Treatment (EDT) method was born.

  2. Manual muscle testing: a method of measuring extremity muscle strength applied to critically ill patients.

    Science.gov (United States)

    Ciesla, Nancy; Dinglas, Victor; Fan, Eddy; Kho, Michelle; Kuramoto, Jill; Needham, Dale

    2011-04-12

    Survivors of acute respiratory distress syndrome (ARDS) and other causes of critical illness often have generalized weakness, reduced exercise tolerance, and persistent nerve and muscle impairments after hospital discharge. Using an explicit protocol with a structured approach to training and quality assurance of research staff, manual muscle testing (MMT) is a highly reliable method for assessing strength, using a standardized clinical examination, for patients following ARDS, and can be completed with mechanically ventilated patients who can tolerate sitting upright in bed and are able to follow two-step commands. (7, 8) This video demonstrates a protocol for MMT, which has been taught to ≥ 43 research staff who have performed >800 assessments on >280 ARDS survivors. Modifications for the bedridden patient are included. Each muscle is tested with specific techniques for positioning, stabilization, resistance, and palpation for each score of the 6-point ordinal Medical Research Council scale. Three upper and three lower extremity muscles are graded in this protocol: shoulder abduction, elbow flexion, wrist extension, hip flexion, knee extension, and ankle dorsiflexion. These muscles were chosen based on the standard approach for evaluating patients for ICU-acquired weakness used in prior publications. (1,2).
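    A minimal sketch of the scoring arithmetic implied by the protocol: each of the six muscle groups named in the abstract is graded on the 6-point MRC ordinal scale (0 to 5), and grading both sides yields the familiar 0 to 60 sum score. Function and variable names here are illustrative, not the authors'.

```python
# Illustrative MRC sum-score computation for the six muscle groups
# listed in the abstract, tested bilaterally (12 grades, each 0..5).
MUSCLES = ["shoulder_abduction", "elbow_flexion", "wrist_extension",
           "hip_flexion", "knee_extension", "ankle_dorsiflexion"]

def mrc_sum_score(grades_left, grades_right):
    """grades_*: dict mapping muscle name -> MRC grade (0..5)."""
    for grades in (grades_left, grades_right):
        assert set(grades) == set(MUSCLES)
        assert all(0 <= g <= 5 for g in grades.values())
    return sum(grades_left.values()) + sum(grades_right.values())

# Full strength on every muscle group gives the maximum score of 60:
full = {m: 5 for m in MUSCLES}
print(mrc_sum_score(full, dict(full)))  # 60
```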

  3. Development of a diagnostic expert system for eddy current data analysis using applied artificial intelligence methods

    International Nuclear Information System (INIS)

    Upadhyaya, B.R.; Yan, W.; Henry, G.

    1999-01-01

    A diagnostic expert system that integrates database management methods, artificial neural networks, and decision-making using fuzzy logic has been developed for the automation of steam generator eddy current test (ECT) data analysis. The new system, known as EDDYAI, considers the following key issues: (1) digital eddy current test data calibration, compression, and representation; (2) development of robust neural networks with low probability of misclassification for flaw depth estimation; (3) flaw detection using fuzzy logic; (4) development of an expert system for database management, compilation of a trained neural network library, and a decision module; and (5) evaluation of the integrated approach using eddy current data. The implementation to field test data includes the selection of proper feature vectors for ECT data analysis, development of a methodology for large eddy current database management, artificial neural networks for flaw depth estimation, and a fuzzy logic decision algorithm for flaw detection. A large eddy current inspection database from the Electric Power Research Institute NDE Center is being utilized in this research towards the development of an expert system for steam generator tube diagnosis. The integration of ECT data pre-processing as part of the data management, fuzzy logic flaw detection technique, and tube defect parameter estimation using artificial neural networks are the fundamental contributions of this research. (orig.)

  4. Development of a diagnostic expert system for eddy current data analysis using applied artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Upadhyaya, B.R.; Yan, W. [Tennessee Univ., Knoxville, TN (United States). Dept. of Nuclear Engineering; Behravesh, M.M. [Electric Power Research Institute, Palo Alto, CA (United States); Henry, G. [EPRI NDE Center, Charlotte, NC (United States)

    1999-09-01

    A diagnostic expert system that integrates database management methods, artificial neural networks, and decision-making using fuzzy logic has been developed for the automation of steam generator eddy current test (ECT) data analysis. The new system, known as EDDYAI, considers the following key issues: (1) digital eddy current test data calibration, compression, and representation; (2) development of robust neural networks with low probability of misclassification for flaw depth estimation; (3) flaw detection using fuzzy logic; (4) development of an expert system for database management, compilation of a trained neural network library, and a decision module; and (5) evaluation of the integrated approach using eddy current data. The implementation to field test data includes the selection of proper feature vectors for ECT data analysis, development of a methodology for large eddy current database management, artificial neural networks for flaw depth estimation, and a fuzzy logic decision algorithm for flaw detection. A large eddy current inspection database from the Electric Power Research Institute NDE Center is being utilized in this research towards the development of an expert system for steam generator tube diagnosis. The integration of ECT data pre-processing as part of the data management, fuzzy logic flaw detection technique, and tube defect parameter estimation using artificial neural networks are the fundamental contributions of this research. (orig.)

  5. Applying Formal Methods to NASA Projects: Transition from Research to Practice

    Science.gov (United States)

    Othon, Bill

    2009-01-01

    NASA project managers attempt to manage risk by relying on mature, well-understood process and technology when designing spacecraft. In the case of crewed systems, the margin for error is even tighter and leads to risk aversion. But as we look to future missions to the Moon and Mars, the complexity of the systems will increase as the spacecraft and crew work together with less reliance on Earth-based support. NASA will be forced to look for new ways to do business. Formal methods technologies can help NASA develop complex but cost effective spacecraft in many domains, including requirements and design, software development and inspection, and verification and validation of vehicle subsystems. To realize these gains, the technologies must be matured and field-tested so that they are proven when needed. During this discussion, current activities used to evaluate FM technologies for Orion spacecraft design will be reviewed. Also, suggestions will be made to demonstrate value to current designers, and mature the technology for eventual use in safety-critical NASA missions.

  6. Applied Ecosystem Analysis - Background : EDT the Ecosystem Diagnosis and Treatment Method.

    Energy Technology Data Exchange (ETDEWEB)

    Mobrand, Lars E.

    1996-05-01

    This volume consists of eight separate reports. We present them as background to the Ecosystem Diagnosis and Treatment (EDT) methodology. They are a selection from publications, white papers, and presentations prepared over the past two years. Some of the papers are previously published, others are currently being prepared for publication. In the early to mid-1980s the concern for failure of both natural and hatchery production of Columbia River salmon populations was widespread. The concept of supplementation was proposed as an alternative solution that would integrate artificial propagation with natural production. In response to the growing expectations placed upon the supplementation tool, a project called the Regional Assessment of Supplementation Project (RASP) was initiated in 1990. The charge of RASP was to define supplementation and to develop guidelines for when, where and how it would be the appropriate solution to salmon enhancement in the Columbia basin. The RASP developed a definition of supplementation and a set of guidelines for planning salmon enhancement efforts which required consideration of all factors affecting salmon populations, including environmental, genetic, and ecological variables. The results of RASP led to a conclusion that salmon issues needed to be addressed in a manner that was consistent with an ecosystem approach. If the limitations and potentials of supplementation or any other management tool were to be fully understood it would have to be within the context of a broadly integrated approach - thus the Ecosystem Diagnosis and Treatment (EDT) method was born.

  7. A Comparison of Vibration and Oil Debris Gear Damage Detection Methods Applied to Pitting Damage

    Science.gov (United States)

    Dempsey, Paula J.

    2000-01-01

    Helicopter Health Usage Monitoring Systems (HUMS) must provide reliable, real-time performance monitoring of helicopter operating parameters to prevent damage of flight critical components. Helicopter transmission diagnostics are an important part of a helicopter HUMS. In order to improve the reliability of transmission diagnostics, many researchers propose combining two technologies, vibration and oil monitoring, using data fusion and intelligent systems. Some benefits of combining multiple sensors to make decisions include improved detection capabilities and increased probability the event is detected. However, if the sensors are inaccurate, or the features extracted from the sensors are poor predictors of transmission health, integration of these sensors will decrease the accuracy of damage prediction. For this reason, one must verify the individual integrity of vibration and oil analysis methods prior to integrating the two technologies. This research focuses on comparing the capability of two vibration algorithms, FM4 and NA4, and a commercially available on-line oil debris monitor to detect pitting damage on spur gears in the NASA Glenn Research Center Spur Gear Fatigue Test Rig. Results from this research indicate that the rate of change of debris mass measured by the oil debris monitor is comparable to the vibration algorithms in detecting gear pitting damage.
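    The vibration algorithms named above are published gear-fault metrics. As a hedged illustration, FM4 is commonly described as the normalized kurtosis of the "difference signal" (the time-synchronous average with the regular gear-mesh components removed); the mesh-removal step is omitted here and the input is assumed to already be that difference signal.

```python
# Sketch of the FM4 metric as the normalized kurtosis (m4 / m2^2) of the
# difference signal d. For an undamaged, near-Gaussian signal FM4 is
# close to 3; localized damage such as pitting raises it.
def fm4(d):
    n = len(d)
    mean = sum(d) / n
    m2 = sum((x - mean) ** 2 for x in d) / n  # second central moment
    m4 = sum((x - mean) ** 4 for x in d) / n  # fourth central moment
    return m4 / (m2 ** 2)

# A flat square-wave-like signal has kurtosis 1 (no impulsive content):
print(fm4([1.0, -1.0] * 8))  # 1.0
```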

  8. Derivatization method of free cyanide including cyanogen chloride for the sensitive analysis of cyanide in chlorinated drinking water by liquid chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Kang, Hye-In; Shin, Ho-Sang

    2015-01-20

    A novel derivatization method of free cyanide (HCN + CN(-)) including cyanogen chloride in chlorinated drinking water was developed with D-cysteine and hypochlorite. The optimum conditions (0.5 mM D-cysteine, 0.5 mM hypochlorite, pH 4.5, and a reaction time of 10 min at room temperature) were established by the variation of parameters. Isotopically labeled cyanide ((13)C(15)N) was chosen as an internal standard. The formed β-thiocyanoalanine was directly injected into a liquid chromatography-tandem mass spectrometer without any additional extraction or purification procedures. Under the established conditions, the limits of detection and the limits of quantification were 0.07 and 0.2 μg/L, respectively, and the interday relative standard deviation was less than 4% at concentrations of 4.0, 20.0, and 100.0 μg/L. The method was successfully applied to determine CN(-) in chlorinated water samples. The detected concentration range and detection frequency of CN(-) were 0.20-8.42 μg/L (14/24) in source drinking water and 0.21-1.03 μg/L (18/24) in chlorinated drinking water.

  9. Evaluation, including effects of storage and repeated freezing and thawing, of a method for measurement of urinary creatinine

    DEFF Research Database (Denmark)

    Garde, A H; Hansen, Åse Marie; Kristiansen, J

    2003-01-01

    The aims of this study were to elucidate to what extent storage and repeated freezing and thawing influenced the concentration of creatinine in urine samples and to evaluate the method for determination of creatinine in urine. The creatinine method was based on the well-known Jaffe's reaction...... and measured on a COBAS Mira autoanalyser from Roche. The main findings were that samples for analysis of creatinine should be kept at a temperature of -20 degrees C or lower and frozen and thawed only once. The limit of detection, determined as 3 x SD of 20 determinations of a sample at a low concentration (6...
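    The detection-limit rule stated in the abstract (LOD = 3 x SD of 20 replicate determinations at a low concentration) is simple arithmetic; a minimal sketch, with invented replicate values for illustration:

```python
# LOD computed as k times the sample standard deviation of replicate
# determinations (k = 3 for a limit of detection, as in the abstract).
import statistics

def limit_of_detection(replicates, k=3):
    return k * statistics.stdev(replicates)  # sample SD

# Invented replicates with SD = 1.0, giving LOD = 3.0 in the same units:
lod = limit_of_detection([1.0, 2.0, 3.0])
print(lod)  # 3.0
```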

  10. 3D-QSPR Method of Computational Technique Applied on Red Reactive Dyes by Using CoMFA Strategy

    Directory of Open Access Journals (Sweden)

    Shahnaz Perveen

    2011-12-01

    Full Text Available Cellulose fiber is a tremendous natural resource that has broad application in various productions including the textile industry. The dyes, which are commonly used for cellulose printing, are “reactive dyes” because of their high wet fastness and brilliant colors. The interaction of various dyes with the cellulose fiber depends upon the physiochemical properties that are governed by specific features of the dye molecule. The binding pattern of the reactive dye with cellulose fiber is called the ligand-receptor concept. In the current study, the three dimensional quantitative structure property relationship (3D-QSPR technique was applied to understand the red reactive dyes interactions with the cellulose by the Comparative Molecular Field Analysis (CoMFA method. This method was successfully utilized to predict a reliable model. The predicted model gives satisfactory statistical results and in the light of these, it was further analyzed. Additionally, the graphical outcomes (contour maps help us to understand the modification pattern and to correlate the structural changes with respect to the absorptivity. Furthermore, the final selected model has potential to assist in understanding the characteristics of the external test set. The study could be helpful to design new reactive dyes with better affinity and selectivity for the cellulose fiber.

  11. 3D-QSPR method of computational technique applied on red reactive dyes by using CoMFA strategy.

    Science.gov (United States)

    Mahmood, Uzma; Rashid, Sitara; Ali, S Ishrat; Parveen, Rasheeda; Zaheer-Ul-Haq; Ambreen, Nida; Khan, Khalid Mohammed; Perveen, Shahnaz; Voelter, Wolfgang

    2011-01-01

    Cellulose fiber is a tremendous natural resource that has broad application in various productions including the textile industry. The dyes, which are commonly used for cellulose printing, are "reactive dyes" because of their high wet fastness and brilliant colors. The interaction of various dyes with the cellulose fiber depends upon the physiochemical properties that are governed by specific features of the dye molecule. The binding pattern of the reactive dye with cellulose fiber is called the ligand-receptor concept. In the current study, the three dimensional quantitative structure property relationship (3D-QSPR) technique was applied to understand the red reactive dyes interactions with the cellulose by the Comparative Molecular Field Analysis (CoMFA) method. This method was successfully utilized to predict a reliable model. The predicted model gives satisfactory statistical results and in the light of these, it was further analyzed. Additionally, the graphical outcomes (contour maps) help us to understand the modification pattern and to correlate the structural changes with respect to the absorptivity. Furthermore, the final selected model has potential to assist in understanding the characteristics of the external test set. The study could be helpful to design new reactive dyes with better affinity and selectivity for the cellulose fiber.

  12. Twilight of dawn or of evening? A century of research methods in the Journal of Applied Psychology.

    Science.gov (United States)

    Cortina, Jose M; Aguinis, Herman; DeShon, Richard P

    2017-03-01

    We offer a critical review and synthesis of research methods in the first century of the Journal of Applied Psychology. We divide the chronology into 6 periods. The first emphasizes the first few issues of the journal, which, in many ways, set us on a methodological course that we sail to this day, and then takes us through the mid-1920s. The second is the period through World War II, in which we see the roots of modern methodological concepts and techniques, including a transition from a discovery orientation to a hypothetico-deductive model orientation. The third takes us through roughly 1970, a period in which many of our modern-day practices were formed, such as reliance on null hypothesis significance testing. The fourth, from 1970 through 1989, sees an emphasis on the development of measures of critical constructs. The fifth takes us into the present, which is marked by greater plurality regarding data-analytic approaches. Finally, we offer a glimpse of possible and, from our perspective, desirable futures regarding research methods. Specifically, we highlight the need to conduct replications; study the exceptional and not just the average; improve the quality of the review process, particularly regarding methodological issues; emphasize design and measurement issues; and build and test more specific theories. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Theory of linear physical systems theory of physical systems from the viewpoint of classical dynamics, including Fourier methods

    CERN Document Server

    Guillemin, Ernst A

    2013-01-01

    An eminent electrical engineer and authority on linear system theory presents this advanced treatise, which approaches the subject from the viewpoint of classical dynamics and covers Fourier methods. This volume will assist upper-level undergraduates and graduate students in moving from introductory courses toward an understanding of advanced network synthesis. 1963 edition.

  14. Sighting optics including an optical element having a first focal length and a second focal length and methods for sighting

    Science.gov (United States)

    Crandall, David Lynn

    2011-08-16

    Sighting optics include a front sight and a rear sight positioned in a spaced-apart relation. The rear sight includes an optical element having a first focal length and a second focal length. The first focal length is selected so that it is about equal to a distance separating the optical element and the front sight and the second focal length is selected so that it is about equal to a target distance. The optical element thus brings into simultaneous focus for a user images of the front sight and the target.

  15. Compositions of graphene materials with metal nanostructures and microstructures and methods of making and using including pressure sensors

    KAUST Repository

    Chen, Ye

    2017-01-26

    Composition comprising at least one graphene material and at least one metal. The metal can be in the form of nanoparticles as well as microflakes, including single crystal microflakes. The metal can be intercalated in the graphene sheets. The composition has high conductivity and flexibility. The composition can be made by a one-pot synthesis in which a graphene material precursor is converted to the graphene material, and the metal precursor is converted to the metal. A reducing solvent or dispersant such as NMP can be used. Devices made from the composition include a pressure sensor which has high sensitivity. Two two-dimensional materials can be combined to form a hybrid material.

  16. Difference in target definition using three different methods to include respiratory motion in radiotherapy of lung cancer

    DEFF Research Database (Denmark)

    Sloth Møller, Ditte; Knap, Marianne Marquard; Nyeng, Tine Bisballe

    2017-01-01

    PTVσ yields the smallest volumes but does not ensure coverage of tumor during the full respiratory motion due to tumor deformation. Incorporating the respiratory motion in the delineation (PTVdel) takes into account the entire respiratory cycle including deformation, but at the cost, however, of larger...

  17. A global method for calculating plant CSR ecological strategies applied across biomes world-wide

    NARCIS (Netherlands)

    Pierce, S.; Negreiros, D.; Cerabolini, B.E.L.; Kattge, J.; Díaz, S.; Kleyer, M.; Shipley, B.; Wright, S.J.; Soudzilovskaia, N.A.; Onipchenko, V.G.; van Bodegom, P.M.; Frenette-Dussault, C.; Weiher, E.; Pinho, B.X.; Cornelissen, J.H.C.; Grime, J.P.; Thompson, K.; Hunt, R.; Wilson, P.J.; Buffa, G.; Nyakunga, O.C.; Reich, P.B.; Caccianiga, M.; Mangili, F.; Ceriani, R.M.; Luzzaro, A.; Brusa, G.; Siefert, A.; Barbosa, N.P.U.; Chapin III, F.S.; Cornwell, W.K.; Fang, Jingyun; Wilson Fernandez, G.; Garnier, E.; Le Stradic, S.; Peñuelas, J.; Melo, F.P.L.; Slaviero, A.; Tabarrelli, M.; Tampucci, D.

    2017-01-01

    Competitor, stress-tolerator, ruderal (CSR) theory is a prominent plant functional strategy scheme previously applied to local floras. Globally, the wide geographic and phylogenetic coverage of available values of leaf area (LA), leaf dry matter content (LDMC) and specific leaf area (SLA)

  18. Benthic microalgal production in the Arctic: Applied methods and status of the current database

    DEFF Research Database (Denmark)

    Glud, Ronnie Nøhr; Woelfel, Jana; Karsten, Ulf

    2009-01-01

    The current database on benthic microalgal production in Arctic waters comprises 10 peer-reviewed and three unpublished studies. Here, we compile and discuss these datasets, along with the applied measurement approaches used. The latter is essential for robust comparative analysis and to clarify ...

  19. Applying Item Response Theory Methods to Examine the Impact of Different Response Formats

    Science.gov (United States)

    Hohensinn, Christine; Kubinger, Klaus D.

    2011-01-01

    In aptitude and achievement tests, different response formats are usually used. A fundamental distinction must be made between the class of multiple-choice formats and the constructed response formats. Previous studies have examined the impact of different response formats applying traditional statistical approaches, but these influences can also…

  20. Applying Item Response Theory methods to design a learning progression-based science assessment

    Science.gov (United States)

    Chen, Jing

Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions of this study are: (1) how to use items in different formats to classify students into levels on the learning progression, (2) how to design a test that gives good information about students' progress through the learning progression of a particular construct and (3) what characteristics of test items support their use for assessing students' levels. Data for this study were collected from 1500 elementary and secondary school students during 2009--2010. The written assessment used several item formats: Constructed Response (CR), Ordered Multiple Choice (OMC) and Multiple True or False (MTF). The following are the main findings of this study. The OMC, MTF and CR items might measure different components of the construct. A single construct explained most of the variance in students' performances; however, additional dimensions in terms of item format can explain a certain amount of the variance in student performance. So additional dimensions need to be considered when we want to capture the differences in students' performances on different types of items targeting the understanding of the same underlying progression. Items in each format need to be improved in certain ways to classify students more accurately into the learning progression levels. This study also establishes some general steps that can be followed to design other learning progression-based tests. For example, first, the boundaries between levels on the IRT scale can be defined by using the means of the item thresholds across a set of good items. Second, items in multiple formats can be selected to achieve the information criterion at all
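The boundary-setting step this abstract describes (level boundaries taken as means of item thresholds, then students classified by their ability estimate) can be sketched in a few lines. The threshold values and level counts below are hypothetical illustrations, not data from the study.

```python
# Sketch of the level-boundary step: the boundary for each level cut is
# the mean of the corresponding item thresholds (in logits) across a set
# of good items; a student's IRT ability estimate (theta) is then mapped
# to a learning-progression level. All numbers are made up.

def level_boundaries(thresholds_by_cut):
    """Each inner list holds one cut's thresholds from several good items;
    the boundary for that cut is their mean."""
    return [sum(ts) / len(ts) for ts in thresholds_by_cut]

def classify(theta, boundaries):
    """Return the learning-progression level (0-based) for ability theta."""
    level = 0
    for b in boundaries:
        if theta >= b:
            level += 1
    return level

# Hypothetical thresholds for the level-1/2, 2/3 and 3/4 boundaries:
thresholds = [[-1.2, -0.8, -1.0], [0.1, 0.3, -0.1], [1.4, 1.6, 1.5]]
bounds = level_boundaries(thresholds)
print(bounds)               # ≈ [-1.0, 0.1, 1.5]
print(classify(0.5, bounds))  # a student at theta = 0.5 sits at level 2
```

The same mapping extends directly to reporting: once the cut points are fixed on the IRT scale, every examinee's theta from any calibrated item set can be translated into a progression level.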

  1. An efficient method to find potentially universal population genetic markers, applied to metazoans

    Directory of Open Access Journals (Sweden)

    Chenuil Anne

    2010-09-01

Full Text Available Abstract Background Despite the impressive growth of sequence databases, the limited availability of nuclear markers that are sufficiently polymorphic for population genetics and phylogeography and applicable across various phyla restricts many potential studies, particularly in non-model organisms. Numerous introns have invariant positions among kingdoms, providing a potential source for such markers. Unfortunately, most of the few known EPIC (Exon Primed Intron Crossing) loci are restricted to vertebrates or belong to multigenic families. Results In order to develop markers with broad applicability, we designed a bioinformatic approach aimed at avoiding multigenic families while identifying intron positions conserved across metazoan phyla. We developed a program facilitating the identification of EPIC loci which allowed slight variation in intron position. From the Homolens databases we selected 29 gene families which contained 52 promising introns, for which we designed 93 primer pairs. PCR tests were performed on several ascidians, echinoderms, bivalves and cnidarians. On average, 24 different introns per genus were amplified in bilaterians. Remarkably, five of the introns successfully amplified in all of the metazoan genera tested (a dozen genera, including cnidarians). The influence of several factors on amplification success was investigated. Success rate was not related to the phylogenetic relatedness of a taxon to the groups that most influenced primer design, showing that these EPIC markers are extremely conserved in animals. Conclusions Our new method now makes it possible to (i) rapidly isolate a set of EPIC markers for any phylum, even outside the animal kingdom, and thus (ii) compare genetic diversity at potentially homologous polymorphic loci between divergent taxa.

  2. Consensus in controversy: The modified Delphi method applied to Gynecologic Oncology practice.

    Science.gov (United States)

    Cohn, David E; Havrilesky, Laura J; Osann, Kathryn; Lipscomb, Joseph; Hsieh, Susie; Walker, Joan L; Wright, Alexi A; Alvarez, Ronald D; Karlan, Beth Y; Bristow, Robert E; DiSilvestro, Paul A; Wakabayashi, Mark T; Morgan, Robert; Mukamel, Dana B; Wenzel, Lari

    2015-09-01

    To determine the degree of consensus regarding the probabilities of outcomes associated with IP/IV and IV chemotherapy. A survey was administered to an expert panel using the Delphi method. Ten ovarian cancer experts were asked to estimate outcomes for patients receiving IP/IV or IV chemotherapy. The clinical estimates were: 1) probability of completing six cycles of chemotherapy, 2) probability of surviving five years, 3) median survival, and 4) probability of ER/hospital visits during treatment. Estimates for two patients, one with a low comorbidity index (patient 1) and the other with a moderate index (patient 2), were included. The survey was administered in three rounds, and panelists could revise their subsequent responses based on review of the anonymous opinions of their peers. The ranges were smaller for IV compared with IP/IV therapy. Ranges decreased with each round. Consensus converged around outcomes related to IP/IV chemotherapy for: 1) completion of 6 cycles of therapy (type 1 patient, 62%, type 2 patient, 43%); 2) percentage of patients surviving 5 years (type 1 patient, 66%, type 2 patient, 47%); and 3) median survival (type 1 patient, 83 months, type 2 patient, 58 months). The group required three rounds to achieve consensus on the probabilities of ER/hospital visits (type 1 patient, 24%, type 2 patient, 35%). Initial estimates of survival and adverse events associated with IP/IV chemotherapy differ among experts. The Delphi process works to build consensus and may be a pragmatic tool to inform patients of their expected outcomes. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Intelligent error correction method applied on an active pixel sensor based star tracker

    Science.gov (United States)

    Schmidt, Uwe

    2005-10-01

Star trackers are opto-electronic sensors used on board satellites for autonomous inertial attitude determination. In recent years star trackers have become more and more important among the attitude and orbit control system (AOCS) sensors. High-performance star trackers are today based on charge coupled device (CCD) optical camera heads. The active pixel sensor (APS) technology, introduced in the early 1990s, now allows the beneficial replacement of CCD detectors by APS detectors with respect to performance, reliability, power, mass and cost. The company's heritage in star tracker design started in the early 1980s with the launch of the world's first fully autonomous star tracker system, ASTRO1, to the Russian MIR space station. Jena-Optronik recently developed an active pixel sensor based autonomous star tracker, "ASTRO APS", as successor of the CCD based star tracker product series ASTRO1, ASTRO5, ASTRO10 and ASTRO15. Key features of the APS detector technology are true xy-address random access, multiple-windowing read-out and on-chip signal processing including analogue-to-digital conversion. These features can be used for robust star tracking at high slew rates and under adverse conditions such as stray light and solar-flare-induced single-event upsets. A special algorithm has been developed to manage the typical APS detector error contributors such as fixed pattern noise (FPN), dark signal non-uniformity (DSNU) and white spots. The algorithm works fully autonomously and adapts automatically to, e.g., increasing DSNU and newly appearing white spots without ground maintenance or re-calibration. In contrast to conventional correction methods, the described algorithm does not need calibration-data memory such as full-image-sized calibration data sets. The application of the presented algorithm managing the typical APS detector error contributors is a key element in the design of star trackers for long-term satellite applications like
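The abstract does not disclose the algorithm's internals; one common calibration-free approach to white-spot (hot pixel) detection, consistent with "no stored full-frame calibration data", is to flag any pixel that stands far above the median of its neighbours. The sketch below is a hypothetical illustration of that idea, not Jena-Optronik's implementation; the threshold and frame values are made up.

```python
# Hypothetical sketch of calibration-free white-spot detection: a pixel is
# flagged when it exceeds the median of its (up to 8) neighbours by a
# threshold, so no full-image-sized calibration data set must be stored,
# and newly appearing white spots are picked up automatically.
from statistics import median

def flag_white_spots(image, threshold=50):
    """image: list of rows of pixel values. Returns (row, col) of flagged pixels."""
    h, w = len(image), len(image[0])
    flagged = []
    for r in range(h):
        for c in range(w):
            neighbours = [image[rr][cc]
                          for rr in range(max(0, r - 1), min(h, r + 2))
                          for cc in range(max(0, c - 1), min(w, c + 2))
                          if (rr, cc) != (r, c)]
            if image[r][c] - median(neighbours) > threshold:
                flagged.append((r, c))
    return flagged

# A 4x4 synthetic dark frame with one hot pixel at (1, 2):
frame = [[10, 12, 11, 10],
         [11, 10, 200, 12],
         [10, 11, 12, 10],
         [12, 10, 11, 11]]
print(flag_white_spots(frame))  # [(1, 2)]
```

Because the median of the neighbourhood is robust to a single outlier, pixels adjacent to a white spot are not falsely flagged.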

  4. Further development of LLNA:DAE method as stand-alone skin-sensitization testing method and applied for evaluation of relative skin-sensitizing potency between chemicals.

    Science.gov (United States)

    Yamashita, Kunihiko; Shinoda, Shinsuke; Hagiwara, Saori; Itagaki, Hiroshi

    2015-04-01

To date, there has been no well-established local lymph node assay (LLNA) that includes an elicitation phase. Therefore, we developed a modified local lymph node assay with an elicitation phase (LLNA:DAE) to discriminate true skin sensitizers from chemicals that give borderline positive results, and previously reported this assay. To develop the LLNA:DAE method into a useful stand-alone testing method, we investigated the complete procedure for the LLNA:DAE method using hexyl cinnamic aldehyde (HCA), isoeugenol, and 2,4-dinitrochlorobenzene (DNCB) as test compounds. We defined the LLNA:DAE procedure as follows: in the dose-finding test, four concentrations of a chemical are applied to the dorsum of the right ear on days 1, 2, and 3, and to the dorsum of both ears on day 10. Ear thickness and skin irritation score are measured on days 1, 3, 5, 10, and 12. Local lymph nodes are excised and weighed on day 12. The test dose for the primary LLNA:DAE study was selected as the dose that gave the highest left-ear lymph node weight in the dose-finding study, or the lowest dose that produced a left-ear lymph node of over 4 mg. This procedure was validated using nine different chemicals. Furthermore, a qualitative relationship was observed between the degree of elicitation response in the left-ear lymph node and the skin-sensitizing potency of the 32 chemicals tested in this study and the previous study. These results indicate that the LLNA:DAE method is the first LLNA method able to evaluate skin-sensitizing potential and potency through the elicitation response.

  5. Applying contemporary statistical techniques

    CERN Document Server

    Wilcox, Rand R

    2003-01-01

Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible. * Assumes no previous training in statistics * Explains how and why modern statistical methods provide more accurate results than conventional methods * Covers the latest developments on multiple comparisons * Includes recent advances

  6. Evaluation of Two Fitting Methods Applied for Thin-Layer Drying of Cape Gooseberry Fruits

    Directory of Open Access Journals (Sweden)

    Erkan Karacabey

Full Text Available ABSTRACT Drying data of cape gooseberry were used to compare two fitting methods: the 2-step and the 1-step method. Literature data were also used to confirm the results. To demonstrate the applicability of these methods, two primary models (Page, two-term exponential) were selected, with a linear equation as the secondary model. As is well known from previous modelling studies on drying, the 2-step method requires at least two regressions: one for the primary model and one for the secondary model (if there is only one environmental condition, such as temperature). On the other hand, one regression is enough for the 1-step method. Although previous kinetic modelling studies of food drying were based on the 2-step method, this study indicates that the 1-step method may also be a good alternative, with advantages such as producing an informative figure and reducing calculation time.
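The 2-step versus 1-step contrast can be made concrete on synthetic, noise-free Page-model data, MR = exp(-k·t^n). For illustration we assume the linear secondary model acts on ln k (ln k = a + b·T), so the linearised form ln(-ln MR) = a + b·T + n·ln t can be fitted by ordinary least squares; the constants are invented, not from the cape gooseberry data, and this is a sketch of the two workflows, not the paper's regression code.

```python
# 2-step: one regression per temperature (primary model), then a secondary
# regression of ln k on T. 1-step: a single pooled regression recovers all
# parameters at once. Noise-free data, so both recover the true values.
import math

def solve(A, b):
    """Gaussian elimination for the small normal-equation systems below."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def lstsq(X, y):
    """Linear least squares via the normal equations (X'X) beta = X'y."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

a_true, b_true, n_true = -6.0, 0.05, 1.3          # made-up "true" kinetics
temps, times = [50.0, 60.0, 70.0], [5.0, 10.0, 20.0, 40.0]
data = [(T, t, math.exp(-math.exp(a_true + b_true * T) * t ** n_true))
        for T in temps for t in times]

# --- 2-step method -------------------------------------------------------
lnk = []
for T in temps:
    X = [(1.0, math.log(t)) for (TT, t, mr) in data if TT == T]
    y = [math.log(-math.log(mr)) for (TT, t, mr) in data if TT == T]
    intercept, slope = lstsq(X, y)                # slope is n at this T
    lnk.append(intercept)
a2, b2 = lstsq([(1.0, T) for T in temps], lnk)    # secondary linear model

# --- 1-step method -------------------------------------------------------
X = [(1.0, T, math.log(t)) for (T, t, mr) in data]
y = [math.log(-math.log(mr)) for (T, t, mr) in data]
a1, b1, n1 = lstsq(X, y)

print(round(a2, 3), round(b2, 3))                 # 2-step estimates of a, b
print(round(a1, 3), round(b1, 3), round(n1, 3))   # 1-step estimates of a, b, n
```

With noise-free data the two routes coincide; with real data the 1-step fit uses all observations in one regression, which is the pooling advantage the abstract alludes to.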

  7. Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2018-01-01

    of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve...... and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant difference...... in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT...
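The excerpt does not reproduce the three interpolation formulas, but transfinite interpolation from the four edges of a square is classically the Coons-patch blend; a minimal sketch under that assumption (the boundary functions and test point are illustrative):

```python
# Transfinite (Coons) interpolation: fill the interior of a unit square
# from values known only on its four edges:
#   f(u,v) = (1-u) f(0,v) + u f(1,v) + (1-v) f(u,0) + v f(u,1)
#            - [bilinear blend of the four corner values]

def transfinite(left, right, bottom, top, u, v):
    """left/right/bottom/top are functions returning boundary values;
    u, v in [0, 1] are coordinates inside the square."""
    lin = ((1 - u) * left(v) + u * right(v)
           + (1 - v) * bottom(u) + v * top(u))
    corners = ((1 - u) * (1 - v) * bottom(0) + u * (1 - v) * bottom(1)
               + (1 - u) * v * top(0) + u * v * top(1))
    return lin - corners

# Boundary samples of f(u, v) = 2u + 3v + u*v; Coons interpolation
# reproduces any bilinear function exactly:
f = lambda u, v: 2 * u + 3 * v + u * v
val = transfinite(lambda v: f(0, v), lambda v: f(1, v),
                  lambda u: f(u, 0), lambda u: f(u, 1), 0.25, 0.75)
print(val)  # 2.9375 == f(0.25, 0.75)
```

Plain bilinear interpolation would use only the four corners and miss the variation along the edges; the Coons blend matches the given data on all four boundary curves, which is why it behaves better near the grid lines.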

  8. Shining a light on LAMP assays--a comparison of LAMP visualization methods including the novel use of berberine.

    Science.gov (United States)

    Fischbach, Jens; Xander, Nina Carolin; Frohme, Marcus; Glökler, Jörn Felix

    2015-04-01

    The need for simple and effective assays for detecting nucleic acids by isothermal amplification reactions has led to a great variety of end point and real-time monitoring methods. Here we tested direct and indirect methods to visualize the amplification of potato spindle tuber viroid (PSTVd) by loop-mediated isothermal amplification (LAMP) and compared features important for one-pot in-field applications. We compared the performance of magnesium pyrophosphate, hydroxynaphthol blue (HNB), calcein, SYBR Green I, EvaGreen, and berberine. All assays could be used to distinguish between positive and negative samples in visible or UV light. Precipitation of magnesium-pyrophosphate resulted in a turbid reaction solution. The use of HNB resulted in a color change from violet to blue, whereas calcein induced a change from orange to yellow-green. We also investigated berberine as a nucleic acid-specific dye that emits a fluorescence signal under UV light after a positive LAMP reaction. It has a comparable sensitivity to SYBR Green I and EvaGreen. Based on our results, an optimal detection method can be chosen easily for isothermal real-time or end point screening applications.

  9. Measure Guideline: Summary of Interior Ducts in New Construction, Including an Efficient, Affordable Method to Install Fur-Down Interior Ducts

    Energy Technology Data Exchange (ETDEWEB)

    Beal, D. [BA-PIRC, Cocoa, FL (United States); McIlvaine, J. [BA-PIRC, Cocoa, FL (United States); Fonorow, K. [BA-PIRC, Cocoa, FL (United States); Martin, E. [BA-PIRC, Cocoa, FL (United States)

    2011-11-01

    This document illustrates guidelines for the efficient installation of interior duct systems in new housing, including the fur-up chase method, the fur-down chase method, and interior ducts positioned in sealed attics or sealed crawl spaces.

  10. A Quadrature Method of Moments for Polydisperse Flow in Bubble Columns Including Poly-Celerity, Breakup and Coalescence

    Directory of Open Access Journals (Sweden)

    Thomas Acher

    2014-12-01

Full Text Available A simulation model for 3D polydisperse bubble column flows in an Eulerian/Eulerian framework is presented. A computationally efficient and numerically stable algorithm is created by making use of quadrature method of moments (QMOM) functionalities, in conjunction with appropriate breakup and coalescence models. To account for size-dependent bubble motion, the constituent moments of the bubble size distribution function are transported with individual velocities. Validation of the simulation results against experimental and numerical data of Hansen [1] shows the capability of the present model to accurately predict complex gas-liquid flows.
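The moment-transport machinery rests on one core QMOM operation: inverting a small set of transported moments back into quadrature nodes (bubble sizes) and weights. A minimal two-node sketch of that inversion is below; the paper's model is 3D and coupled to breakup/coalescence closures, so this shows only the moment-inversion step, using the standard orthogonal-polynomial recurrence and the 2x2 Jacobi matrix in closed form.

```python
# Two-node QMOM inversion: from the first four moments m0..m3 of the
# bubble-size distribution, recover weights/nodes {(w_i, x_i)} such that
# sum_i w_i * x_i^k = m_k for k = 0..3 (Gaussian quadrature of the
# distribution, via recurrence coefficients and the Jacobi matrix).
import math

def qmom_two_node(m0, m1, m2, m3):
    a0 = m1 / m0                                   # recurrence coefficient a_0
    b1 = m2 / m0 - a0 * a0                         # recurrence coefficient b_1 (> 0)
    a1 = (m3 - 2 * a0 * m2 + a0 * a0 * m1) / (m2 - m1 * m1 / m0)
    # Eigenvalues of the symmetric Jacobi matrix [[a0, sqrt(b1)], [sqrt(b1), a1]]
    tr, det = a0 + a1, a0 * a1 - b1
    disc = math.sqrt(tr * tr - 4 * det)
    nodes = [(tr - disc) / 2, (tr + disc) / 2]
    weights = []
    for lam in nodes:
        v = (math.sqrt(b1), lam - a0)              # unnormalised eigenvector
        weights.append(m0 * v[0] ** 2 / (v[0] ** 2 + v[1] ** 2))
    return weights, nodes

# Moments of a distribution with weight 0.4 at x=1 and 0.6 at x=3 are
# inverted back exactly:
m = [1.0, 0.4 * 1 + 0.6 * 3, 0.4 * 1 + 0.6 * 9, 0.4 * 1 + 0.6 * 27]
w, x = qmom_two_node(*m)
print(w, x)  # ≈ [0.4, 0.6], [1.0, 3.0]
```

In a transported-moment scheme, this inversion is performed in every cell at every step; source terms for breakup and coalescence are then evaluated at the recovered nodes.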

  11. Geochronology and geochemistry by the nuclear track method: some examples of use in applied geology

    International Nuclear Information System (INIS)

    Poupeau, G.; Soliani Junior, E.

    1988-01-01

This article discusses some applications of the nuclear track method in geochronology, geochemistry and geophysics. In geochronology, after a brief presentation of the principles of fission-track dating and of the kinds of geological events measurable by this method, some applications in metallogeny and petroleum geology are shown. In geochemistry, uses of the fission-track method are related to mining and uranium prospecting. In geophysics, an important application is earthquake prediction, through continuous monitoring of Ra-222 emanations. (author) [pt

  12. Purists need not apply: the case for pragmatism in mixed methods research.

    Science.gov (United States)

    Florczak, Kristine L

    2014-10-01

The purpose of this column is to describe several different ways of conducting mixed methods research. The paradigms that underpin both qualitative and quantitative research are also considered, along with a cursory review of classical pragmatism as it relates to conducting mixed methods studies. Finally, the idea of loosely coupled systems as a means to support mixed methods studies is proposed, along with several caveats for researchers who desire to use this new way of obtaining knowledge. © The Author(s) 2014.

  13. The development of a curved beam element model applied to finite elements method

    International Nuclear Information System (INIS)

    Bento Filho, A.

    1980-01-01

A procedure for the evaluation of the stiffness matrix of a thick curved beam element is developed by means of the minimum potential energy principle applied to finite elements. The displacement field is prescribed through polynomial expansions, and the interpolation model is determined by comparison of results obtained with a sample of different expansions. As a limiting case of the curved beam, three cases of straight beams with different dimensional ratios are analysed employing the proposed approach. Finally, an interpolation model is proposed and applied to a curved beam with great curvature. Displacements and internal stresses are determined and the results are compared with those found in the literature. (Author) [pt

  14. Applying formal method to design of nuclear power plant embedded protection system

    International Nuclear Information System (INIS)

    Kim, Jin Hyun; Kim, Il Gon; Sung, Chang Hoon; Choi, Jin Young; Lee, Na Young

    2001-01-01

A nuclear power embedded protection system is a typical safety-critical system, which detects failures and shuts down the operation of the nuclear reactor. Failures of such systems are very dangerous, so they absolutely require safety and reliability. Therefore a nuclear power embedded protection system should undergo complete verification and validation from the design stage. Various V and V methods have been provided for developing embedded systems, and design using formal methods in particular is being studied in other advanced countries. In this paper, we introduce design methods for nuclear power embedded protection systems using various formal methods, in various respects, following nuclear power plant software development guidelines.

  15. Researching and applying the MRSS method in fuel assembly mechanical design

    International Nuclear Information System (INIS)

    Li Jiwei; Zhou Yunqing; Liu Jiazheng; Tong Xing; Zheng Yixiong

    2014-01-01

Tolerance analysis is an important part of the mechanical design of fuel assemblies. With the introduction of the MRSS method and process capability, the relation between the two was discussed. The conditions of the MRSS method were examined by calculating the protrusion of the outer strap spring of the grid. The results show that the MRSS method should be preferred for linear tolerance analysis in fuel assemblies with large numbers of dimensions, by controlling process capability and considering sensitivities and a modified factor. With the MRSS method used, the results can be accepted by both designers and manufacturers. (authors)
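The record does not spell out the MRSS formulas; the sketch below uses the common textbook forms for a linear tolerance stack of independent dimensions, which is an assumption about what "MRSS" denotes here: worst case sums the tolerances, RSS root-sum-squares them, and MRSS (modified root sum square) inflates the RSS result by a correction factor C_f (often around 1.5, after Bender) to hedge against non-ideal process capability. The tolerance values are hypothetical.

```python
# Linear tolerance stack of independent +/- t_i tolerances (mm):
# worst case (pessimistic), RSS (optimistic), MRSS (in between).
import math

def worst_case(tols):
    return sum(tols)

def rss(tols):
    return math.sqrt(sum(t * t for t in tols))

def mrss(tols, cf=1.5):
    """Modified RSS: RSS scaled by a correction factor cf."""
    return cf * rss(tols)

tols = [0.05, 0.03, 0.04, 0.02, 0.06]      # five made-up dimension tolerances
print(round(worst_case(tols), 4))          # 0.2
print(round(rss(tols), 4))                 # 0.0949
print(round(mrss(tols), 4))                # 0.1423
```

Sensitivities enter by scaling each t_i before stacking, and the correction factor is where process-capability knowledge (as the abstract notes) lets designers and manufacturers agree on a result between the worst-case and RSS extremes.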

  16. Reliability analysis of reactor systems by applying probability method; Analiza pouzdanosti reaktorskih sistema primenom metoda verovatnoce

    Energy Technology Data Exchange (ETDEWEB)

    Milivojevic, S [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1974-12-15

The probability method chosen for analysing reactor system reliability is considered realistic since it is based on verified experimental data; in fact, it is a statistical method. The probability method developed takes into account the probability distribution of permitted levels of relevant parameters and their particular influence on the reliability of the system as a whole. The proposed method is rather general, and was used for a problem of thermal safety analysis of a reactor system. This analysis makes it possible to study basic properties of the system under different operating conditions; expressed in the form of probabilities, the results show the reliability of the system as a whole as well as the reliability of each component.
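The basic building blocks of such a probability method can be illustrated in a few lines (this is a generic illustration, not the report's actual model): system reliability follows from component reliabilities, with series components all required and parallel (redundant) components requiring at least one survivor. The component values below are hypothetical.

```python
# Series/parallel reliability from independent component reliabilities.

def series(rels):
    """All components must work: product of reliabilities."""
    p = 1.0
    for r in rels:
        p *= r
    return p

def parallel(rels):
    """At least one redundant unit must work: 1 - P(all fail)."""
    q = 1.0
    for r in rels:
        q *= (1.0 - r)
    return 1.0 - q

# Hypothetical cooling train: two pumps in parallel, feeding a heat
# exchanger and a valve in series.
pumps = parallel([0.95, 0.95])         # 0.9975
system = series([pumps, 0.99, 0.98])
print(round(system, 4))                # 0.9678
```

The same decomposition, applied component by component, yields both the overall system reliability and the contribution of each component, as the abstract describes.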

  17. An Analysis of Methods Section of Research Reports in Applied Linguistics

    OpenAIRE

    Patrícia Marcuzzo

    2011-01-01

This work aims at identifying the analytical categories and research procedures adopted in the analysis of research articles in Applied Linguistics/EAP, in order to propose a systematization of the research procedures in Genre Analysis. For that purpose, 12 research reports and interviews with four authors were analyzed. The analysis showed that the studies concentrate on the investigation of either the macrostructure or the microstructure of research articles in different fields. Studies about th...

  18. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    Science.gov (United States)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested on four large earthquakes that occurred off El Salvador and Nicaragua in Central America: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3). Tsunami numerical simulations were carried out from the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, our method should work for tsunami early warning, estimating a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua due to large earthquakes in the subduction zone.
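The chain the abstract describes (magnitude from W-phase inversion, then seismic moment, then slip on an assumed fault with a depth-dependent rigidity) can be sketched as below. The moment-magnitude relation is the standard one; the rigidity profile and the fault area are illustrative assumptions, NOT the paper's calibrated scaling relationships, but they reproduce the qualitative point that a shallow, low-rigidity rupture (the tsunami-earthquake case) yields much larger slip for the same Mw.

```python
# Mw -> seismic moment -> average slip on an assumed rupture, with a toy
# depth-dependent rigidity (low near the trench, higher at depth).
import math

def seismic_moment(mw):
    """Moment M0 in N*m from moment magnitude (standard definition)."""
    return 10 ** (1.5 * mw + 9.1)

def rigidity(depth_km, mu_shallow=10e9, mu_deep=40e9):
    """Assumed linear rigidity profile (Pa) between 5 and 40 km depth."""
    d = min(max(depth_km, 5.0), 40.0)
    return mu_shallow + (mu_deep - mu_shallow) * (d - 5.0) / 35.0

def fault_slip(mw, depth_km, area_km2):
    """Average slip D = M0 / (mu * A) for an assumed rupture area."""
    return seismic_moment(mw) / (rigidity(depth_km) * area_km2 * 1e6)

# Same Mw 7.7 event over an assumed 100 km x 40 km fault:
print(round(fault_slip(7.7, 5.0, 4000), 2))   # ≈ 11.17 m, shallow rupture
print(round(fault_slip(7.7, 35.0, 4000), 2))  # ≈ 3.13 m, deeper rupture
```

A fault model chosen this way, with large shallow slip for tsunami earthquakes, feeds directly into the tsunami numerical simulation used for the early-warning estimate.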

  19. Innovative Methods for Estimating Densities and Detection Probabilities of Secretive Reptiles Including Invasive Constrictors and Rare Upland Snakes

    Science.gov (United States)

    2018-01-30

home range maintenance or attraction to or avoidance of landscape features, including roads (Morales et al. 2004, McClintock et al. 2012). For example ... radiotelemetry and extensive road survey data are used to generate the first density estimates available for the species. The results show that southern ... secretive snakes that combines behavioral observations of snake road crossing speed, systematic road survey data, and simulations of spatial

  20. Methods of the professional-applied physical preparation of students of higher educational establishments of economic type

    Directory of Open Access Journals (Sweden)

    Maliar E.I.

    2010-11-01

Full Text Available The directions of professionally-applied physical training of students, with predominant use of football, are considered, and the methods of professionally-applied physical training of students are presented. It is indicated that application of the circuit training method assists the development of discipline, honesty and rational use of time. It is underlined that teaching should provide an efficient path to mastering the planned knowledge, abilities and skills, and to improvement of physical qualities.