WorldWideScience

Sample records for arctic applied methods

  1. Benthic microalgal production in the Arctic: Applied methods and status of the current database

    DEFF Research Database (Denmark)

    Glud, Ronnie Nøhr; Woelfel, Jana; Karsten, Ulf

    2009-01-01

    The current database on benthic microalgal production in Arctic waters comprises 10 peer-reviewed and three unpublished studies. Here, we compile and discuss these datasets, along with the measurement approaches applied. The latter is essential for robust comparative analysis and to clarify ...

  2. Participatory Methods in Arctic Research

    DEFF Research Database (Denmark)

    Faber, Louise

    2018-01-01

    This book is a collection of articles written by researchers at Aalborg University, affiliated with AAU Arctic. The articles are about how the researchers in their respective projects work with stakeholders and citizens in different ways, for example in connection with problem formulation, data collection, analysis and conclusions, and/or knowledge dissemination. The book aims to collect and share experiences from researchers active in engaging research in the Arctic. The articles reflect on the inclusive methods used in Arctic research and on the cause and purpose thereof, while the methods are exemplified to serve as inspiration for other researchers.

  3. Arctic curves in path models from the tangent method

    Science.gov (United States)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.
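
For the first of these models the limiting curve is classical and worth stating explicitly: rescaling the order-n Aztec diamond to the square |x| + |y| ≤ 1, the arctic circle theorem of Jockusch, Propp and Shor identifies the frozen boundary as the inscribed circle

```latex
x^2 + y^2 = \tfrac{1}{2},
```

which is the curve the tangent method recovers in this case.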

  4. Games in the Arctic: applying game theory insights to Arctic challenges

    Directory of Open Access Journals (Sweden)

    Scott Cole

    2014-08-01

    We illustrate the benefits of game theoretic analysis for assisting decision-makers in resolving conflicts and other challenges in a rapidly evolving region. We review a series of salient Arctic issues with global implications—managing open-access fisheries, opening Arctic areas for resource extraction and ensuring effective environmental regulation for natural resource extraction—and provide insights to help reach socially preferred outcomes. We provide an overview of game theoretic analysis in layman's terms, explaining how game theory can help researchers and decision-makers to better understand conflicts, and how to identify the need for, and improve the design of, policy interventions. We believe that game theoretic tools are particularly useful in a region with a diverse set of players ranging from countries to firms to individuals. We argue that the Arctic Council should take a more active governing role in the region by, for example, dispersing information to “players” in order to alleviate conflicts regarding the management of common-pool resources such as open-access fisheries and natural resource extraction. We also identify side payments—that is, monetary or in-kind compensation from one party of a conflict to another—as a key mechanism for reaching a more biologically, culturally and economically sustainable Arctic future. By emphasizing the practical insights generated from an academic discipline, we present game theory as an influential tool in shaping the future of the Arctic—for individual researchers, for inter-disciplinary research and for policy-makers themselves.
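
The open-access fishery conflict described here is, in its simplest form, a prisoner's dilemma. The sketch below (hypothetical payoffs, not figures from the article) enumerates pure-strategy Nash equilibria by checking unilateral deviations, and shows why mutual overfishing is the predicted outcome absent intervention:

```python
from itertools import product

# Toy open-access fishery game (hypothetical payoffs): two coastal states
# each choose to Restrain or Overfish. Overfishing is individually tempting,
# but mutual overfishing depletes the stock -- a prisoner's dilemma.
ACTIONS = ["Restrain", "Overfish"]
PAYOFFS = {  # (row action, col action) -> (row payoff, col payoff)
    ("Restrain", "Restrain"): (3, 3),
    ("Restrain", "Overfish"): (0, 4),
    ("Overfish", "Restrain"): (4, 0),
    ("Overfish", "Overfish"): (1, 1),
}

def pure_nash_equilibria(payoffs, actions):
    """Enumerate pure-strategy Nash equilibria by checking unilateral deviations."""
    eqs = []
    for a, b in product(actions, repeat=2):
        row_best = all(payoffs[(a, b)][0] >= payoffs[(d, b)][0] for d in actions)
        col_best = all(payoffs[(a, b)][1] >= payoffs[(a, d)][1] for d in actions)
        if row_best and col_best:
            eqs.append((a, b))
    return eqs

print(pure_nash_equilibria(PAYOFFS, ACTIONS))  # [('Overfish', 'Overfish')]
```

Side payments can be modeled as utility transfers between cells: a transfer large enough to make Restrain a best response changes the equilibrium, which is the formal rationale behind the compensation mechanism the authors advocate.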

  5. Applied Bayesian hierarchical methods

    National Research Council Canada - National Science Library

    Congdon, P

    2010-01-01

    Contents (excerpt): 1.2 Posterior Inference from Bayes Formula. 1.3 Markov Chain Monte Carlo Sampling in Relation to Monte Carlo Methods: Obtaining Posterior ...
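
The MCMC machinery named in this table of contents can be sketched in a few lines. This toy random-walk Metropolis sampler (an illustration, not code from the book) targets the posterior of a normal mean, where a conjugate closed form is available as a sanity check:

```python
import math, random

random.seed(42)

# Random-walk Metropolis sketch: sample the posterior of a normal mean mu
# with known unit variance, prior mu ~ N(0, 10^2), data y. The conjugate
# posterior has a closed-form mean, which lets us check the sampler.
y = [1.2, 0.8, 1.5, 1.1, 0.9]

def log_post(mu):
    log_prior = -mu * mu / (2 * 10.0 ** 2)
    log_lik = -sum((yi - mu) ** 2 for yi in y) / 2
    return log_prior + log_lik

def metropolis(n_iter=20000, step=0.5):
    mu, samples = 0.0, []
    for _ in range(n_iter):
        prop = mu + random.gauss(0, step)
        if math.log(1.0 - random.random()) < log_post(prop) - log_post(mu):
            mu = prop  # accept the proposal
        samples.append(mu)
    return samples[5000:]  # discard burn-in

draws = metropolis()
post_mean = sum(draws) / len(draws)
exact = sum(y) / (len(y) + 1 / 100)  # closed-form posterior mean
print(round(post_mean, 2), round(exact, 2))
```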

  6. Methods of applied mathematics

    CERN Document Server

    Hildebrand, Francis B

    1992-01-01

    This invaluable book offers engineers and physicists working knowledge of a number of mathematical facts and techniques not commonly treated in courses in advanced calculus, but nevertheless extremely useful when applied to typical problems in many different fields. It deals principally with linear algebraic equations, quadratic and Hermitian forms, operations with vectors and matrices, the calculus of variations, and the formulations and theory of linear integral equations. Annotated problems and exercises accompany each chapter.

  7. Methods for measuring arctic and alpine shrub growth

    DEFF Research Database (Denmark)

    Myers-Smith, Isla; Hallinger, Martin; Blok, Daan

    2015-01-01

    Shrubs have increased in abundance and dominance in arctic and alpine regions in recent decades. This often dramatic change, likely due to climate warming, has the potential to alter both the structure and function of tundra ecosystems. The analysis of shrub growth is improving our understanding of tundra vegetation dynamics and environmental changes. However, dendrochronological methods developed for trees need to be adapted for the morphology and growth eccentricity of shrubs. Here, we review current and developing methods to measure radial and axial growth, estimate age, and assess growth dynamics in relation to environmental variables. Recent advances in sampling methods, analysis and applications have improved our ability to investigate growth and recruitment dynamics of shrubs. However, to extrapolate findings to the biome scale, future dendroecological work will require improved ...

  8. Applied Formal Methods for Elections

    DEFF Research Database (Denmark)

    Wang, Jian

    ... development time; or, second, dynamically, i.e. monitoring while an implementation is used during an election, or after the election is over, for forensic analysis. This thesis contains two chapters on this subject: the chapter Analyzing Implementations of Election Technologies describes a technique ... process. The chapter Measuring Voter Lines describes an automated data collection method for measuring voters' waiting time, and discusses statistical models designed to provide an understanding of voter behavior in polling stations.

  9. Methods for measuring arctic and alpine shrub growth: A review

    NARCIS (Netherlands)

    Myers-Smith, I.H.; Hallinger, M.; Blok, D.; Sass-Klaassen, U.G.W.; Rayback, S.A.

    2015-01-01

    Shrubs have increased in abundance and dominance in arctic and alpine regions in recent decades. This often dramatic change, likely due to climate warming, has the potential to alter both the structure and function of tundra ecosystems. The analysis of shrub growth is improving our understanding of

  10. Applied Formal Methods for Elections

    DEFF Research Database (Denmark)

    Wang, Jian

    Information technology is changing the way elections are organized. Technology renders the electoral process more efficient, but things can also go wrong: voting software is complex, consisting of thousands of lines of code, which makes it error-prone. Technical problems may cause delays at polling stations, or even delay the announcement of the final result. This thesis describes a set of methods to be used, for example, by system developers, administrators, or decision makers to examine election technologies, social choice algorithms and voter experience. Technology: Verifiability refers ... bounded model-checking and satisfiability modulo theories (SMT) solvers can be used to check these criteria. Voter Experience: Technology profoundly affects the voter experience. These effects need to be measured, and the data should be used to make decisions regarding the implementation of the electoral ...
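
The bounded checking mentioned in the abstract can be caricatured in a few lines. Real tools encode such checks for SMT solvers; this toy sketch (a hypothetical tally machine, not taken from the thesis) instead brute-forces every execution up to a depth bound and verifies an invariant:

```python
from itertools import product

# Toy bounded model check: explore all vote sequences up to a depth bound
# for a tiny tally machine, verifying the invariant "no ballot lost or
# duplicated". State: (count_a, count_b, ballots_cast).
def step(state, vote):
    a, b, cast = state
    if vote == "A":
        return (a + 1, b, cast + 1)
    return (a, b + 1, cast + 1)

def invariant(state):
    a, b, cast = state
    return a + b == cast

def bounded_check(depth):
    """Return a counterexample trace if the invariant fails within `depth` steps."""
    for k in range(depth + 1):
        for seq in product("AB", repeat=k):
            state = (0, 0, 0)
            for v in seq:
                state = step(state, v)
                if not invariant(state):
                    return seq  # counterexample
    return None

print(bounded_check(6))  # None -> invariant holds for all runs up to depth 6
```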

  11. Arctic Risk Management (ARMNet) Network: Linking Risk Management Practitioners and Researchers Across the Arctic Regions of Canada and Alaska To Improve Risk, Emergency and Disaster Preparedness and Mitigation Through Comparative Analysis and Applied Research

    Science.gov (United States)

    Garland, A.

    2015-12-01

    The Arctic Risk Management Network (ARMNet) was conceived as a trans-disciplinary hub to encourage and facilitate greater cooperation, communication and exchange among American and Canadian academics and practitioners actively engaged in the research, management and mitigation of risks, emergencies and disasters in the Arctic regions. Its aim is to assist regional decision-makers through the sharing of applied research and best practices and to support greater inter-operability and bilateral collaboration through improved networking, joint exercises, workshops, teleconferences, radio programs, and virtual communications (e.g. webinars). Most importantly, ARMNet is a clearinghouse for all information related to the management of the frequent hazards of Arctic climate and geography in North America, including new and emerging challenges arising from climate change, increased maritime polar traffic and expanding economic development in the region. ARMNet is an outcome of the Arctic Observing Network (AON) for Long Term Observations, Governance, and Management Discussions, www.arcus.org/search-program. The AON goals continue with CRIOS (www.ariesnonprofit.com/ARIESprojects.php) and coastal erosion research (www.ariesnonprofit.com/webinarCoastalErosion.php) led by the North Slope Borough Risk Management Office with assistance from ARIES (Applied Research in Environmental Sciences Nonprofit, Inc.). The constituency for ARMNet will include all northern academics and researchers, Arctic-based corporations, First Responders (FRs), Emergency Management Offices (EMOs) and Risk Management Offices (RMOs), military, Coast Guard, northern police forces, Search and Rescue (SAR) associations, boroughs, territories and communities throughout the Arctic. This presentation will be of interest to all those engaged in Arctic affairs; it describes the genesis of ARMNet and presents the results of stakeholder meetings and webinars designed to guide the next stages of the project.

  12. [Montessori method applied to dementia - literature review].

    Science.gov (United States)

    Brandão, Daniela Filipa Soares; Martín, José Ignacio

    2012-06-01

    The Montessori method was initially applied to children, but it has since also been applied to people with dementia. The purpose of this study is to systematically review the research on the effectiveness of this method using Medical Literature Analysis and Retrieval System Online (Medline) with the keywords dementia and Montessori method. We selected 10 studies, in which there were significant improvements in participation and constructive engagement, and reductions in negative affect and passive engagement. Nevertheless, systematic reviews of this non-pharmacological intervention in dementia rate the method as weak in terms of effectiveness. This apparent discrepancy can be explained because the Montessori method may, in fact, have only a small influence on dimensions such as behavioral problems, or because there is no research on this method with high levels of control, such as the presence of several control groups or a double-blind design.

  13. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    ... consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based ...
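
A standard geostatistical first step for residual analysis of this kind is the empirical semivariogram. The sketch below (illustrative synthetic data, not the Oersted residuals) estimates gamma(h) and shows that uncorrelated Gaussian "residuals" plateau at their variance, consistent with the usual uncorrelated-error assumption:

```python
import math, random

random.seed(0)

# Empirical semivariogram: gamma(h) estimates half the mean squared
# difference of values separated by distance h, binned by lag.
def semivariogram(coords, values, bin_edges):
    sums = [0.0] * (len(bin_edges) - 1)
    counts = [0] * (len(bin_edges) - 1)
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            h = math.dist(coords[i], coords[j])
            for k in range(len(bin_edges) - 1):
                if bin_edges[k] <= h < bin_edges[k + 1]:
                    sums[k] += 0.5 * (values[i] - values[j]) ** 2
                    counts[k] += 1
                    break
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

# Synthetic "residuals": white noise with unit variance, so gamma(h)
# should sit near 1 (the sill) at every lag -- no spatial correlation.
coords = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
values = [random.gauss(0, 1) for _ in coords]
gamma = semivariogram(coords, values, [0, 2, 4, 6, 8])
print([round(g, 2) for g in gamma])
```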

  14. Applied mathematical methods in nuclear thermal hydraulics

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1983-01-01

    Applied mathematical methods are used extensively in modeling of nuclear reactor thermal-hydraulic behavior. This application has required significant extensions to the state of the art. The problems encountered in modeling of two-phase fluid transients and the development of associated numerical solution methods are reviewed and quantified using results from a numerical study of an analogous linear system of differential equations. In particular, some possible approaches for formulating a well-posed numerical problem for an ill-posed differential model are investigated and discussed. The need for closer attention to numerical fidelity is indicated.
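
The well-posedness issue raised here can be made concrete with a small check (a generic sketch, not the actual two-phase model): a first-order system u_t + A u_x = 0 is hyperbolic, and its initial-value problem well-posed, exactly when A has real eigenvalues; the basic 1-D two-fluid equations famously yield complex characteristics.

```python
import cmath

# For a 2x2 matrix A, hyperbolicity reduces to the sign of the discriminant
# of the characteristic polynomial: real eigenvalues -> well-posed IVP.
def eigenvalues_2x2(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return ((tr + disc) / 2, (tr - disc) / 2)

def is_hyperbolic(a, b, c, d):
    return all(abs(lam.imag) < 1e-12 for lam in eigenvalues_2x2(a, b, c, d))

print(is_hyperbolic(2.0, 1.0, 1.0, 2.0))   # eigenvalues 3 and 1 -> True
print(is_hyperbolic(0.0, -1.0, 1.0, 0.0))  # eigenvalues +/- i -> False
```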

  15. Entropy viscosity method applied to Euler equations

    International Nuclear Information System (INIS)

    Delchini, M. O.; Ragusa, J. C.; Berry, R. A.

    2013-01-01

    The entropy viscosity method [4] has been successfully applied to hyperbolic systems of equations such as Burgers equation and the Euler equations. The method consists in adding dissipative terms to the governing equations, where a viscosity coefficient modulates the amount of dissipation. The entropy viscosity method has been applied to the 1-D Euler equations with variable area using a continuous finite element discretization in the MOOSE framework, and our results show that it has the ability to efficiently smooth out oscillations and accurately resolve shocks. Two equations of state are considered: the Ideal Gas and Stiffened Gas equations of state. Results are provided for a second-order time-implicit scheme (BDF2). Some typical Riemann problems are run with the entropy viscosity method to demonstrate some of its features. Then, a 1-D convergent-divergent nozzle is considered with open boundary conditions. The correct steady state is reached for the liquid and gas phases with a time-implicit scheme. The entropy viscosity method behaves correctly in every problem run. For each test problem, results are shown for both equations of state considered here. (authors)
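
The construction of the viscosity coefficient can be sketched directly (an illustrative finite-difference evaluation on a frozen profile; the constants and the tanh profile are assumptions, not values from the paper). For Burgers, the entropy pair is E = u²/2 with flux F = u³/3, and on a steady profile the entropy residual reduces to dF/dx, which is large only at steep gradients, so the dissipation concentrates at the shock:

```python
import math

# Evaluate the entropy-viscosity coefficient on a smoothed shock profile:
# mu_i = min(c_ent * h^2 * |R_i| / ||E - Ebar||, c_max * h * |u_i|).
N, L = 400, 2.0
h = L / N
c_ent, c_max = 1.0, 0.5  # tuning constants (assumed)

delta = 0.02  # shock smoothing width
x = [i * h for i in range(N)]
u = [-math.tanh((xi - 1.0) / delta) for xi in x]  # front at x = 1

E = [0.5 * ui * ui for ui in u]          # entropy
F = [ui ** 3 / 3 for ui in u]            # entropy flux
Ebar = sum(E) / N
norm_E = max(abs(e - Ebar) for e in E)   # normalization
# Entropy residual on a steady profile: R = dF/dx (central differences).
R = [(F[min(i + 1, N - 1)] - F[max(i - 1, 0)]) / (2 * h) for i in range(N)]

mu = [min(c_ent * h * h * abs(R[i]) / norm_E, c_max * h * abs(u[i]))
      for i in range(N)]

i_max = mu.index(max(mu))
print(abs(x[i_max] - 1.0) < 0.1)  # viscosity peaks at the front
```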

  16. Analytical methods applied to water pollution

    International Nuclear Information System (INIS)

    Baudin, G.

    1977-01-01

    A comparison of different methods applied to water analysis is given. The discussion is limited to the problems presented by inorganic elements accessible to nuclear activation analysis. The following methods were compared: activation analysis with gamma-ray spectrometry; atomic absorption spectrometry; fluorimetry; emission spectrometry; colorimetry or spectrophotometry; X-ray fluorescence; mass spectrometry; voltammetry, polarography or other electrochemical methods; and activation analysis with beta measurements. Drinking water, irrigation water, sea water, industrial wastes and very pure waters are the subjects of the investigations. The comparative evaluation is made on the basis of storage of samples, in situ analysis, treatment and concentration, specificity and interference, monoelement or multielement analysis, analysis time and accuracy. The significance of neutron analysis is shown. (T.G.)

  17. Monitoring Freeze Thaw Transitions in Arctic Soils using Complex Resistivity Method

    Science.gov (United States)

    Wu, Y.; Hubbard, S. S.; Ulrich, C.; Dafflon, B.; Wullschleger, S. D.

    2012-12-01

    The Arctic region, which is a sensitive system that has emerged as a focal point for climate change studies, is characterized by a large amount of stored carbon and a rapidly changing landscape. Seasonal freeze-thaw transitions in the Arctic alter subsurface biogeochemical processes that control greenhouse gas fluxes from the subsurface. Our ability to monitor freeze-thaw cycles and associated biogeochemical transformations is critical to the development of process-rich ecosystem models, which are in turn important for gaining a predictive understanding of Arctic terrestrial system evolution and feedbacks with climate. In this study, we conducted both laboratory and field investigations to explore the use of the complex resistivity method to monitor freeze-thaw transitions of arctic soil in Barrow, AK. In the lab studies, freeze-thaw transitions were induced on soil samples having different average carbon content by exposing the arctic soil to temperature-controlled environments at +4 °C and -20 °C. Complex resistivity and temperature measurements were collected using electrical and temperature sensors installed along the soil columns. During the laboratory experiments, resistivity gradually changed over two orders of magnitude as the temperature was increased or decreased between -20 °C and 0 °C. Electrical phase responses at 1 Hz showed a dramatic and immediate response to the onset of freeze and thaw. Unlike the resistivity response, the phase response was found to be exclusively related to unfrozen water in the soil matrix, suggesting that this geophysical attribute can be used as a proxy for monitoring the onset and progression of freeze-thaw transitions. Spectral electrical responses contained additional information about the controls of soil grain size distribution on the freeze-thaw dynamics. Based on the demonstrated sensitivity of complex resistivity signals to the freeze-thaw transitions, field complex resistivity data were collected over ...
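
Spectral induced-polarization data of this kind are commonly interpreted with a Cole-Cole model. The sketch below (hypothetical parameters, not values fitted to the Barrow soils) evaluates the complex resistivity amplitude and phase at the 1 Hz frequency highlighted in the abstract:

```python
import math

# Cole-Cole model of complex resistivity:
#   rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + (1j*omega*tau)**c)))
# rho0: DC resistivity, m: chargeability, tau: relaxation time, c: exponent.
# All parameter values below are hypothetical, for illustration only.
def cole_cole(f, rho0, m, tau, c):
    omega = 2 * math.pi * f
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

rho = cole_cole(f=1.0, rho0=100.0, m=0.1, tau=0.01, c=0.5)
amplitude = abs(rho)
phase_mrad = 1000 * math.atan2(rho.imag, rho.real)  # phase in milliradians
print(round(amplitude, 1), round(phase_mrad, 2))    # negative phase (IP effect)
```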

  18. Spherical Slepian as a new method for ionospheric modeling in arctic region

    Science.gov (United States)

    Etemadfard, Hossein; Hossainali, Masoud Mashhadi

    2016-03-01

    From the perspective of the physical, chemical and biological balance in the world, the Arctic has gradually turned into an important region, opening the way for new research and scientific expeditions. In other words, a variety of research projects has been funded in order to study this frozen frontier in detail. The current study belongs to the same milieu: the authors propose a set of new base functions for modeling the ionosphere in the Arctic. To optimize the Spherical Harmonic (SH) functions, spatio-spectral concentration is applied here using the Slepian theory developed by Simons. For modeling the ionosphere, six International GNSS Service (IGS) stations located in the northern polar region were taken into account. Two other stations were left out for assessing the accuracy of the proposed model. The adopted GPS data starts at DOY 69 (Day of Year) and ends at DOY 83 (15 successive days in total) in 2013. Three Spherical Slepian models with maximal degrees of K = 15, 20 and 25 were used. Based on the results, K = 15 is the optimum degree for the proposed model. The accuracy and precision of the Slepian model are about 0.1 and 0.05 TECU, respectively (1 TECU = 10^16 electrons/m²). To demonstrate the advantage of this model, it is compared with polynomial and trigonometric series developed from the same set of measurements. The accuracy and precision of the trigonometric and polynomial models are at least 4 times worse than those of the Slepian one.
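
The concentration idea behind the Slepian construction is easiest to see in one dimension; the paper's construction is the spherical analogue. The sketch below builds the classic concentration kernel for sequences confined to N samples with bandwidth W, whose eigenvalues near 1 mark functions well concentrated in both the spatial and spectral domains (here the top eigenvalue is found by power iteration, using pure stdlib):

```python
import math

# 1-D Slepian concentration problem: solve A v = lambda v for the sinc
# kernel A[m,n] = sin(2*pi*W*(m-n)) / (pi*(m-n)), diagonal 2W.
N, W = 64, 0.08

def kernel(m, n):
    if m == n:
        return 2 * W
    return math.sin(2 * math.pi * W * (m - n)) / (math.pi * (m - n))

A = [[kernel(m, n) for n in range(N)] for m in range(N)]

def power_iteration(A, iters=500):
    """Return the dominant eigenvalue and eigenvector of symmetric A."""
    v = [1.0] * len(A)
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(len(A))) for i in range(len(A))]
        lam = math.sqrt(sum(wi * wi for wi in w))
        v = [wi / lam for wi in w]
    return lam, v

lam, v = power_iteration(A)
shannon = 2 * N * W  # Shannon number: expected count of concentrated modes
print(round(lam, 4), round(shannon, 1))  # leading eigenvalue close to 1
```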

  19. Arctic bioremediation

    International Nuclear Information System (INIS)

    Lidell, B.V.; Smallbeck, D.R.; Ramert, P.C.

    1991-01-01

    Cleanup of oil and diesel spills on gravel pads in the Arctic has typically been accomplished by utilizing a water flushing technique to remove the gross contamination, or excavating the spill area and placing the material into a lined pit, or a combination of both. Enhancing the biological degradation of hydrocarbon (bioremediation) by adding nutrients to the spill area has been demonstrated to be an effective cleanup tool in more temperate locations. However, this technique has never been considered for restoration in the Arctic because the process of microbial degradation of hydrocarbon in this area is very slow. The short growing season and apparent lack of nutrients in the gravel pads were thought to be detrimental to using bioremediation to clean up Arctic oil spills. This paper discusses the potential to utilize bioremediation as an effective method to clean up hydrocarbon spills in the northern latitudes

  20. Arctic bioremediation

    International Nuclear Information System (INIS)

    Liddell, B.V.; Smallbeck, D.R.; Ramert, P.C.

    1991-01-01

    Cleanup of oil and diesel spills on gravel pads in the Arctic has typically been accomplished by utilizing a water flushing technique to remove the gross contamination or excavating the spill area and placing the material into a lined pit, or a combination of both. This paper discusses the potential to utilize bioremediation as an effective method to clean up hydrocarbon spills in the northern latitudes. Discussed are the results of a laboratory bioremediation study which simulated microbial degradation of hydrocarbon under arctic conditions

  1. Applied Mathematical Methods in Theoretical Physics

    Science.gov (United States)

    Masujima, Michio

    2005-04-01

    All there is to know about functional analysis, integral equations and the calculus of variations in a single volume. This advanced textbook is divided into two parts: the first on integral equations and the second on the calculus of variations. It begins with a short introduction to functional analysis, including a short review of complex analysis, before continuing with a systematic discussion of different types of equations, such as Volterra integral equations, singular integral equations of Cauchy type, and integral equations of the Fredholm type, with a special emphasis on Wiener-Hopf integral equations and Wiener-Hopf sum equations. After a few remarks on the historical development, the second part starts with an introduction to the calculus of variations and the relationship between integral equations and applications of the calculus of variations. It further covers applications of the calculus of variations developed in the second half of the 20th century in the fields of quantum mechanics, quantum statistical mechanics and quantum field theory. Throughout the book, the author presents over 150 problems and exercises -- many from such branches of physics as quantum mechanics, quantum statistical mechanics, and quantum field theory -- together with outlines of the solutions in each case. Detailed solutions are given, supplementing the materials discussed in the main text, allowing problems to be solved by making direct use of the methods illustrated. The original references are given for difficult problems. The result is complete coverage of the mathematical tools and techniques used by physicists and applied mathematicians. Intended for senior undergraduates and first-year graduates in science and engineering, this is equally useful as a reference and self-study guide.

  2. Halogen determination in Arctic aerosols by neutron activation analysis with Compton suppression methods

    International Nuclear Information System (INIS)

    Landsberger, S.; Basunia, M.S.; Iskander, F.

    2001-01-01

    The study of halogens, particularly bromine and chlorine, in Arctic aerosols has received a great deal of attention in the past decade, notably in studies of ozone depletion during polar sunrise. Iodine has also been studied as part of geochemical cycling. It was shown that all three of the above elements can be determined simultaneously with very low detection limits using epithermal NAA in conjunction with Compton suppression methods. Besides lowering the background considerably, Compton suppression can eliminate or minimize the overlap of the 620 keV peak, arising from the double escape of the 1642 keV line of ³⁸Cl, with the 616.9 keV photopeak from the ⁷⁹Br(n,γ)⁸⁰Br reaction. Iodine is ideally determined by epithermal NAA because of its very good resonance integral cross-section. Although chlorine is usually determined using thermal neutrons via the ³⁷Cl(n,γ)³⁸Cl reaction, epithermal NAA is still feasible for the Arctic aerosol, since it has a major sea-salt component. (author)
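
The interference described in the abstract follows from simple gamma-ray arithmetic: a double-escape peak sits 2 × 511 keV below its parent photopeak, where 511 keV is the electron rest-mass energy (the line energies below are the standard nuclear-data values):

```python
# Double-escape peak energy: E_DE = E_gamma - 2 * 511 keV.
E_CL38_GAMMA = 1642.7   # keV, prominent gamma line of Cl-38
E_BR80_GAMMA = 616.9    # keV, photopeak from the Br-79(n,g)Br-80 reaction
ANNIHILATION = 511.0    # keV, electron rest-mass energy

double_escape = E_CL38_GAMMA - 2 * ANNIHILATION
print(round(double_escape, 1))                          # 620.7 keV
print(round(abs(double_escape - E_BR80_GAMMA), 1))      # only 3.8 keV apart
```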

  3. Applying scrum methods to ITS projects.

    Science.gov (United States)

    2017-08-01

    The introduction of new technology generally brings new challenges and new methods to help with deployments. Agile methodologies have been introduced in the information technology industry to potentially speed up development. The Federal Highway Admi...

  4. Applying Fuzzy Possibilistic Methods on Critical Objects

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz

    2016-01-01

    Providing a flexible environment to process data objects is a desirable goal of machine learning algorithms. In fuzzy and possibilistic methods, the relevance of data objects is evaluated and a membership degree is assigned. However, some critical objects have the potential ability to affect the performance of the clustering algorithms if they remain in a specific cluster or are moved into another. In this paper we analyze and compare how critical objects affect the behaviour of fuzzy possibilistic methods in several data sets. The comparison is based on the accuracy and ability of the learning methods to provide a proper searching space for data objects. The membership functions used by each method when dealing with critical objects are also evaluated. Our results show that relaxing the conditions of participation, so that data objects can take part in as many partitions as they can, is beneficial.
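
The membership assignment discussed above can be illustrated with the standard fuzzy c-means membership formula (shown for fixed toy centroids, not the paper's algorithm). Note how an equidistant "critical object" receives equal membership in both clusters, which is exactly the ambiguous case the paper examines:

```python
import math

# Fuzzy c-means membership: u_i = 1 / sum_j (d_i / d_j) ** (2 / (m - 1)),
# where d_i is the distance from the point to centroid i and m is the
# fuzzifier. Centroids below are illustrative, not learned.
def memberships(point, centroids, m=2.0):
    d = [math.dist(point, c) for c in centroids]
    if any(di == 0 for di in d):  # point coincides with a centroid
        return [1.0 if di == 0 else 0.0 for di in d]
    return [1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(len(d)))
            for i in range(len(d))]

centroids = [(0.0, 0.0), (4.0, 0.0)]
print([round(u, 3) for u in memberships((1.0, 0.0), centroids)])  # [0.9, 0.1]
print([round(u, 3) for u in memberships((2.0, 0.0), centroids)])  # [0.5, 0.5]
```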

  5. Quality assurance and applied statistics. Method 3

    International Nuclear Information System (INIS)

    1992-01-01

    This German Industry Standards (DIN) paperback contains the international standards of the ISO 9000 series (and, correspondingly, the European standards of the EN 29000 series) concerning quality assurance, including the already completed supplementary guidelines with ISO 9000 and ISO 9004 section numbers, which have been adopted as German Industry Standards and are observed and applied worldwide to a great extent. It also includes German Industry Standards ISO 10011 parts 1, 2 and 3, concerning the auditing of quality-assurance systems, and German Industry Standard ISO 10012 part 1, concerning quality-assurance requirements (confirmation system) for measuring devices. The standards also include English and French versions. They are applicable independently of the user's line of industry and thus constitute basic standards. (orig.) [de

  6. Lavine method applied to three body problems

    International Nuclear Information System (INIS)

    Mourre, Eric.

    1975-09-01

    The methods presently proposed for the three-body problem in quantum mechanics, using the Faddeev approach for proving asymptotic completeness, come up against new singularities when the two-particle interaction potentials vα(xα) decay less rapidly than |xα|⁻²; and also when attempts are made to solve the problem in a representation space whose dimension per particle is lower than three. A method is given that allows the mathematical approach to be extended to the three-body problem in spite of these singularities. Applications are given [fr

  7. Design and methods in a survey of living conditions in the Arctic - the SLiCA study.

    Science.gov (United States)

    Eliassen, Bent-Martin; Melhus, Marita; Kruse, Jack; Poppel, Birger; Broderstad, Ann Ragnhild

    2012-03-19

    The main objective of this study is to describe the methods and design of the survey of living conditions in the Arctic (SLiCA), relevant participation rates and the distribution of participants, as applicable to the survey data in Alaska, Greenland and Norway. This article briefly addresses possible selection bias in the data and also the ways to tackle it in future studies. Population-based cross-sectional survey. Indigenous individuals aged 16 years and older, living in Greenland, Alaska and in traditional settlement areas in Norway, were invited to participate. Random sampling methods were applied in Alaska and Greenland, while non-probability sampling methods were applied in Norway. Data were collected in 3 periods: in Alaska, from January 2002 to February 2003; in Greenland, from December 2003 to August 2006; and in Norway, in 2003 and from June 2006 to June 2008. The principal method in SLiCA was standardised face-to-face interviews using a questionnaire. A total of 663, 1,197 and 445 individuals were interviewed in Alaska, Greenland and Norway, respectively. Very high overall participation rates of 83% were obtained in Greenland and Alaska, while a more conventional rate of 57% was achieved in Norway. A predominance of female respondents was obtained in Alaska. Overall, the Sami cohort is older than the cohorts from Greenland and Alaska. Preliminary assessments suggest that selection bias in the Sami sample is plausible but not a major threat. Few or no threats to validity are detected in the data from Alaska and Greenland. Despite different sampling and recruitment methods, and sociocultural differences, a unique database has been generated, which shall be used to explore relationships between health and other living conditions variables.

  8. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  9. Applying Mixed Methods Techniques in Strategic Planning

    Science.gov (United States)

    Voorhees, Richard A.

    2008-01-01

    In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed methods, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…

  10. [The diagnostic methods applied in mycology].

    Science.gov (United States)

    Kurnatowska, Alicja; Kurnatowski, Piotr

    2008-01-01

    Systemic fungal invasions are recognized with increasing frequency and constitute a primary cause of morbidity and mortality, especially in immunocompromised patients. Early diagnosis improves prognosis, but it remains a problem because, on the one hand, there is a lack of sensitive tests to aid in the diagnosis of systemic mycoses and, on the other, patients present only unspecific signs and symptoms, thus delaying early diagnosis. The diagnosis depends upon a combination of clinical observation and laboratory investigation. Successful laboratory diagnosis of fungal infection depends in large part on the collection of appropriate clinical specimens and on the selection of appropriate microbiological test procedures. These problems (collection of specimens, direct techniques, staining methods, cultures on different media and non-culture-based methods) are presented in this article.

  11. Monte Carlo method applied to medical physics

    International Nuclear Information System (INIS)

    Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.

    2000-01-01

    The main application of the Monte Carlo method to medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other different applications: optimisation of a neutron field for Boron Neutron Capture Therapy and optimisation of a filter for a beam tube for several purposes. The long computation time required by Monte Carlo calculations - the main barrier to their intensive use - is being overcome by faster and cheaper computers. (author)
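
    As a flavor of the method, the toy sketch below estimates a depth-dose profile by sampling photon interaction depths from an exponential attenuation law. This is only an illustrative sketch: real medical-physics Monte Carlo codes track scattering, secondary particles and full 3-D geometry, none of which is modeled here, and the attenuation coefficient is an invented value.

    ```python
    import math
    import random

    def mc_depth_dose(n_photons, mu=0.2, slab_cm=10.0, nbins=10, seed=1):
        """Toy Monte Carlo dose estimate: photons enter a 1-D slab and are
        absorbed at exponentially distributed depths (attenuation
        coefficient mu, per cm); each absorption deposits one unit of
        energy in the depth bin where it occurs."""
        rng = random.Random(seed)
        dose = [0.0] * nbins
        width = slab_cm / nbins
        for _ in range(n_photons):
            depth = -math.log(1.0 - rng.random()) / mu  # sampled free path
            if depth < slab_cm:                         # absorbed inside the slab
                dose[int(depth // width)] += 1.0
        return [d / n_photons for d in dose]            # fraction absorbed per bin

    profile = mc_depth_dose(100_000)
    # the fraction absorbed in the first 1-cm bin should be near 1 - exp(-0.2),
    # and successive bins receive exponentially less dose
    ```

    The statistical noise of such estimates shrinks as 1/sqrt(n_photons), which is exactly why the long run times mentioned above were the method's main practical barrier.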

  12. Proteomics methods applied to malaria: Plasmodium falciparum

    International Nuclear Information System (INIS)

    Cuesta Astroz, Yesid; Segura Latorre, Cesar

    2012-01-01

    Malaria is a parasitic disease with a high impact on public health in developing countries. The sequencing of the Plasmodium falciparum genome and the development of proteomics have enabled a breakthrough in understanding the biology of the parasite. Proteomics has allowed qualitative and quantitative characterization of the parasite's protein expression and has provided information on protein expression under conditions of stress induced by antimalarials. Given the complexity of the parasite's life cycle, which takes place in both the vertebrate host and the mosquito vector, it has proven difficult to characterize protein expression during each stage of the infection process in order to determine the proteome that mediates several metabolic, physiological and energetic processes. Two-dimensional electrophoresis, liquid chromatography and mass spectrometry have been useful to assess the effects of antimalarials on parasite protein expression and to characterize the proteomic profile of different P. falciparum stages and organelles. The purpose of this review is to present the state of the art in tools and advances in proteomics applied to the study of malaria, and to present the different experimental strategies used to study the parasite's proteome, showing the advantages and disadvantages of each one.

  13. METHOD OF APPLYING NICKEL COATINGS ON URANIUM

    Science.gov (United States)

    Gray, A.G.

    1959-07-14

    A method is presented for protectively coating uranium which comprises etching the uranium in an aqueous etching solution containing chloride ions, electroplating a coating of nickel on the etched uranium and heating the nickel plated uranium by immersion thereof in a molten bath composed of a material selected from the group consisting of sodium chloride, potassium chloride, lithium chloride, and mixtures thereof, maintained at a temperature of between 700 and 800 deg C, for a time sufficient to alloy the nickel and uranium and form an integral protective coating of corrosion-resistant uranium-nickel alloy.

  14. Versatile Formal Methods Applied to Quantum Information.

    Energy Technology Data Exchange (ETDEWEB)

    Witzel, Wayne [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Rudinger, Kenneth Michael [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sarovar, Mohan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    Using a novel formal methods approach, we have generated computer-verified proofs of major theorems pertinent to the quantum phase estimation algorithm. This was accomplished using our Prove-It software package in Python. While many formal methods tools are available, their practical utility is limited. Translating a problem of interest into these systems and working through the steps of a proof is an art form that requires much expertise. One must surrender to the preferences and restrictions of the tool regarding how mathematical notions are expressed and what deductions are allowed. Automation is a major driver that forces restrictions. Our focus, on the other hand, is to produce a tool that allows users the ability to confirm proofs that are essentially known already. This goal is valuable in itself. We demonstrate the viability of our approach that allows the user great flexibility in expressing statements and composing derivations. There were no major obstacles in following a textbook proof of the quantum phase estimation algorithm. There were tedious details of algebraic manipulations that we needed to implement (and a few that we did not have time to enter into our system) and some basic components that we needed to rethink, but there were no serious roadblocks. In the process, we made a number of convenient additions to our Prove-It package that will make certain algebraic manipulations easier to perform in the future. In fact, our intent is for our system to build upon itself in this manner.

  15. Optimization methods applied to hybrid vehicle design

    Science.gov (United States)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
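
    As an illustration of the idea (optimizing battery weight, engine rating and power split against a cost measure with a power-demand constraint), the sketch below runs an exhaustive grid search over a deliberately simplified, hypothetical cost surrogate. It is not the paper's vehicle simulation or cost model; every coefficient is invented.

    ```python
    def life_cycle_cost(battery_kg, engine_kw, split):
        """Hypothetical cost surrogate (NOT the paper's vehicle model):
        battery and engine size drive acquisition cost, an aggressive
        power split is penalized, and any shortfall against a 60 kW
        power demand is heavily penalized."""
        supplied_kw = engine_kw + 0.05 * battery_kg * split
        shortfall = max(0.0, 60.0 - supplied_kw)
        return (0.02 * battery_kg + 0.5 * engine_kw
                + 30.0 * split ** 2 + 10.0 * shortfall)

    # Exhaustive grid search over the three design parameters
    candidates = ((b, e, s / 10)
                  for b in range(50, 601, 25)   # battery weight, kg
                  for e in range(10, 81, 5)     # heat engine rating, kW
                  for s in range(0, 11))        # power split, 0..1
    best = min(candidates, key=lambda p: life_cycle_cost(*p))
    # for this surrogate: smallest battery, engine sized to just meet
    # demand, and no power split
    ```

    The paper's fourth conclusion (keep cost and constraint expressions smooth) matters precisely when such brute-force search is replaced by gradient-based optimizers.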

  16. Applying the Socratic Method to Physics Education

    Science.gov (United States)

    Corcoran, Ed

    2005-04-01

    We have restructured University Physics I and II in accordance with methods that PER has shown to be effective, including a more interactive discussion- and activity-based curriculum based on the premise that developing understanding requires an interactive process in which students have the opportunity to talk through and think through ideas with both other students and the teacher. Studies have shown that in classes implementing this approach, as compared to classes using a traditional approach, students have significantly higher gains on the Force Concept Inventory (FCI). This has been true in UPI. However, UPI FCI results seem to suggest that there is a significant conceptual hole in students' understanding of Newton's Second Law. Two labs in UPI which teach Newton's Second Law will be redesigned, replacing some of the lab activity with students as a group talking through, thinking through, and answering conceptual questions asked by the TA. The results will be measured by comparing FCI scores with those from previous semesters, coupled with interviews. The results will be analyzed, and we will attempt to understand why gains were or were not made.
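
    A standard way to compare FCI results across semesters (the abstract does not specify the metric, so this is an assumption) is Hake's normalized gain, the fraction of the possible improvement a class actually achieves:

    ```python
    def normalized_gain(pre_pct, post_pct):
        """Hake's normalized gain <g>: the fraction of the possible FCI
        improvement (100 - pretest score) that the class achieved."""
        return (post_pct - pre_pct) / (100.0 - pre_pct)

    # e.g. a class moving from 45% to 70% average on the FCI
    g = normalized_gain(45.0, 70.0)
    # Hake's 1998 survey reported <g> roughly 0.3-0.6 for
    # interactive-engagement courses vs. ~0.2 for traditional lectures
    ```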

  17. Scanning probe methods applied to molecular electronics

    Energy Technology Data Exchange (ETDEWEB)

    Pavlicek, Niko

    2013-08-01

    Scanning probe methods on insulating films offer a rich toolbox to study electronic, structural and spin properties of individual molecules. This work discusses three issues in the field of molecular and organic electronics. An STM head to be operated in high magnetic fields has been designed and built. The STM head is very compact and rigid, relying on a robust coarse approach mechanism. This will facilitate investigations of the spin properties of individual molecules in the future. Combined STM/AFM studies revealed a reversible molecular switch based on two stable configurations of DBTH molecules on ultrathin NaCl films. AFM experiments visualize the molecular structure in both states. Our experiments allowed us to determine the pathway of the switch unambiguously. Finally, tunneling into and out of the frontier molecular orbitals of pentacene molecules has been investigated on different insulating films. These experiments show that the local symmetries of the initial and final electron wave functions are decisive for the ratio between elastic and vibration-assisted tunneling. The results can be generalized to electron transport in organic materials.

  18. Applying High Resolution Imagery to Understand the Role of Dynamics in the Diminishing Arctic Sea Ice Cover

    Science.gov (United States)

    2015-09-30

    describe contemporary ice pack thickness, MODIS, AVHRR, RadarSat-2 (satellite imagery) that describe ice pack deformation features on large scales, as well... high-resolution visible-band images of the Arctic ice pack that are available at the GFL, USGS. The statistics related to the available images are... University of Maryland team as a Faculty Research Assistant, working under the guidance of Co-PI Farrell. Ms. Faber is responsible for analysis of MODIS

  19. Reflections on Mixing Methods in Applied Linguistics Research

    Science.gov (United States)

    Hashemi, Mohammad R.

    2012-01-01

    This commentary advocates the use of mixed methods research--that is the integration of qualitative and quantitative methods in a single study--in applied linguistics. Based on preliminary findings from a research project in progress, some reflections on the current practice of mixing methods as a new trend in applied linguistics are put forward.…

  20. Applying homotopy analysis method for solving differential-difference equation

    International Nuclear Information System (INIS)

    Wang Zhen; Zou Li; Zhang Hongqing

    2007-01-01

    In this Letter, we apply the homotopy analysis method to solve differential-difference equations. A simple but typical example is used to illustrate the validity and the great potential of the generalized homotopy analysis method for differential-difference equations. Comparisons are made between the results of the proposed method and exact solutions. The results show that the homotopy analysis method is an attractive method for solving differential-difference equations.

  1. Limited dietary overlap amongst resident Arctic herbivores in winter: complementary insights from complementary methods.

    Science.gov (United States)

    Schmidt, Niels M; Mosbacher, Jesper B; Vesterinen, Eero J; Roslin, Tomas; Michelsen, Anders

    2018-04-26

    Snow may prevent Arctic herbivores from accessing their forage in winter, forcing them to aggregate in the few patches with limited snow. In High Arctic Greenland, Arctic hare and rock ptarmigan often forage in muskox feeding craters. We therefore hypothesized that due to limited availability of forage, the dietary niches of these resident herbivores overlap considerably, and that the overlap increases as winter progresses. To test this, we analyzed fecal samples collected in early and late winter. We used molecular analysis to identify the plant taxa consumed, and stable isotope ratios of carbon and nitrogen to quantify the dietary niche breadth and dietary overlap. The plant taxa found indicated only limited dietary differentiation between the herbivores. As expected, dietary niches exhibited a strong contraction from early to late winter, especially for rock ptarmigan. This may indicate increasing reliance on particular plant resources as winter progresses. In early winter, the diet of rock ptarmigan overlapped slightly with that of muskox and Arctic hare. Contrary to our expectations, no inter-specific dietary niche overlap was observed in late winter. This overall pattern was specifically revealed by combined analysis of molecular data and stable isotope contents. Hence, despite foraging in the same areas and generally feeding on the same plant taxa, the quantitative dietary overlap between the three herbivores was limited. This may be attributable to species-specific consumption rates of plant taxa. Yet, Arctic hare and rock ptarmigan may benefit from muskox opening up the snow pack, thereby allowing them to access the plants.
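
    Quantitative dietary overlap of the kind estimated above is commonly summarized with an overlap index computed from diet proportions. The study itself combined molecular diet data with stable isotope niches; the sketch below uses Pianka's index with invented proportions purely as an illustration of how such an overlap number is obtained.

    ```python
    import math

    def pianka_overlap(p, q):
        """Pianka's symmetric niche-overlap index for two diet proportion
        vectors: 1 means identical diets, 0 means no shared taxa."""
        num = sum(pi * qi for pi, qi in zip(p, q))
        return num / math.sqrt(sum(pi * pi for pi in p) *
                               sum(qi * qi for qi in q))

    # Hypothetical winter diet proportions over three plant taxa
    muskox    = [0.6, 0.3, 0.1]
    ptarmigan = [0.1, 0.2, 0.7]
    overlap = pianka_overlap(muskox, ptarmigan)
    # feeding on the same taxa in different proportions yields an
    # overlap well below 1, as the study found
    ```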

  2. Arctic Climate Systems Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ivey, Mark D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Robinson, David G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Backus, George A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Peterson, Kara J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); van Bloemen Waanders, Bart G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Desilets, Darin Maurice [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reinert, Rhonda Karen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-03-01

    This study began with a challenge from program area managers at Sandia National Laboratories to technical staff in the energy, climate, and infrastructure security areas: apply a systems-level perspective to existing science and technology program areas in order to determine technology gaps, identify new technical capabilities at Sandia that could be applied to these areas, and identify opportunities for innovation. The Arctic was selected as one of these areas for systems level analyses, and this report documents the results. In this study, an emphasis was placed on the arctic atmosphere since Sandia has been active in atmospheric research in the Arctic since 1997. This study begins with a discussion of the challenges and benefits of analyzing the Arctic as a system. It goes on to discuss current and future needs of the defense, scientific, energy, and intelligence communities for more comprehensive data products related to the Arctic; assess the current state of atmospheric measurement resources available for the Arctic; and explain how the capabilities at Sandia National Laboratories can be used to address the identified technological, data, and modeling needs of the defense, scientific, energy, and intelligence communities for Arctic support.

  3. Printing method and printer used for applying this method

    NARCIS (Netherlands)

    2006-01-01

    The invention pertains to a method for transferring ink to a receiving material using an inkjet printer having an ink chamber (10) with a nozzle (8) and an electromechanical transducer (16) in cooperative connection with the ink chamber, comprising actuating the transducer to generate a pressure

  4. Ethics, Collaboration, and Presentation Methods for Local and Traditional Knowledge for Understanding Arctic Change

    Science.gov (United States)

    Parsons, M. A.; Gearheard, S.; McNeave, C.

    2009-12-01

    Local and traditional knowledge (LTK) provides rich information about the Arctic environment at spatial and temporal scales that scientific knowledge often does not have access to (e.g. localized observations of fine-scale ecological change potentially from many different communities, or local sea ice and conditions prior to 1950s ice charts and 1970s satellite records). Community-based observations and monitoring are an opportunity for Arctic residents to provide ‘frontline’ observations and measurements that are an early warning system for Arctic change. The Exchange for Local Observations and Knowledge of the Arctic (ELOKA) was established in response to the growing number of community-based and community-oriented research and observation projects in the Arctic. ELOKA provides data management and user support to facilitate the collection, preservation, exchange, and use of local observations and knowledge. Managing these data presents unique ethical challenges in terms of appropriate use of rare human knowledge and ensuring that knowledge is not lost from the local communities and not exploited in ways antithetical to community culture and desires. Local Arctic residents must be engaged as true collaborative partners while respecting their perspectives, which may vary substantially from a western science perspective. At the same time, we seek to derive scientific meaning from the local knowledge that can be used in conjunction with quantitative science data. This creates new challenges in terms of data presentation, knowledge representations, and basic issues of metadata. This presentation reviews these challenges, some initial approaches to addressing them, and overall lessons learned and future directions.

  5. Determining the Diversity and Species Abundance Patterns in Arctic Soils using Rational Methods for Exploring Microbial Diversity

    Science.gov (United States)

    Ovreas, L.; Quince, C.; Sloan, W.; Lanzen, A.; Davenport, R.; Green, J.; Coulson, S.; Curtis, T.

    2012-12-01

    Arctic microbial soil communities are intrinsically interesting and poorly characterised. We have inferred the diversity and species abundance distribution of 6 Arctic soils: new and mature soil at the foot of a receding glacier, Arctic semi-desert, the foot of bird cliffs, and soil underlying Arctic tundra heath, all near Ny-Ålesund, Spitsbergen. Diversity, distribution and sample sizes were estimated using the rational method of Quince et al. (ISME Journal 2, 2008: 997-1006) to determine the most plausible underlying species abundance distribution. A log-normal species abundance curve was found to give a slightly better fit than an inverse Gaussian curve if, and only if, sequencing error was removed. The median estimates of diversity of operational taxonomic units (at the 3% level) were 3600-5600 (lognormal assumed) and 2825-4100 (inverse Gaussian assumed). The nature and origins of species abundance distributions are poorly understood but may yet be grasped by observing and analysing such distributions in the microbial world. The sample size required to observe the distribution (by sequencing 90% of the taxa) varied between ~10^6 and ~10^5 for the lognormal and inverse Gaussian respectively. We infer that between 5 and 50 GB of sequencing would be required to capture 90% of the metagenome. Though a principal components analysis clearly divided the sites into three groups, there was a high (20-45%) degree of overlap between locations irrespective of geographical proximity. Interestingly, the nearest relatives of the most abundant taxa at most sites were of alpine or polar origin. (Figure: samples plotted on the first two principal components together with arbitrary discriminatory OTUs.)
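
    The sample-size estimates above follow from a simple coverage calculation: for a community with relative abundances p_i, the expected number of taxa seen in n random reads is sum_i (1 - (1 - p_i)^n). The sketch below applies this to a simulated lognormal community (invented parameters, not the study's fitted distribution) to find how many reads are needed to detect 90% of taxa in expectation.

    ```python
    import math
    import random

    def expected_taxa_observed(abundances, n_reads):
        """Expected number of taxa detected in n random reads from a
        community with the given abundances: E[S_obs] = sum_i (1 - (1 - p_i)^n)."""
        total = sum(abundances)
        return sum(1.0 - (1.0 - a / total) ** n_reads for a in abundances)

    def reads_for_coverage(abundances, frac=0.9):
        """Smallest sample size (found by doubling) whose expected taxon
        count reaches the requested fraction of the community."""
        target = frac * len(abundances)
        n = 1
        while expected_taxa_observed(abundances, n) < target:
            n *= 2
        return n

    # Hypothetical lognormal community of 4000 taxa (same order as the
    # 2825-5600 OTU estimates quoted above; sigma = 2 is invented)
    rng = random.Random(0)
    community = [math.exp(rng.gauss(0.0, 2.0)) for _ in range(4000)]
    n_needed = reads_for_coverage(community)
    # the long rare tail of a lognormal forces far more reads than taxa
    ```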

  6. Arctic Newcomers

    DEFF Research Database (Denmark)

    Tonami, Aki

    2013-01-01

    Interest in the Arctic region and its economic potential in Japan, South Korea and Singapore was slow to develop but is now rapidly growing. All three countries have in recent years accelerated their engagement with Arctic states, laying the institutional frameworks needed to better understand and influence policies relating to the Arctic. But each country’s approach is quite different, writes Aki Tonami.

  7. Discrimination symbol applying method for sintered nuclear fuel product

    International Nuclear Information System (INIS)

    Ishizaki, Jin

    1998-01-01

    The present invention provides a method for applying discrimination information, such as enrichment degree, to the end face of a sintered nuclear fuel product. Namely, discrimination symbols encoding powder information are applied with a sintering aid to the end face of a compact formed by molding nuclear fuel powders under pressure. Then, the compact is sintered. The sintering aid comprises aluminum oxide, a mixture of aluminum oxide and silicon dioxide, aluminum hydride, or aluminum stearate, alone or in admixture. To apply the sintering aid, the discrimination symbols are drawn with isostearic acid on the end face of the compact and the sintering aid is sprayed onto them; alternatively, the sintering aid is applied directly, or it is suspended in isostearic acid and the suspension applied with a brush. As a result, visible discrimination information can be applied to the sintered member easily. (N.H.)

  8. Building "Applied Linguistic Historiography": Rationale, Scope, and Methods

    Science.gov (United States)

    Smith, Richard

    2016-01-01

    In this article I argue for the establishment of "Applied Linguistic Historiography" (ALH), that is, a new domain of enquiry within applied linguistics involving a rigorous, scholarly, and self-reflexive approach to historical research. Considering issues of rationale, scope, and methods in turn, I provide reasons why ALH is needed and…

  9. Applying Mixed Methods Research at the Synthesis Level: An Overview

    Science.gov (United States)

    Heyvaert, Mieke; Maes, Bea; Onghena, Patrick

    2011-01-01

    Historically, qualitative and quantitative approaches have been applied relatively separately in synthesizing qualitative and quantitative evidence, respectively, in several research domains. However, mixed methods approaches are becoming increasingly popular nowadays, and practices of combining qualitative and quantitative research components at…

  10. Application of data assimilation methods for analysis and integration of observed and modeled Arctic Sea ice motions

    Science.gov (United States)

    Meier, Walter Neil

    This thesis demonstrates the applicability of data assimilation methods to improve observed and modeled ice motion fields and to demonstrate the effects of assimilated motion on Arctic processes important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but the effect of the assimilation is reduced due to noise in the SSM/I motions and is thus not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25--30% over modeled motions and 40--45% over SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model. This research presents an
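
    The optimal interpolation step described above can be sketched pointwise: the analysis equals the background (model) motion plus a gain times the innovation (observation minus background), with the gain set by the relative error variances of the two sources. The numbers below are hypothetical, not the thesis's actual error statistics.

    ```python
    def optimal_interpolation(background, observations, var_b, var_o):
        """Pointwise optimal interpolation: add a fraction (the gain) of
        the innovation to the background; the gain weights each source
        inversely to its error variance."""
        gain = var_b / (var_b + var_o)
        return [b + gain * (o - b) for b, o in zip(background, observations)]

    # Hypothetical u-components of ice drift (cm/s): model background vs
    # SSM/I-derived motions, with SSM/I assumed three times noisier
    model_u = [3.0, 2.5, 4.0]
    ssmi_u  = [3.6, 2.1, 4.8]
    analysis_u = optimal_interpolation(model_u, ssmi_u, var_b=1.0, var_o=3.0)
    # gain = 0.25, so the analysis stays closer to the model estimates
    ```

    This variance weighting is why the noisier SSM/I motions still improve the analysis without dominating it, consistent with the 25-45% error reductions reported above.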

  11. Quantitative EEG Applying the Statistical Recognition Pattern Method

    DEFF Research Database (Denmark)

    Engedal, Knut; Snaedal, Jon; Hoegh, Peter

    2015-01-01

    BACKGROUND/AIM: The aim of this study was to examine the discriminatory power of quantitative EEG (qEEG) applying the statistical pattern recognition (SPR) method to separate Alzheimer's disease (AD) patients from elderly individuals without dementia and from other dementia patients. METHODS...

  12. Electronic-projecting Moire method applying CBR-technology

    Science.gov (United States)

    Kuzyakov, O. N.; Lapteva, U. V.; Andreeva, M. A.

    2018-01-01

    An electronic-projecting method based on the Moire effect for examining surface topology is suggested. The conditions for forming Moire fringes and the dependence of their parameters on the reference parameters of the object and virtual grids are analyzed. The control system structure and decision-making subsystem are elaborated. The subsystem's execution includes CBR technology, based on applying a case base. The approach of analysing and forming a decision for each separate local area, with subsequent formation of a common topology map, is applied.
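
    The fringe-formation conditions mentioned above rest on a standard geometric result: superposing two line gratings of pitches p1 and p2 rotated by an angle theta produces fringes with spacing p1*p2 / sqrt(p1^2 + p2^2 - 2*p1*p2*cos(theta)). A quick check with illustrative values (not parameters from the paper):

    ```python
    import math

    def moire_fringe_spacing(p1, p2, theta_rad):
        """Spacing of Moire fringes produced by superposing two line
        gratings of pitches p1 and p2 rotated by theta."""
        return p1 * p2 / math.sqrt(p1 ** 2 + p2 ** 2
                                   - 2.0 * p1 * p2 * math.cos(theta_rad))

    # Two equal 0.5 mm gratings at 2 degrees: fringe spacing is ~29x the
    # grating pitch - this magnification of small mismatches is what
    # makes Moire methods sensitive to surface topology.
    d = moire_fringe_spacing(0.5, 0.5, math.radians(2.0))
    ```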

  13. A Lagrangian meshfree method applied to linear and nonlinear elasticity.

    Science.gov (United States)

    Walker, Wade A

    2017-01-01

    The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.

  14. Application of a Real-time Reverse Transcription Loop Mediated Amplification Method to the Detection of Rabies Virus in Arctic Foxes in Greenland

    DEFF Research Database (Denmark)

    Wakeley, Philip; Johnson, Nicholas; Rasmussen, Thomas Bruun

    Reverse transcription loop-mediated amplification (RT-LAMP) offers a rapid, isothermal method for amplification of virus RNA. In this study a panel of positive rabies virus samples originally prepared from Arctic fox brain tissue was assessed for the presence of rabies viral RNA using a real-time RT-LAMP. The method had previously been shown to work with samples from Ghana which clustered with cosmopolitan-lineage rabies viruses, but the assay had not been assessed using samples from animals infected with rabies from the Arctic region. The assay is designed to amplify both cosmopolitan strains and arctic-like strains of classical rabies virus due to the primer design, and is therefore expected to be universally applicable independent of the region of the world where the virus is isolated. Of the samples tested, all were found to be positive after incubation for 25 to 30 minutes. The method made use...

  15. Applying the Taguchi method for optimized fabrication of bovine ...

    African Journals Online (AJOL)

    2008-02-19

    Feb 19, 2008 ... Nanobiotechnology Research Lab., School of Chemical Engineering, Babol University of Technology, Po.Box: 484, ... nanoparticle by applying the Taguchi method with characterization of the ... of BSA/ethanol and organic solvent adding rate. ... Sodium azide and all other chemicals were purchased from.
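
    Taguchi optimization ranks factor settings by a signal-to-noise ratio; for minimizing a response such as nanoparticle size, the "smaller the better" form SN = -10*log10(mean(y^2)) applies. The replicate values below are hypothetical, not the study's measurements.

    ```python
    import math

    def sn_smaller_is_better(ys):
        """Taguchi 'smaller the better' signal-to-noise ratio:
        SN = -10 * log10(mean(y^2)); a higher SN indicates a smaller,
        more consistent response (here, particle size)."""
        return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

    # Hypothetical replicate particle sizes (nm) for two factor settings
    run_a = [110.0, 120.0]
    run_b = [180.0, 200.0]
    better = max((run_a, run_b), key=sn_smaller_is_better)
    # run_a has the higher SN, so its factor levels would be preferred
    ```

    In a full Taguchi study, such SN values are averaged per factor level across an orthogonal array to pick the best level of each factor.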

  16. Nudging the Arctic Ocean to quantify Arctic sea ice feedbacks

    Science.gov (United States)

    Dekker, Evelien; Severijns, Camiel; Bintanja, Richard

    2017-04-01

    It is well established that the Arctic is warming 2 to 3 times faster than the rest of the planet. One of the great uncertainties in climate research is the extent to which sea ice feedbacks amplify this (seasonally varying) Arctic warming. Earlier studies have analyzed existing climate model output using correlations and energy budget considerations in order to quantify sea ice feedbacks through indirect methods. From these analyses it is regularly inferred that sea ice likely plays an important role, but details remain obscure. Here we take a different and more direct approach: we keep the sea ice constant in a sensitivity simulation, using a state-of-the-art climate model (EC-Earth) and applying a technique that has never been attempted before. This technique involves nudging the temperature and salinity of the ocean surface (and possibly some layers below, to maintain the vertical structure and mixing) to a predefined prescribed state. When strongly nudged to existing (seasonally varying) sea surface temperatures, ocean salinity and temperature, the sea ice is forced to remain in the regions and seasons where it is located in the prescribed state, despite the changing climate. Once we obtain 'fixed' sea ice, we will run a future scenario, for instance 2 x CO2, with and without prescribed sea ice; the difference between these runs provides a measure of the extent to which sea ice contributes to Arctic warming, including the seasonal and geographical imprint of the effects.
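
    The nudging described above is Newtonian relaxation: each surface grid cell is pulled toward the prescribed state with a chosen relaxation timescale, and strong nudging means a short timescale. The sketch below uses illustrative values and a simple explicit step; it is not EC-Earth's actual nudging implementation.

    ```python
    def nudge(field, target, dt, tau):
        """One step of Newtonian relaxation: pull every grid cell toward
        the prescribed target with e-folding time tau."""
        return [x + (dt / tau) * (t - x) for x, t in zip(field, target)]

    # Hypothetical SSTs (deg C) relaxed toward a prescribed state with a
    # 1-day timescale, using a 1-hour time step
    sst        = [-1.8, 0.5, 2.0]
    prescribed = [-1.8, 0.0, 1.0]
    for _ in range(24):               # integrate one model day
        sst = nudge(sst, prescribed, dt=3600.0, tau=86400.0)
    # after one day each cell has closed about 64% of its initial gap
    ```

    Making tau much shorter than the forcing timescales is what effectively pins the surface state, and hence the sea ice, to the prescribed climatology.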

  17. Aircraft operability methods applied to space launch vehicles

    Science.gov (United States)

    Young, Douglas

    1997-01-01

    The commercial space launch market requirement for low vehicle operations costs necessitates the application of methods and technologies developed and proven for complex aircraft systems. The "building in" of reliability and maintainability, which is applied extensively in the aircraft industry, has yet to be applied to the maximum extent possible on launch vehicles. Use of vehicle system and structural health monitoring, automated ground systems and diagnostic design methods derived from aircraft applications support the goal of achieving low cost launch vehicle operations. Transforming these operability techniques to space applications where diagnostic effectiveness has significantly different metrics is critical to the success of future launch systems. These concepts will be discussed with reference to broad launch vehicle applicability. Lessons learned and techniques used in the adaptation of these methods will be outlined drawing from recent aircraft programs and implementation on phase 1 of the X-33/RLV technology development program.

  18. Magnetic stirring welding method applied to nuclear power plant

    International Nuclear Information System (INIS)

    Hirano, Kenji; Watando, Masayuki; Morishige, Norio; Enoo, Kazuhide; Yasuda, Yuuji

    2002-01-01

    In the construction of a new nuclear power plant, carbon steel and stainless steel are used as base materials for the bottom liner plate of the Reinforced Concrete Containment Vessel (RCCV) to achieve maintenance-free requirements while securing sufficient structural strength. However, welding such different metals is difficult by ordinary methods. To overcome the difficulty, the automated Magnetic Stirring Welding (MSW) method, which can demonstrate good welding performance, was studied for practical use, and weldability tests showed good results. Based on the study, a new welding device for the MSW method was developed to apply it to weld joints of different materials, and it was put to practical use in part of a nuclear power plant. (author)

  19. Linear algebraic methods applied to intensity modulated radiation therapy.

    Science.gov (United States)

    Crooks, S M; Xing, L

    2001-10-01

    Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the beam weights, target homogeneity, and ratios of deposited energy can be given physical interpretations in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra, as applied to IMRT, is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.
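
    As a rough, hypothetical sketch of the kind of linear-algebra formulation this abstract refers to (not the authors' algorithm, and with invented toy values rather than clinical data), beam weights can be fitted to a dose prescription by solving the normal equations of a least-squares problem:

```python
# Toy 3-voxel, 2-beam least-squares beam-weight fit (all values hypothetical).
A = [[1.0, 0.2],   # dose deposited per unit weight of each beam in voxel 0
     [0.5, 0.5],   # ... in voxel 1
     [0.2, 1.0]]   # ... in voxel 2
p = [1.0, 1.0, 1.0]  # prescribed dose in each voxel

def solve_weights(A, p):
    # Minimize ||A w - p||^2 by solving the 2x2 normal equations A^T A w = A^T p.
    m, n = len(A), len(A[0])  # this direct solve assumes n == 2
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atp = [sum(A[k][i] * p[k] for k in range(m)) for i in range(n)]
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
    w0 = (AtA[1][1] * Atp[0] - AtA[0][1] * Atp[1]) / det
    w1 = (AtA[0][0] * Atp[1] - AtA[1][0] * Atp[0]) / det
    return [w0, w1]

weights = solve_weights(A, p)
```

    A clinical optimizer would add non-negativity constraints and organ-at-risk penalties; this sketch only shows the quadratic-form structure the abstract mentions.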

  20. Methods of applied mathematics with a software overview

    CERN Document Server

    Davis, Jon H

    2016-01-01

    This textbook, now in its second edition, provides students with a firm grasp of the fundamental notions and techniques of applied mathematics as well as the software skills to implement them. The text emphasizes the computational aspects of problem solving as well as the limitations and implicit assumptions inherent in the formal methods. Readers are also given a sense of the wide variety of problems in which the presented techniques are useful. Broadly organized around the theme of applied Fourier analysis, the treatment covers classical applications in partial differential equations and boundary value problems, and a substantial number of topics associated with Laplace, Fourier, and discrete transform theories. Some advanced topics are explored in the final chapters such as short-time Fourier analysis and geometrically based transforms applicable to boundary value problems. The topics covered are useful in a variety of applied fields such as continuum mechanics, mathematical physics, control theory, and si...

  1. Arctic Security

    DEFF Research Database (Denmark)

    Wang, Nils

    2013-01-01

    The inclusion of China, India, Japan, Singapore and Italy as permanent observers in the Arctic Council has increased the international status of this forum significantly. This chapter aims to explain the background for the increased international interest in the Arctic region through an analysis...

  2. Which DTW Method Applied to Marine Univariate Time Series Imputation

    OpenAIRE

    Phan , Thi-Thu-Hong; Caillault , Émilie; Lefebvre , Alain; Bigand , André

    2017-01-01

    International audience; Missing data are ubiquitous in all domains of applied science. Processing datasets containing missing values can lead to a loss of efficiency and to unreliable results, especially for large missing sub-sequence(s). The aim of this paper is therefore to build a framework for filling missing values in univariate time series and to perform a comparison of different similarity metrics used for the imputation task. This allows us to suggest the most suitable methods for the imp...
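
    The DTW (Dynamic Time Warping) similarity named in the title can be sketched with the classic dynamic program; this is a generic textbook implementation with absolute difference as the local cost, not the paper's imputation framework:

```python
def dtw_distance(a, b):
    # Classic O(len(a)*len(b)) DTW: D[i][j] is the cheapest alignment cost of
    # the first i points of `a` with the first j points of `b`.
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the best of the three admissible predecessor alignments.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

    For imputation, such a distance is typically used to find the most similar complete sub-sequence, whose values then fill the gap.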

  3. Applying Qualitative Research Methods to Narrative Knowledge Engineering

    OpenAIRE

    O'Neill, Brian; Riedl, Mark

    2014-01-01

    We propose a methodology for knowledge engineering for narrative intelligence systems, based on techniques used to elicit themes in qualitative methods research. Our methodology uses coding techniques to identify actions in natural language corpora, and uses these actions to create planning operators and procedural knowledge, such as scripts. In an iterative process, coders create a taxonomy of codes relevant to the corpus, and apply those codes to each element of that corpus. These codes can...

  4. APPLYING SPECTROSCOPIC METHODS ON ANALYSES OF HAZARDOUS WASTE

    OpenAIRE

    Dobrinić, Julijan; Kunić, Marija; Ciganj, Zlatko

    2000-01-01

    Abstract The paper presents results of measuring the content of heavy and other metals in waste samples from the hazardous waste disposal site of Sovjak near Rijeka. The preliminary design elaboration and the choice of the waste disposal remediation technology were preceded by sampling and physico-chemical analyses of the disposed waste, enabling its categorization. The following spectroscopic methods were applied to the metal content analysis: Atomic absorption spectroscopy (AAS) and plas...

  5. A new method of AHP applied to personal credit evaluation

    Institute of Scientific and Technical Information of China (English)

    JIANG Ming-hui; XIONG Qi; CAO Jing

    2006-01-01

    This paper presents a new negative judgment matrix that combines the advantages of the reciprocal judgment matrix and the fuzzy complementary judgment matrix, and then puts forth the properties of this new matrix. In view of these properties, this paper derives a clear sequencing formula for the new negative judgment matrix, which improves the sequencing principle of AHP. Finally, this new method is applied to personal credit evaluation to show its advantages of conciseness and swiftness.
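
    For context, a minimal sketch of classical AHP prioritization is shown below, using the geometric-mean method on an ordinary reciprocal judgment matrix; the paper's new negative judgment matrix and its improved sequencing formula are not reproduced here, and the example matrix is invented:

```python
import math

def ahp_weights(M):
    # Geometric-mean (logarithmic least squares) prioritization of a
    # reciprocal pairwise-comparison matrix: weight_i is proportional to the
    # geometric mean of row i, normalized to sum to 1.
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    s = sum(gm)
    return [g / s for g in gm]

# Consistent example: criterion A is twice B and four times C.
w = ahp_weights([[1.0, 2.0, 4.0],
                 [0.5, 1.0, 2.0],
                 [0.25, 0.5, 1.0]])
```

    For a perfectly consistent matrix such as this one, the geometric-mean weights coincide with the principal-eigenvector weights.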

  6. Novel biodosimetry methods applied to victims of the Goiania accident

    International Nuclear Information System (INIS)

    Straume, T.; Langlois, R.G.; Lucas, J.; Jensen, R.H.; Bigbee, W.L.; Ramalho, A.T.; Brandao-Mello, C.E.

    1991-01-01

    Two biodosimetric methods under development at the Lawrence Livermore National Laboratory were applied to five persons accidentally exposed to a 137Cs source in Goiania, Brazil. The methods used were somatic null mutations at the glycophorin A locus detected as missing proteins on the surface of blood erythrocytes and chromosome translocations in blood lymphocytes detected using fluorescence in-situ hybridization. Biodosimetric results obtained approximately 1 y after the accident using these new and largely unvalidated methods are in general agreement with results obtained immediately after the accident using dicentric chromosome aberrations. Additional follow-up of Goiania accident victims will (1) help provide the information needed to validate these new methods for use in biodosimetry and (2) provide independent estimates of dose

  7. Newton-Krylov methods applied to nonequilibrium radiation diffusion

    International Nuclear Information System (INIS)

    Knoll, D.A.; Rider, W.J.; Olsen, G.L.

    1998-01-01

    The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus eliminating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, as compared to an operator-split approach where nonlinearities are not converged within a time step.
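
    The matrix-free idea, approximating Jacobian-vector products by finite differences of the residual, can be illustrated on a toy 2x2 nonlinear system; this is a sketch only (a direct 2x2 solve stands in for the Krylov solver, and there is no Picard preconditioning), with an invented system whose root is (1, 2):

```python
# Toy nonlinear system F(u) = 0 with known root (1, 2); values illustrative only.
def F(u):
    return [u[0] ** 2 + u[1] - 3.0, u[0] + u[1] ** 2 - 5.0]

def jac_vec(F, u, v, eps=1e-7):
    # Matrix-free directional derivative: J(u) v ~= (F(u + eps*v) - F(u)) / eps,
    # so J is never formed or stored explicitly.
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(fp - fu) / eps for fp, fu in zip(Fp, Fu)]

def newton(F, u, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        Fu = F(u)
        if max(abs(f) for f in Fu) < tol:  # residual monitors convergence
            break
        # Two J*e_k products give the 2x2 Jacobian columns; solve J du = -F
        # by Cramer's rule (a Krylov iteration would replace this step).
        c0 = jac_vec(F, u, [1.0, 0.0])
        c1 = jac_vec(F, u, [0.0, 1.0])
        det = c0[0] * c1[1] - c1[0] * c0[1]
        du0 = (-Fu[0] * c1[1] + Fu[1] * c1[0]) / det
        du1 = (-Fu[1] * c0[0] + Fu[0] * c0[1]) / det
        u = [u[0] + du0, u[1] + du1]
    return u

root = newton(F, [1.5, 1.5])
```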

  8. Gust factor based on research aircraft measurements: A new methodology applied to the Arctic marine boundary layer

    DEFF Research Database (Denmark)

    Suomi, Irene; Lüpkes, Christof; Hartmann, Jörg

    2016-01-01

    There is as yet no standard methodology for measuring wind gusts from a moving platform. To address this, we have developed a method to derive gusts from research aircraft data. First we evaluated four different approaches, including Taylor's hypothesis of frozen turbulence, to derive the gust...... in unstable conditions (R2=0.52). The mean errors for all methods were low, from -0.02 to 0.05, indicating that wind gust factors can indeed be measured from research aircraft. Moreover, we showed that aircraft can provide gust measurements within the whole boundary layer, if horizontal legs are flown...
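
    For reference, the gust factor itself is a simple statistic: the ratio of the peak short-duration moving-average wind speed to the mean speed over the full averaging period. A minimal sketch, assuming 1 Hz samples and a 3-sample gust window (both assumptions for illustration, not the paper's aircraft methodology):

```python
def gust_factor(speeds, gust_window=3):
    # speeds: wind-speed samples at a fixed rate (assumed 1 Hz here).
    # Gust = maximum running mean over `gust_window` consecutive samples;
    # gust factor = gust / mean over the whole record.
    n = len(speeds)
    mean = sum(speeds) / n
    gust = max(sum(speeds[i:i + gust_window]) / gust_window
               for i in range(n - gust_window + 1))
    return gust / mean
```

    A steady record gives a factor of exactly 1; turbulence raises it above 1.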

  9. GPS surveying method applied to terminal area navigation flight experiments

    Energy Technology Data Exchange (ETDEWEB)

    Murata, M; Shingu, H; Satsushima, K; Tsuji, T; Ishikawa, K; Miyazawa, Y; Uchida, T [National Aerospace Laboratory, Tokyo (Japan)

    1993-03-01

    With the objective of evaluating the accuracy of new landing and navigation systems, such as the microwave landing guidance system and the global positioning satellite (GPS) system, flight experiments are being carried out using an experimental aircraft. This aircraft carries a GPS receiver, and its accuracy is evaluated by comparing the navigation results with reference trajectories estimated by a Kalman filter from laser tracking data on the aircraft. The GPS outputs position and speed information in an earth-centered, earth-fixed system called the World Geodetic System 1984 (WGS84). However, in order to compare the navigation results with output from a reference trajectory sensor or another navigation sensor, it is necessary to construct a high-precision reference coordinate system based on WGS84. A method that applies GPS phase interference measurement to this problem was proposed and actually used in analyzing flight experiment data. In a case where the method was applied to evaluating independent navigation accuracy, it was verified to be sufficiently effective and reliable, not only for navigation analysis but also for navigational operations. 12 refs., 10 figs., 5 tabs.

  10. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
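
    As one concrete, hypothetical illustration of classical likelihood-based model selection (the maximum-likelihood family mentioned above, not the report's decision-theoretic method), two candidate models can be ranked by the Gaussian AIC; the data below are invented:

```python
import math

def fit_linear(x, y):
    # Ordinary least squares for y = a + b*x (closed form).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def aic(rss, n, k):
    # Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k.
    return n * math.log(rss / n) + 2 * k

x = list(range(10))
y = [2.0 + 0.5 * xi + 0.1 * (-1) ** xi for xi in x]  # line plus small wiggle

# Candidate 1: mean-only model (k = 1 parameter).
ybar = sum(y) / len(y)
rss_mean = sum((yi - ybar) ** 2 for yi in y)
# Candidate 2: linear model (k = 2 parameters).
a, b = fit_linear(x, y)
rss_lin = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

aic_mean = aic(rss_mean, len(y), 1)
aic_lin = aic(rss_lin, len(y), 2)
best = "linear" if aic_lin < aic_mean else "mean"
```

    The extra-parameter penalty 2k is what keeps such criteria from always preferring the more complex candidate; the report's point is that with limited data even this can mislead, which motivates bringing model use into the selection criterion.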

  11. Analysis of concrete beams using applied element method

    Science.gov (United States)

    Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen

    2018-03-01

    The Applied Element Method (AEM) is a displacement-based method of structural analysis. Some of its features are similar to those of the Finite Element Method (FEM). In AEM, the structure is analysed by dividing it into several elements, as in FEM; however, the elements are connected by springs instead of nodes. In this paper, the background to AEM is discussed and the necessary equations are derived. To illustrate the application of AEM, it is used to analyse a plain concrete beam with fixed supports. The analysis is limited to 2-dimensional structures. It was found that the number of springs does not have much influence on the results. AEM could predict deflections and reactions with a reasonable degree of accuracy.
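
    The observation that the number of springs has little influence can be illustrated with a 1-D sketch: an axial bar idealized as rigid elements joined by normal springs of stiffness k = EA/d recovers the same tip displacement for any element count. The lumping and boundary handling here are simplifying assumptions for illustration, not the paper's 2-D beam model:

```python
def aem_tip_displacement(E, A, L, P, n_elem):
    # Idealize an axial bar (modulus E, area A, length L, end load P) as
    # n_elem rigid elements joined by normal springs, each spring lumping
    # the stiffness of one element length d: k = E*A/d (1-D simplification).
    d = L / n_elem
    k = E * A / d
    # The n_elem springs act in series between the support and the free end.
    return P * (n_elem / k)

def exact_tip_displacement(E, A, L, P):
    # Closed-form elastic solution for comparison: delta = P*L/(E*A).
    return P * L / (E * A)
```

    Because the series flexibilities sum to L/(EA) regardless of n_elem, refining the spring count changes nothing in this 1-D case.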

  12. The Lattice Boltzmann Method applied to neutron transport

    International Nuclear Information System (INIS)

    Erasmus, B.; Van Heerden, F. A.

    2013-01-01

    In this paper the applicability of the Lattice Boltzmann Method to neutron transport is investigated. One of the main features of the Lattice Boltzmann Method is the simultaneous discretization of the phase space of the problem, whereby particles are restricted to move on a lattice. An iterative solution of the operator form of the neutron transport equation is presented here, with the first collision source as the starting point of the iteration scheme. A full description of the discretization scheme is given, along with the quadrature set used for the angular discretization. An angular refinement scheme is introduced to increase the angular coverage of the problem phase space and to mitigate lattice ray effects. The method is applied to a model problem to investigate its applicability to neutron transport, and the results are compared to a reference solution calculated using MCNP. (authors)
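
    The lattice-restricted particle motion mentioned above can be sketched with a minimal 1-D, two-direction streaming step. This is a generic lattice Boltzmann ingredient, not the paper's neutron-transport scheme, and the reflecting walls are an arbitrary choice for the demo:

```python
def stream(f_right, f_left):
    # One lattice streaming step: each population hops to the neighbouring
    # site along its direction of travel. Populations hitting a wall are
    # reflected into the opposite direction (simplified boundary handling).
    n = len(f_right)
    new_r = [0.0] * n
    new_l = [0.0] * n
    for i in range(n - 1):
        new_r[i + 1] = f_right[i]
    for i in range(1, n):
        new_l[i - 1] = f_left[i]
    new_l[n - 1] += f_right[n - 1]  # right-mover reflects at the right wall
    new_r[0] += f_left[0]           # left-mover reflects at the left wall
    return new_r, new_l
```

    Streaming alone conserves the total population; a transport solver would interleave it with collision/source updates at each site.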

  13. Advanced methods for image registration applied to JET videos

    Energy Technology Data Exchange (ETDEWEB)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)

    2015-10-15

    Graphical abstract: - Highlights: • Development of an image registration method for JET IR and fast visible cameras. • Method based on SIFT descriptors and the coherent point drift points-set registration technique. • Method able to deal with extremely noisy images and very low luminosity images. • Computation time compatible with inter-shot analysis. - Abstract: The last years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera-based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses, while large ones may arise during disruptions. Some cameras show a correlation of image movement with changes of magnetic field strength. For deriving unaltered information from the videos and allowing correct interpretation, an image registration method based on highly distinctive scale-invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) points-set registration technique has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibration correction to videos collected by the JET wide-angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved able to deal with the images provided by this camera, which are frequently characterized by low contrast and a high level of blurring and noise.

  14. Classification of Specialized Farms Applying Multivariate Statistical Methods

    Directory of Open Access Journals (Sweden)

    Zuzana Hloušková

    2017-01-01

    Full Text Available The paper is aimed at the application of advanced multivariate statistical methods to the classification of cattle-breeding farming enterprises by their economic size. The advantage of the model is its ability to use a few selected indicators, compared with the complex methodology of the current classification model, which requires knowledge of the detailed structure of the herd turnover and of the cultivated crops. The output of the paper is intended to be applied within farm structure research focused on the future development of Czech agriculture. As the data source, the farming enterprises database for 2014 from the FADN CZ system has been used. The proposed predictive model exploits knowledge of the actual size classes of the farms tested. Outcomes of the multifactor classification by linear discriminant analysis show that Small farms (98% classified correctly and Large and Very Large enterprises (100% classified correctly can be identified reliably. Medium-size farms were classified correctly in only 58.11% of cases. Partial shortcomings of the presented approach appear when discriminating between Medium and Small farms.

  15. Metrological evaluation of characterization methods applied to nuclear fuels

    International Nuclear Information System (INIS)

    Faeda, Kelly Cristina Martins; Lameiras, Fernando Soares; Camarano, Denise das Merces; Ferreira, Ricardo Alberto Neto; Migliorini, Fabricio Lima; Carneiro, Luciana Capanema Silva; Silva, Egonn Hendrigo Carvalho

    2010-01-01

    In manufacturing nuclear fuel, characterizations are performed in order to assure the minimization of harmful effects. Uranium dioxide is the most used substance as nuclear reactor fuel because of many advantages, such as: high stability even when in contact with water at high temperatures, high fusion point, and high capacity to retain fission products. Several methods are used for the characterization of nuclear fuels, such as thermogravimetric analysis for the O/U ratio, the penetration-immersion method, helium pycnometry and mercury porosimetry for density and porosity, the BET method for specific surface, chemical analyses for relevant impurities, and the laser flash method for thermophysical properties. Specific tools are needed to control the diameter and sphericity of the microspheres and the properties of the coating layers (thickness, density, and degree of anisotropy). Other methods can also give information, such as scanning and transmission electron microscopy, X-ray diffraction, microanalysis, and secondary ion mass spectrometry for chemical analysis. The accuracy of measurement and the level of uncertainty of the resulting data are important. This work describes a general metrological characterization of some techniques applied to the characterization of nuclear fuel. Sources of measurement uncertainty were analyzed. The purpose is to summarize selected properties of UO2 that have been studied by CDTN in a program of fuel development for Pressurized Water Reactors (PWR). The selected properties are crucial for thermal-hydraulic codes used to study basic design accidents. The thermal characterization (thermal diffusivity and thermal conductivity) and the penetration-immersion method (density and open porosity) of UO2 samples were focused on. The thermal characterization of UO2 samples was determined by the laser flash method between room temperature and 448 K. The adaptive Monte Carlo Method was used to obtain the endpoints of the

  16. Nuclear and nuclear related analytical methods applied in environmental research

    International Nuclear Information System (INIS)

    Popescu, Ion V.; Gheboianu, Anca; Bancuta, Iulian; Cimpoca, G. V; Stihi, Claudia; Radulescu, Cristiana; Oros Calin; Frontasyeva, Marina; Petre, Marian; Dulama, Ioana; Vlaicu, G.

    2010-01-01

    Nuclear analytical methods can be used in environmental research activities such as water quality assessment, pesticide residues, global climatic change (transboundary), pollution, and remediation. Heavy metal pollution is a problem associated with areas of intensive industrial activity. In this work, the moss biomonitoring technique was employed to study atmospheric deposition in Dambovita County, Romania. Complementary nuclear and atomic analytical methods were also used: Neutron Activation Analysis (NAA), Atomic Absorption Spectrometry (AAS) and Inductively Coupled Plasma Atomic Emission Spectrometry (ICP-AES). These high-sensitivity analysis methods were used to determine the chemical composition of moss samples placed in different areas with different industrial pollution sources. The concentrations of Cr, Fe, Mn, Ni and Zn were determined. The concentration of Fe in the same samples was determined using all of these methods, and a very good agreement was obtained within statistical limits, which demonstrates the capability of these analytical methods to be applied to a large spectrum of environmental samples with consistent results. (authors)

  17. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  18. Analysis of Brick Masonry Wall using Applied Element Method

    Science.gov (United States)

    Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen

    2018-03-01

    The Applied Element Method (AEM) is a versatile tool for structural analysis. The analysis is done by discretising the structure, as in the case of the Finite Element Method (FEM); in AEM, however, elements are connected by a set of normal and shear springs instead of nodes. AEM is extensively used for the analysis of brittle materials. A brick masonry wall can be effectively analyzed within the framework of AEM, since the composite nature of masonry can be easily modelled using springs: the brick springs and mortar springs are assumed to be connected in series. The brick masonry wall is analyzed and the failure load is determined for different loading cases. The results were used to find the aspect ratio of brick that best strengthens a brick masonry wall.
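
    The series coupling of brick and mortar springs described above reduces to the textbook series-stiffness formula; a minimal sketch with hypothetical stiffness values (the spring definition k = E*A/t is a common AEM-style lumping, used here only for illustration):

```python
def spring_k(E, area, thickness):
    # Axial stiffness of the material strip a spring represents: k = E*A/t.
    return E * area / thickness

def series_stiffness(k_brick, k_mortar):
    # A brick spring and a mortar spring in series between adjacent elements:
    # 1/k_eq = 1/k_brick + 1/k_mortar.
    return 1.0 / (1.0 / k_brick + 1.0 / k_mortar)

# Hypothetical values: a brick strip and a thin mortar joint.
k_b = spring_k(E=20e9, area=0.01, thickness=0.10)  # 2.0e9 N/m
k_m = spring_k(E=2e9, area=0.01, thickness=0.01)   # 2.0e9 N/m
k_eq = series_stiffness(k_b, k_m)
```

    The series formula makes the softer constituent govern the joint stiffness, which is why the mortar properties dominate masonry behaviour in such models.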

  19. Thermally stimulated current method applied to highly irradiated silicon diodes

    CERN Document Server

    Pintilie, I; Pintilie, I; Moll, Michael; Fretwurst, E; Lindström, G

    2002-01-01

    We propose an improved method for the analysis of Thermally Stimulated Currents (TSC) measured on highly irradiated silicon diodes. The proposed TSC formula for the evaluation of a set of TSC spectra obtained at different reverse biases yields not only the concentrations of the electron and hole traps visible in the spectra but also gives an estimate of the concentration of defects which do not give rise to a peak in the 30-220 K TSC temperature range (very shallow or very deep levels). The method is applied to a diode irradiated with a neutron fluence of phi_n = 1.82x10^13 n/cm^2.

  20. Hybrid electrokinetic method applied to mix contaminated soil

    Energy Technology Data Exchange (ETDEWEB)

    Mansour, H.; Maria, E. [Dept. of Building Civil and Environmental Engineering, Concordia Univ., Montreal (Canada)

    2001-07-01

    Several industrial and municipal areas in North America are contaminated with heavy metals and petroleum products. This mixed contamination presents a particularly difficult remediation task when it occurs in clayey soil. The objective of this research was to find a method to clean up mixed-contaminated clayey soils; to this end, a multifunctional hybrid electrokinetic method was investigated. Clayey soil was contaminated with the heavy metals lead and nickel at a level of 1000 ppm and with phenanthrene (a PAH) at 600 ppm. An electrokinetic surfactant supply system was applied for the mobilization, transport, and removal of phenanthrene. A chelating agent (EDTA) was also supplied electrokinetically to mobilize the heavy metals. The studies were performed on 8 lab-scale electrokinetic cells. The mixed-contaminated clayey soil was subjected to a DC total voltage gradient of 0.3 V/cm. The supplied liquids (surfactant and EDTA) were introduced over different periods of time (22 days, 42 days) in order to optimize the removal of contaminants. The pH, electrical parameters, volume supplied, and volume discharged were monitored continuously during each experiment. At the end of these tests, the soil and catholyte were subjected to physico-chemical analysis. The paper discusses the results of the experiments, including the optimal energy use, the removal efficiency of phenanthrene, and the transport and removal of heavy metals. The results of this study can be applied to in-situ hybrid electrokinetic technology to remediate clayey sites contaminated with petroleum products mixed with heavy metals (e.g. manufactured gas plant sites). (orig.)

  1. A Multifactorial Analysis of Reconstruction Methods Applied After Total Gastrectomy

    Directory of Open Access Journals (Sweden)

    Oktay Büyükaşık

    2010-12-01

    Full Text Available Aim: The aim of this study was to evaluate the reconstruction methods applied after total gastrectomy in terms of postoperative symptomatology and nutrition. Methods: This retrospective study was conducted on 31 patients who underwent total gastrectomy for gastric cancer in the 2nd Clinic of General Surgery, SSK Ankara Training Hospital. Six different reconstruction methods were used, and the cases were analyzed in terms of age, sex, and postoperative complications. One biopsy specimen from the esophagus and two from the jejunum were taken via upper gastrointestinal endoscopy in all cases, and late-period morphological and microbiological changes were examined. Postoperative weight change, dumping symptoms, reflux esophagitis, solid/liquid dysphagia, early satiety, postprandial pain, diarrhea, and anorexia were assessed. Results: Of the 31 patients, 18 were male and 13 female; the youngest was 33 years old and the oldest 69 years old. Reconstruction without a pouch was performed in 22 cases and with a pouch in 9 cases. Early satiety, postprandial pain, dumping symptoms, diarrhea, and anemia were found most commonly in cases reconstructed without a pouch. The rate of bacterial colonization of the jejunal mucosa was identical in both groups. Reflux esophagitis was seen most commonly with omega esophagojejunostomy (EJ and least with Roux-en-Y, Tooley, and Tanner 19 EJ. Conclusion: Reconstruction with a pouch after total gastrectomy remains a preferable method. (The Medical Bulletin of Haseki 2010; 48:126-31

  2. Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective

    Science.gov (United States)

    Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith

    2010-01-01

    The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…

  3. Analytical methods applied to diverse types of Brazilian propolis

    Directory of Open Access Journals (Sweden)

    Marcucci Maria

    2011-06-01

    Full Text Available Abstract Propolis is a bee product composed mainly of plant resins and beeswax; its chemical composition therefore varies with the geographic and plant origins of these resins, as well as with the species of bee. Brazil is an important supplier of propolis on the world market and, although the green-colored propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen can be found. Propolis is usually consumed as an extract, so the type of solvent and the extractive procedures employed further affect its composition. Methods used for extraction; for analysis of the percentages of resins, wax, and insoluble material in crude propolis; and for determination of phenolic, flavonoid, amino acid, and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification, and quantification of Brazilian propolis components, and their relative strengths, are discussed, as well as direct-insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, the anti-parasitic, anti-viral/immune-stimulating, healing, anti-tumor, anti-inflammatory, antioxidant, and analgesic activities of diverse types of Brazilian propolis have been evaluated. The most common methods employed and overviews of their relative results are presented.

  4. Teaching organization theory for healthcare management: three applied learning methods.

    Science.gov (United States)

    Olden, Peter C

    2006-01-01

    Organization theory (OT) provides a way of seeing, describing, analyzing, understanding, and improving organizations based on patterns of organizational design and behavior (Daft 2004). It gives managers models, principles, and methods with which to diagnose and fix organization structure, design, and process problems. Health care organizations (HCOs) face serious problems such as fatal medical errors, harmful treatment delays, misuse of scarce nurses, costly inefficiency, and service failures. Some of health care managers' most critical work involves designing and structuring their organizations so their missions, visions, and goals can be achieved, and in some cases so their organizations can survive. Thus, it is imperative that graduate healthcare management programs develop effective approaches for teaching OT to students who will manage HCOs. Guided by principles of education, three applied teaching/learning activities/assignments were created to teach OT in a graduate healthcare management program. These educational methods develop students' competency with OT applied to HCOs. The teaching techniques in this article may be useful to faculty teaching graduate courses in organization theory and related subjects such as leadership, quality, and operations management.

  5. Six Sigma methods applied to cryogenic coolers assembly line

    Science.gov (United States)

    Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René

    2009-05-01

    Six Sigma methods have been applied to the manufacturing process of a rotary Stirling cooler, the RM2. The name of the project is NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project has been based on the DMAIC guideline, following five stages: Define, Measure, Analyse, Improve, Control. The objective has been set on the rate of coolers passing the performance test at the first attempt, with a goal value of 95%. A team has been gathered involving the people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) has been applied to the test bench, and the results of the R&R gage study show that measurement is one of the root causes of variability in the RM2 process. Two more root causes were identified by the team after process-mapping analysis: the regenerator filling factor and the cleaning procedure. The causes of measurement variability have been identified and eradicated, as shown by new R&R gage results. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process has been established after a new calibration process for the test bench, a new filling procedure for the regenerator, and an additional cleaning stage were implemented. The objective of 95% of coolers passing the performance test at the first attempt has been reached and maintained for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. The improvement in process capability has enabled the introduction of a sample-testing procedure before delivery.
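
    For context, the process-capability language used in Six Sigma work (variability versus specification limits) can be sketched with the standard Cp/Cpk statistics; the sample data and limits below are invented, not RM2 measurements:

```python
def process_capability(samples, lsl, usl):
    # Cp compares the specification width to the 6-sigma process spread;
    # Cpk additionally penalizes a process mean that sits off-centre.
    n = len(samples)
    mean = sum(samples) / n
    sigma = (sum((x - mean) ** 2 for x in samples) / (n - 1)) ** 0.5
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3.0 * sigma)
    return cp, cpk

# Hypothetical centred process with spec limits 9.0 .. 11.0.
cp, cpk = process_capability([9.8, 10.0, 10.2, 10.0, 9.9, 10.1], 9.0, 11.0)
```

    For a perfectly centred process Cp equals Cpk; reducing variability (the NoVa goal) raises both.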

  6. Metrological evaluation of characterization methods applied to nuclear fuels

    Energy Technology Data Exchange (ETDEWEB)

    Faeda, Kelly Cristina Martins; Lameiras, Fernando Soares; Camarano, Denise das Merces; Ferreira, Ricardo Alberto Neto; Migliorini, Fabricio Lima; Carneiro, Luciana Capanema Silva; Silva, Egonn Hendrigo Carvalho, E-mail: kellyfisica@gmail.co, E-mail: fernando.lameiras@pq.cnpq.b, E-mail: dmc@cdtn.b, E-mail: ranf@cdtn.b, E-mail: flmigliorini@hotmail.co, E-mail: lucsc@hotmail.co, E-mail: egonn@ufmg.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2010-07-01

    In manufacturing nuclear fuel, characterizations are performed to ensure that harmful effects are minimized. Uranium dioxide is the substance most widely used as nuclear reactor fuel because of its many advantages, such as high stability even in contact with water at high temperatures, a high melting point, and a high capacity to retain fission products. Several methods are used for the characterization of nuclear fuels: thermogravimetric analysis for the O/U ratio; the penetration-immersion method, helium pycnometry, and mercury porosimetry for density and porosity; the BET method for the specific surface; chemical analyses for relevant impurities; and the laser flash method for thermophysical properties. Specific tools are needed to control the diameter and sphericity of the microspheres and the properties of the coating layers (thickness, density, and degree of anisotropy). Other methods can also provide information, such as scanning and transmission electron microscopy, X-ray diffraction, microanalysis, and secondary ion mass spectrometry for chemical analysis. The accuracy of measurement and the level of uncertainty of the resulting data are important. This work describes a general metrological characterization of some techniques applied to the characterization of nuclear fuel. Sources of measurement uncertainty were analyzed. The purpose is to summarize selected properties of UO{sub 2} that have been studied by CDTN in a program of fuel development for Pressurized Water Reactors (PWR). The selected properties are crucial inputs for the thermal-hydraulic codes used to study design basis accidents. The focus is on the thermal characterization (thermal diffusivity and thermal conductivity) and the penetration-immersion method (density and open porosity) applied to UO{sub 2} samples. The thermal characterization of UO{sub 2} samples was determined by the laser flash method between room temperature and 448 K. The adaptive Monte Carlo Method was used to obtain the endpoints of
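    The truncated final sentence refers to the adaptive Monte Carlo method of GUM Supplement 1, which obtains coverage-interval endpoints by sampling the measurement model. The sketch below is a simplified, non-adaptive illustration of that idea; the model k = α·ρ·c_p (conductivity from laser-flash diffusivity, density, and specific heat) and all numerical values are hypothetical stand-ins, not CDTN data:

    ```python
    import random

    def coverage_endpoints(model, draws, n, p=0.95):
        """Monte Carlo uncertainty propagation (GUM Supplement 1 style):
        sample the inputs, push them through the measurement model, and
        read the endpoints of the probabilistically symmetric coverage
        interval off the sorted output distribution."""
        ys = sorted(model(*draws()) for _ in range(n))
        lo = ys[int(((1 - p) / 2) * n)]
        hi = ys[int(((1 + p) / 2) * n) - 1]
        return lo, hi

    # Hypothetical model: thermal conductivity k = alpha * rho * cp,
    # with Gaussian input distributions for the three measured inputs.
    random.seed(1)
    draws = lambda: (random.gauss(2.9e-6, 0.1e-6),   # alpha / m^2 s^-1
                     random.gauss(10400.0, 50.0),    # rho   / kg m^-3
                     random.gauss(280.0, 5.0))       # cp    / J kg^-1 K^-1
    model = lambda alpha, rho, cp: alpha * rho * cp
    lo, hi = coverage_endpoints(model, draws, 200_000)
    print(f"k = [{lo:.2f}, {hi:.2f}] W m^-1 K^-1")
    ```

    The adaptive variant of the method keeps doubling the number of trials until the interval endpoints stabilize to the required numerical tolerance.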

  7. Applying systems ergonomics methods in sport: A systematic review.

    Science.gov (United States)

    Hulme, Adam; Thompson, Jason; Plant, Katherine L; Read, Gemma J M; Mclean, Scott; Clacy, Amanda; Salmon, Paul M

    2018-04-16

    As sports systems become increasingly more complex, competitive, and technology-centric, there is a greater need for systems ergonomics methods to consider the performance, health, and safety of athletes in context with the wider settings in which they operate. Therefore, the purpose of this systematic review was to identify and critically evaluate studies which have applied a systems ergonomics research approach in the context of sports performance and injury management. Five databases (PubMed, Scopus, ScienceDirect, Web of Science, and SPORTDiscus) were searched for the dates 01 January 1990 to 01 August 2017, inclusive, for original peer-reviewed journal articles and conference papers. Reported analyses were underpinned by a recognised systems ergonomics method, and study aims were related to the optimisation of sports performance (e.g. communication, playing style, technique, tactics, or equipment), and/or the management of sports injury (i.e. identification, prevention, or treatment). A total of seven articles were identified. Two articles were focussed on understanding and optimising sports performance, whereas five examined sports injury management. The methods used were the Event Analysis of Systemic Teamwork, Cognitive Work Analysis (the Work Domain Analysis Abstraction Hierarchy), Rasmussen's Risk Management Framework, and the Systems Theoretic Accident Model and Processes method. The individual sport application was distance running, whereas the team sports contexts examined were cycling, football, Australian Football League, and rugby union. The included systems ergonomics applications were highly flexible, covering both amateur and elite sports contexts. The studies were rated as valuable, providing descriptions of injury controls and causation, the factors influencing injury management, the allocation of responsibilities for injury prevention, as well as the factors and their interactions underpinning sports performance. Implications and future

  8. The virtual fields method applied to spalling tests on concrete

    Directory of Open Access Journals (Sweden)

    Forquin P.

    2012-08-01

    For a decade, spalling techniques based on a metallic Hopkinson bar put in contact with a concrete sample have been widely employed to characterize the dynamic tensile strength of concrete at strain rates ranging from a few tens to two hundred s−1. However, the usual processing method, based mainly on the velocity profile measured on the rear free surface of the sample (Novikov formula), remains quite basic, and identification of the whole softening behaviour of the concrete is out of reach. In the present paper a new processing method is proposed based on the Virtual Fields Method (VFM). First, a digital high-speed camera records images of a grid glued on the specimen. Next, full-field measurements are used to obtain the axial displacement field at the surface of the specimen. Finally, a specific virtual field is defined in the VFM equation so that the acceleration map can be used as an alternative ‘load cell’. Applied to three spalling tests, this method allowed the Young’s modulus to be identified during the test. It was shown that this modulus is constant during the initial compressive part of the test and decreases in the tensile part when micro-damage appears. It was also shown that in such a simple inertial test it is possible to reconstruct average axial stress profiles using only the acceleration data. From these, local stress-strain curves can be constructed and a tensile strength value derived.
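    In one dimension, the "acceleration as load cell" idea reduces to integrating Newton's second law from the stress-free end of the bar: σ(x, t) = ρ ∫₀ˣ a(ξ, t) dξ. A minimal sketch of that reconstruction; the sample length, density, and acceleration values are invented for illustration:

    ```python
    def mean_axial_stress(accel, dx, density):
        """Reconstruct the average axial stress profile sigma(x) of a 1-D
        bar from a measured acceleration field, integrating Newton's second
        law from the free end (sigma = 0 at x = 0) by the trapezoid rule:
        sigma(x) = rho * integral_0^x a(xi) dxi."""
        stress = [0.0]
        for i in range(1, len(accel)):
            stress.append(stress[-1] + density * 0.5 * (accel[i - 1] + accel[i]) * dx)
        return stress

    # Uniform acceleration of 1000 m/s^2 along a 0.1 m concrete sample
    rho = 2400.0            # density, kg/m^3
    a = [1000.0] * 11       # acceleration at 11 stations, dx = 0.01 m
    sigma = mean_axial_stress(a, 0.01, rho)
    print(sigma[-1])        # expect rho * a * L = 2400 * 1000 * 0.1 Pa
    ```

    In the actual method the acceleration field comes from double time-differentiation of the full-field displacement measurements, station by station.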

  9. The Arctic

    International Nuclear Information System (INIS)

    Petersen, H.; Meltofte, H.; Rysgaard, S.; Rasch, M.; Jonasson, S.; Christensen, T.R.; Friborg, T.; Soegaard, H.; Pedersen, S.A.

    2001-01-01

    Global climate change in the Arctic is a growing concern. Research has already documented pronounced changes, and models predict that temperature increases from anthropogenic influences could be considerably higher than the global average. The impacts of climate change on Arctic ecosystems are complex and difficult to predict because of the many interactions within ecosystems and between many concurrently changing environmental variables. Despite the global consequences of change in the Arctic climate, the monitoring of basic abiotic and biotic parameters is not adequate to assess the impact of global climate change. The uneven geographical distribution of present monitoring stations in the Arctic limits the ability to understand the climate system. The impact of previous variations and of potential future changes on ecosystems is not well understood and needs to be addressed. At this point there is no scientific consensus on how much of the current change is due to anthropogenic influences and how much to natural variation. Regardless of the cause, there is a need to investigate and assess current observations and their effects on the Arctic. In this chapter, examples from both terrestrial and marine ecosystems are given, drawn from ongoing monitoring and research projects. (LN)

  10. Simulation of optimal arctic routes using a numerical sea ice model based on an ice-coupled ocean circulation method

    OpenAIRE

    Jong-Ho Nam; Inha Park; Ho Jin Lee; Mi Ok Kwon; Kyungsik Choi; Young-Kyo Seo

    2013-01-01

    Ever since the Arctic region has opened its mysterious passage to mankind, continuous attempts to take advantage of its fastest route across the region has been made. The Arctic region is still covered by thick ice and thus finding a feasible navigating route is essential for an economical voyage. To find the optimal route, it is necessary to establish an efficient transit model that enables us to simulate every possible route in advance. In this work, an enhanced algorithm to determine the o...

  11. Flood Hazard Mapping by Applying Fuzzy TOPSIS Method

    Science.gov (United States)

    Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.

    2017-12-01

    Many technical methods exist for integrating the various factors involved in flood hazard mapping. The purpose of this study is to propose a methodology for integrated flood hazard mapping using MCDM (Multi-Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, to apply MCDM to flood risk assessment, maximum flood depth, maximum velocity, and maximum travel time are taken as the criteria, and the grid elements to be mapped are the alternatives. Finding the alternative closest to an ideal value is an appropriate way to assess the flood risk of a large number of element units (alternatives) on the basis of several flood indices. Therefore TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices for flood hazard mapping (maximum flood depth, maximum velocity, and maximum travel time) carry uncertainty, since the simulation results vary with the flood scenario and the topographical conditions. This ambiguity in the criteria can propagate into the flood hazard map. To handle such ambiguity and uncertainty, fuzzy logic, which can represent ambiguous expressions, is introduced. In this paper we produced a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identified the areas with the highest hazard grade in the resulting integrated flood hazard map and compared them with those indicated in the existing flood risk maps. We also expect that if the flood hazard mapping methodology suggested in this paper is applied to the production of current flood risk maps, it will be possible to create maps that rank hazard areas by priority and carry more varied and important information than before. Keywords: Flood hazard map; levee break analysis; 2D analysis; MCDM; Fuzzy TOPSIS
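    The crisp core of TOPSIS can be sketched compactly. The fuzzy variant used in the paper replaces the crisp entries with triangular fuzzy numbers but keeps the same ideal-distance structure; the grid-cell values and weights below are invented for illustration:

    ```python
    import math

    def topsis(matrix, weights, benefit):
        """Crisp TOPSIS: rank alternatives by relative closeness to the
        ideal solution. benefit[j] is True when a larger value of criterion
        j pushes an alternative toward the ideal of the ranking, False when
        a smaller value does."""
        ncols = len(weights)
        norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
        v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
        ideal = [max(col) if benefit[j] else min(col)
                 for j, col in enumerate(zip(*v))]
        worst = [min(col) if benefit[j] else max(col)
                 for j, col in enumerate(zip(*v))]
        scores = []
        for row in v:
            d_pos = math.dist(row, ideal)   # distance to ideal solution
            d_neg = math.dist(row, worst)   # distance to anti-ideal solution
            scores.append(d_neg / (d_pos + d_neg))
        return scores

    # Three grid cells scored on max depth (m), max velocity (m/s), and
    # travel time (min). Larger depth/velocity = more hazardous; travel
    # time is inverted, since a shorter arrival time is worse.
    cells = [[2.1, 1.5, 30.0],
             [0.4, 0.3, 120.0],
             [1.0, 0.8, 60.0]]
    scores = topsis(cells, [0.4, 0.3, 0.3], [True, True, False])
    print(max(range(3), key=scores.__getitem__))  # index of most hazardous cell
    ```

    Each cell's closeness score then maps directly to a hazard grade on the final map.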

  12. Applying sociodramatic methods in teaching transition to palliative care.

    Science.gov (United States)

    Baile, Walter F; Walters, Rebecca

    2013-03-01

    We introduce the technique of sociodrama, describe its key components, and illustrate how this simulation method was applied in a workshop format to address the challenge of discussing transition to palliative care. We describe how warm-up exercises prepared 15 learners who provide direct clinical care to patients with cancer for a dramatic portrayal of this dilemma. We then show how small-group brainstorming led to the creation of a challenging scenario wherein highly optimistic family members of a 20-year-old young man with terminal acute lymphocytic leukemia responded to information about the lack of further anticancer treatment with anger and blame toward the staff. We illustrate how the facilitators, using sociodramatic techniques of doubling and role reversal, helped learners to understand and articulate the hidden feelings of fear and loss behind the family's emotional reactions. By modeling effective communication skills, the facilitators demonstrated how key communication skills, such as empathic responses to anger and blame and using "wish" statements, could transform the conversation from one of conflict to one of problem solving with the family. We also describe how we set up practice dyads to give the learners an opportunity to try out new skills with each other. An evaluation of the workshop and similar workshops we conducted is presented. Copyright © 2013 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.

  13. Applying multi-resolution numerical methods to geodynamics

    Science.gov (United States)

    Davies, David Rhodri

    Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled
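    The grid-adaptation idea described above, refining only where the solution gradient is steep, can be sketched in one dimension. This toy bisection loop illustrates h-refinement in general, not the goal-oriented finite-element machinery of the thesis; the profile and tolerance are invented:

    ```python
    def refine(xs, f, tol, max_passes=10):
        """Gradient-guided h-refinement of a 1-D mesh: bisect every
        interval whose jump in f exceeds tol, and repeat until every
        interval resolves f to within the tolerance."""
        for _ in range(max_passes):
            new_xs, refined = [xs[0]], False
            for a, b in zip(xs, xs[1:]):
                if abs(f(b) - f(a)) > tol:
                    new_xs.append(0.5 * (a + b))   # bisect steep interval
                    refined = True
                new_xs.append(b)
            xs = new_xs
            if not refined:
                return xs
        return xs

    # A sharp "plume-like" profile: refinement should cluster near x = 0.5
    f = lambda x: 1.0 / (1.0 + 100.0 * (x - 0.5) ** 2)
    mesh = refine([i / 10 for i in range(11)], f, tol=0.05)
    print(len(mesh))
    ```

    The resulting mesh keeps the coarse spacing in the smooth background flow while concentrating points around the localized feature, which is the efficiency gain the thesis exploits at much larger scale.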

  14. Analytic methods in applied probability in memory of Fridrikh Karpelevich

    CERN Document Server

    Suhov, Yu M

    2002-01-01

    This volume is dedicated to F. I. Karpelevich, an outstanding Russian mathematician who made important contributions to applied probability theory. The book contains original papers focusing on several areas of applied probability and its uses in modern industrial processes, telecommunications, computing, mathematical economics, and finance. It opens with a review of Karpelevich's contributions to applied probability theory and includes a bibliography of his works. Other articles discuss queueing network theory, in particular, in heavy traffic approximation (fluid models). The book is suitable

  15. Reactor calculation in coarse mesh by finite element method applied to matrix response method

    International Nuclear Information System (INIS)

    Nakata, H.

    1982-01-01

    The finite element method is applied to the solution of a modified formulation of the matrix-response method, with the aim of performing reactor calculations on a coarse mesh. Good results are obtained with short running times. The method is applicable to problems where heterogeneity is predominant and to depletion problems on coarse meshes, where the burnup varies within a single coarse mesh and the cross sections therefore vary spatially as depletion proceeds. (E.G.) [pt

  16. Mapping pan-Arctic CH4 emissions using an adjoint method by integrating process-based wetland and lake biogeochemical models and atmospheric CH4 concentrations

    Science.gov (United States)

    Tan, Z.; Zhuang, Q.; Henze, D. K.; Frankenberg, C.; Dlugokencky, E. J.; Sweeney, C.; Turner, A. J.

    2015-12-01

    Understanding CH4 emissions from wetlands and lakes is critical for estimating the Arctic carbon balance under rapidly warming climatic conditions. To date, our knowledge of these two CH4 sources is built almost solely on upscaling discontinuous measurements from limited areas to the whole region. Many studies have indicated that the controls on CH4 emissions from wetlands and lakes, including soil moisture, lake morphology, and substrate content and quality, are notoriously heterogeneous, so the accuracy of such simple estimates is questionable. Here we apply a high-spatial-resolution atmospheric inverse model (nested-grid GEOS-Chem Adjoint) over the Arctic, integrating SCIAMACHY and NOAA/ESRL CH4 measurements to constrain the CH4 emissions estimated with process-based wetland and lake biogeochemical models. Our modeling experiments, using different wetland CH4 emission schemes and both satellite and surface measurements, show that the total amount of CH4 emitted from Arctic wetlands is well constrained, but the spatial distribution of the emissions is sensitive to the priors. For CH4 emissions from lakes, our high-resolution inversion shows that the models overestimate CH4 emissions in Alaskan coastal lowlands and East Siberian lowlands. Our study also indicates that the precision and coverage of the measurements need to be improved to achieve more accurate high-resolution estimates.
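    At its core, an adjoint-based inversion updates emissions by descending the gradient of a model-observation misfit, with the gradient supplied by the adjoint of the forward operator (for a linear operator, its transpose). The toy below is a deliberately simplified stand-in for a system like GEOS-Chem Adjoint; the sensitivity matrix and emission values are invented:

    ```python
    def adjoint_invert(H, y_obs, n_src, steps=5000, lr=0.01):
        """Toy variational inversion: for a linear transport operator H,
        the adjoint (transpose) gives the gradient of the data misfit
        J(x) = 1/2 ||Hx - y_obs||^2, and gradient descent updates the
        emission vector x."""
        x = [0.0] * n_src
        for _ in range(steps):
            r = [sum(H[i][j] * x[j] for j in range(n_src)) - y_obs[i]
                 for i in range(len(H))]                        # residual Hx - y
            g = [sum(H[i][j] * r[i] for i in range(len(H)))     # gradient H^T r
                 for j in range(n_src)]
            x = [xj - lr * gj for xj, gj in zip(x, g)]
        return x

    # Hypothetical sensitivities of 3 observations to 2 source regions
    H = [[0.8, 0.1],
         [0.3, 0.6],
         [0.1, 0.9]]
    x_true = [5.0, 2.0]   # "true" wetland and lake emissions
    y = [sum(h * xt for h, xt in zip(row, x_true)) for row in H]
    x_hat = adjoint_invert(H, y, 2)
    print([round(v, 3) for v in x_hat])  # → [5.0, 2.0]
    ```

    Real systems add prior (background) terms and observation-error covariances to this misfit, which is what makes the retrieved spatial distribution sensitive to the priors, as the abstract notes.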

  17. Information security of power enterprises of North-Arctic region

    Science.gov (United States)

    Sushko, O. P.

    2018-05-01

    Information technologies play a central role in the technological security of energy enterprises, which is in turn a component of the economic security of the northern Arctic region as a whole. Applying instruments and methods of information-protection modelling to the business processes of energy enterprises in the region (such as Arkhenergo and Komienergo), the authors analysed and identified the most frequent information security risks. Using the analytic hierarchy process with weighting-factor estimations, the information risks of the enterprises' technological processes were ranked. The economic estimation of information security within an energy enterprise then uses these weighting-factor-adjusted variables (risks). Investments in the information security systems of energy enterprises in the northern Arctic region cover the installation of the necessary security elements, while current operating expenses on business-process protection systems are weighed against the economic damage that would otherwise materialize.
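    Risk ranking with the analytic hierarchy process rests on extracting a priority (weight) vector from a pairwise-comparison matrix, conventionally as its principal eigenvector. A minimal sketch; the 1-9 scale comparison values and the risk labels are hypothetical, not the paper's data:

    ```python
    def ahp_weights(P, iters=100):
        """Analytic hierarchy process priority vector: the principal
        eigenvector of the pairwise-comparison matrix P, obtained by
        power iteration and normalised to sum to one."""
        n = len(P)
        w = [1.0 / n] * n
        for _ in range(iters):
            w = [sum(P[i][j] * w[j] for j in range(n)) for i in range(n)]
            s = sum(w)
            w = [x / s for x in w]
        return w

    # Hypothetical pairwise comparison of three information-security risks:
    # unauthorised access vs. malware vs. insider error (Saaty 1-9 scale)
    P = [[1.0,   3.0, 5.0],
         [1/3.0, 1.0, 2.0],
         [1/5.0, 1/2.0, 1.0]]
    w = ahp_weights(P)
    print([round(x, 3) for x in w])
    ```

    The resulting weights order the risks and feed the weighting-factor-adjusted economic estimation described in the abstract; a full AHP treatment would also check the consistency ratio of P.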

  18. Moisture transport and Atmospheric circulation in the Arctic

    Science.gov (United States)

    Woods, Cian; Caballero, Rodrigo

    2013-04-01

    Cyclones are an important feature of mid-latitude and Arctic climates. They are a principal transporter of warm, moist air from the subtropics to the poles. The Arctic winter is dominated by highly stable conditions for most of the season, owing to a low-level temperature inversion caused by a radiation deficit at the surface. This temperature inversion is a ubiquitous feature of the Arctic winter climate and can persist for weeks at a time. The inversion can be destroyed during the passage of a cyclone advecting moisture and warming the surface. In the absence of an inversion, and in the presence of this warm, moist air mass, clouds form readily and thereby influence the radiative processes and energy budget of the Arctic. Wind stress from a passing cyclone also tends to break up the ice sheet through induced rotation, deformation, and divergence at the surface. For these reasons, we wish to understand the mechanisms of warm, moist advection into the Arctic from lower latitudes and how these mechanisms are controlled. The body of work in this area has been growing and gaining momentum in recent years (Stramler et al. 2011; Morrison et al. 2012; Screen et al. 2011). However, there has been no in-depth analysis of the underlying dynamics to date. Improving our understanding of Arctic dynamics becomes increasingly important in the context of climate change. Many models agree that a northward shift of the storm track is likely in the future, which could have large impacts in the Arctic, particularly on the sea ice. A climatology of six-day forward and backward trajectories starting from multiple heights around 70 N is constructed using the 22-year ECMWF reanalysis dataset (ERA-INT). The data are 6-hourly, with a horizontal resolution of 1 degree on 16 pressure levels. Our methodology is inspired by previous studies examining flow patterns through cyclones in the mid-latitudes. We apply these earlier mid-latitude methods in the

  19. Simulation of optimal arctic routes using a numerical sea ice model based on an ice-coupled ocean circulation method

    Directory of Open Access Journals (Sweden)

    Jong-Ho Nam

    2013-06-01

    Ever since the Arctic region opened its mysterious passage to mankind, continuous attempts have been made to take advantage of the fastest route across the region. The Arctic is still covered by thick ice, and thus finding a feasible navigation route is essential for an economical voyage. To find the optimal route, it is necessary to establish an efficient transit model that makes it possible to simulate every candidate route in advance. In this work, an enhanced algorithm for determining the optimal route in the Arctic region is introduced. A transit model is developed based on simulated sea ice and environmental data numerically modeled for the Arctic. By integrating the simulated data into the transit model, further applications such as route simulation, cost estimation, or hindcasting can easily be performed. An interactive simulation system that determines the optimal Arctic route using the transit model is developed. The simulation of optimal routes is carried out and the validity of the results is discussed.
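    A transit model of this kind typically discretizes the sea into grid cells with ice-dependent attainable speeds and then searches for the minimum-time path. A minimal sketch using Dijkstra's algorithm; the grid, speed values, and 4-neighbour connectivity are illustrative assumptions, not the paper's model:

    ```python
    import heapq

    def fastest_route(speed, start, goal):
        """Dijkstra search over a lattice of grid cells: the edge cost is
        the time to cross a cell, taken as 1 / attainable speed (ice-free
        cells are fast, heavy-ice cells slow, speed 0 = impassable)."""
        rows, cols = len(speed), len(speed[0])
        dist, prev = {start: 0.0}, {}
        pq = [(0.0, start)]
        while pq:
            t, (r, c) = heapq.heappop(pq)
            if (r, c) == goal:
                break
            if t > dist[(r, c)]:
                continue                      # stale queue entry
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and speed[nr][nc] > 0:
                    nt = t + 1.0 / speed[nr][nc]
                    if nt < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)], prev[(nr, nc)] = nt, (r, c)
                        heapq.heappush(pq, (nt, (nr, nc)))
        path, node = [], goal
        while node != start:                  # walk predecessors back
            path.append(node)
            node = prev[node]
        return [start] + path[::-1], dist[goal]

    # Hypothetical attainable speed (knots) from an ice model; 0 = solid ice
    ice = [[10, 10,  0, 10],
           [ 0, 10,  0, 10],
           [ 0, 10, 10, 10],
           [ 0,  0, 10, 10]]
    route, hours = fastest_route(ice, (0, 0), (3, 3))
    print(route)
    ```

    Feeding simulated sea-ice fields into the speed grid, as the paper's transit model does, lets the same search also support cost estimation and hindcasting.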

  20. State of the Arctic Environment

    International Nuclear Information System (INIS)

    1990-01-01

    The Arctic environment, covering about 21 million km², is here taken to be the area north of the Arctic Circle. General biological and physical features of the terrestrial and freshwater environments of the Arctic are briefly described, but most attention is given to the marine part, which constitutes about two-thirds of the total Arctic environment. General oceanography and morphological characteristics are included; for example, the continental shelf surrounding the Arctic deep-water basins covers approximately 36% of the surface area of Arctic waters but contains only 2% of the total water mass. A blowout accident may release thousands of tons of oil per day and last for months. Such accidents are statistically very rare, but their magnitude underlines the necessity of an efficient oil spill contingency as well as sound safety and quality assurance procedures. Contingency plans should be coordinated and regularly evaluated through simulated and practical tests of performance. Arctic conditions demand measures different from those otherwise used for oil spill prevention and clean-up. New concepts, or optimization of existing mechanical equipment, are necessary. Chemical and thermal methods should be evaluated for efficiency and possible environmental effects. Because of both the regular discharge of oil-contaminated drill cuttings and the possibility of a blowout or other spill, drilling operations in biologically sensitive areas may be restricted to the less sensitive parts of the year. 122 refs., 8 figs., 8 tabs

  1. Valuing national effects of digital health investments: an applied method.

    Science.gov (United States)

    Hagens, Simon; Zelmer, Jennifer; Frazer, Cassandra; Gheorghiu, Bobby; Leaver, Chad

    2015-01-01

    This paper describes an approach that has been applied to value the national outcomes of investments in digital health by federal, provincial, and territorial governments, clinicians, and healthcare organizations. Hypotheses are used to develop a model, which is revised and populated based upon the available evidence. Quantitative national estimates and qualitative findings are produced and validated through structured peer-review processes. This methodology has been applied in four studies since 2008.

  2. Dose rate reduction method for NMCA applied BWR plants

    International Nuclear Information System (INIS)

    Nagase, Makoto; Aizawa, Motohiro; Ito, Tsuyoshi; Hosokawa, Hideyuki; Varela, Juan; Caine, Thomas

    2012-09-01

    BRAC (BWR Radiation Assessment and Control) dose rate is used as an indicator of the incorporation of activated corrosion products into BWR recirculation piping, which is known to be a significant contributor to the dose received by workers during refueling outages. To reduce the radiation exposure of workers during an outage, it is desirable to keep BRAC dose rates as low as possible. After HWC was adopted to reduce IGSCC, a BRAC dose rate increase was observed in many plants. As a countermeasure to these rapid dose rate increases under HWC conditions, Zn injection was widely adopted in the United States and Europe, resulting in a reduction of BRAC dose rates. However, BRAC dose rates in several plants remain high, prompting the industry to continue investigating methods to achieve further reductions. In recent years a large portion of the BWR fleet has adopted NMCA (NobleChem TM ) to enhance the hydrogen injection effect and suppress SCC. After NMCA, and especially OLNC (On-Line NobleChem TM ), BRAC dose rates were observed to decrease. In some OLNC-applied BWR plants this reduction was observed year after year, reaching a new, lower equilibrium level. These dose rate reduction trends suggest that further dose reduction might be obtained by combining Pt and Zn injection. Laboratory experiments and in-plant tests were therefore carried out to evaluate the effect of Pt and Zn on Co-60 deposition behaviour. First, laboratory experiments were conducted to study the effect of noble metal deposition on Co deposition on stainless steel surfaces. Polished type 316 stainless steel coupons were prepared, and some of them were OLNC-treated in the test loop before the Co deposition test. The water chemistry conditions simulating HWC were as follows: dissolved oxygen, hydrogen, and hydrogen peroxide were below 5 ppb, 100 ppb, and 0 ppb (no addition), respectively. Zn was injected to target a concentration of 5 ppb. The test was conducted for up to 1500 hours at 553 K. Test

  3. Chapter 8: US geological survey Circum-Arctic Resource Appraisal (CARA): Introduction and summary of organization and methods

    Science.gov (United States)

    Charpentier, R.R.; Gautier, D.L.

    2011-01-01

    The USGS has assessed undiscovered petroleum resources in the Arctic through geological mapping, basin analysis, and quantitative assessment. The new map compilation provided the base from which geologists subdivided the Arctic for burial history modelling and quantitative assessment. The CARA was a probabilistic, geologically based study that used existing USGS methodology, modified somewhat for the circumstances of the Arctic. The assessment relied heavily on analogue modelling, with numerical input expressed as lognormal distributions of the sizes and numbers of undiscovered accumulations. Probabilistic results for individual assessment units were statistically aggregated taking geological dependencies into account. Fourteen papers in this Geological Society volume present summaries of various aspects of the CARA. © 2011 The Geological Society of London.
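    Aggregating probabilistic results across assessment units while honouring geological dependencies is commonly done by Monte Carlo sampling of correlated lognormals. The sketch below is a hedged illustration, not the CARA implementation: the unit parameters and the single shared-factor dependency structure are invented.

    ```python
    import math
    import random

    def aggregate(mus, sigmas, rho, trials=100_000):
        """Monte Carlo aggregation of assessment units whose undiscovered
        volumes are lognormal. A shared standard-normal factor z_c induces
        pairwise correlation rho between units (rho = 0: independent,
        rho = 1: perfect geological dependency). Returns the 5th and 95th
        percentiles of the aggregate total."""
        totals = []
        for _ in range(trials):
            z_c = random.gauss(0, 1)                 # common geological factor
            tot = 0.0
            for mu, sg in zip(mus, sigmas):
                z = math.sqrt(rho) * z_c + math.sqrt(1 - rho) * random.gauss(0, 1)
                tot += math.exp(mu + sg * z)
            totals.append(tot)
        totals.sort()
        return totals[int(0.05 * trials)], totals[int(0.95 * trials)]

    random.seed(7)
    mus, sigmas = [1.0, 0.5, 1.5], [0.8, 0.6, 1.0]   # hypothetical unit parameters
    p_ind = aggregate(mus, sigmas, 0.0)              # independent units
    p_dep = aggregate(mus, sigmas, 0.9)              # strongly dependent units
    print(p_dep[1] > p_ind[1])   # dependency widens the aggregate high tail
    ```

    The comparison shows why dependency assumptions matter: correlated units produce a wider aggregate distribution than independent ones, even with identical per-unit inputs.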

  4. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)

    Several different methods have been used to sense load currents and extract its ... in order to produce a reference current in shunt active power filters (SAPF), and ... technique compared to other similar methods are found quite satisfactory by ...

  5. Muon radiography method for fundamental and applied research

    Science.gov (United States)

    Alexandrov, A. B.; Vladymyrov, M. S.; Galkin, V. I.; Goncharova, L. A.; Grachev, V. M.; Vasina, S. G.; Konovalova, N. S.; Malovichko, A. A.; Managadze, A. K.; Okat'eva, N. M.; Polukhina, N. G.; Roganova, T. M.; Starkov, N. I.; Tioukov, V. E.; Chernyavsky, M. M.; Shchedrina, T. V.

    2017-12-01

    This paper focuses on the basic principles of the muon radiography method, reviews the major muon radiography experiments, and presents the first results in Russia obtained by the authors using this method based on emulsion track detectors.

  6. Methodical Aspects of Applying Strategy Map in an Organization

    OpenAIRE

    Piotr Markiewicz

    2013-01-01

    One of the important aspects of strategic management is the instrumental aspect, embodied in a rich set of methods and techniques used at particular stages of the strategic management process. The object of interest in this study is the development of views on strategy implementation as an element of strategic management, together with its instruments in the form of methods and techniques. The method commonly used in strategy implementation and in measuring progress is the Balanced Scorecard (BSC). The method was c...

  7. Classical and modular methods applied to Diophantine equations

    NARCIS (Netherlands)

    Dahmen, S.R.

    2008-01-01

    Deep methods from the theory of elliptic curves and modular forms have been used to prove Fermat's last theorem and solve other Diophantine equations. These so-called modular methods can often benefit from information obtained by other, classical, methods from number theory; and vice versa. In our

  8. The pseudo-harmonics method applied to depletion calculation

    International Nuclear Information System (INIS)

    Silva, F.C. da; Amaral, J.A.C.; Thome, Z.D.

    1989-01-01

    In this paper a new method for performing depletion calculations, based on the pseudo-harmonics perturbation method, was developed. The fuel burnup is treated as a global perturbation, and the multigroup diffusion equations are rewritten in such a way that the soluble boron concentration becomes the eigenvalue. In this way the critical boron concentration can be obtained by a perturbation method. The new method was tested on an H₂O-cooled, D₂O-moderated reactor. Comparison with direct calculation showed that the method is very accurate and efficient. (author) [pt
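    The core idea, estimating how an eigenvalue shifts under a perturbation without re-solving the full problem, can be illustrated with generic first-order (Rayleigh-quotient) perturbation theory. This is not the pseudo-harmonics expansion itself, and the matrices below are invented for illustration:

    ```python
    def power_iter(A, iters=500):
        """Dominant eigenpair of a small symmetric matrix by power iteration."""
        n = len(A)
        u = [1.0] * n
        for _ in range(iters):
            u = [sum(A[i][j] * u[j] for j in range(n)) for i in range(n)]
            norm = sum(x * x for x in u) ** 0.5
            u = [x / norm for x in u]
        lam = sum(u[i] * sum(A[i][j] * u[j] for j in range(n)) for i in range(n))
        return lam, u

    def first_order_shift(u, dA):
        """First-order perturbation estimate of the eigenvalue change under
        A -> A + dA: d_lambda ~ u^T dA u for a normalised eigenvector u."""
        n = len(u)
        return sum(u[i] * sum(dA[i][j] * u[j] for j in range(n)) for i in range(n))

    A  = [[2.0, 0.5], [0.5, 1.0]]
    dA = [[0.02, 0.0], [0.0, -0.01]]           # small "burnup-like" perturbation
    lam0, u = power_iter(A)
    est = lam0 + first_order_shift(u, dA)      # perturbative estimate
    direct, _ = power_iter([[A[i][j] + dA[i][j] for j in range(2)] for i in range(2)])
    print(abs(est - direct) < 1e-3)            # → True
    ```

    The perturbative estimate tracks the directly recomputed eigenvalue closely for small perturbations, which is what makes perturbation-based depletion updates so much cheaper than direct recalculation.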

  9. Connecting Arctic Research Across Boundaries through the Arctic Research Consortium of the United States (ARCUS)

    Science.gov (United States)

    Rich, R. H.; Myers, B.; Wiggins, H. V.; Zolkos, J.

    2017-12-01

    The complexities inherent in Arctic research demand a unique focus on making connections across the boundaries of discipline, institution, sector, geography, knowledge system, and culture. Since 1988, ARCUS has been working to bridge these gaps through communication, coordination, and collaboration. Recently, we have worked with partners to create a synthesis of the Arctic system, to explore the connectivity across the Arctic research community and how to strengthen it, to enable the community to have an effective voice in research funding policy, to implement a system for Arctic research community knowledge management, to bridge between global Sea Ice Prediction Network researchers and the science needs of coastal Alaska communities through the Sea Ice for Walrus Outlook, to strengthen ties between Polar researchers and educators, and to provide essential intangible infrastructure that enables cost-effective and productive research across boundaries. Employing expertise in managing for collaboration and interdisciplinarity, ARCUS complements and enables the work of its members, who constitute the Arctic research community and its key stakeholders. As a member-driven organization, everything that ARCUS does is achieved through partnership, with strong volunteer leadership of each activity. Key organizational partners in the United States include the U.S. Arctic Research Commission, Interagency Arctic Research Policy Committee, National Academy of Sciences Polar Research Board, and the North Slope Science Initiative. Internationally, ARCUS maintains strong bilateral connections with similarly focused groups in each Arctic country (and those interested in the Arctic), as well as with multinational organizations including the International Arctic Science Committee, the Association of Polar Early Career Educators, the University of the Arctic, and the Arctic Institute of North America. Currently, ARCUS is applying the best practices of the science of team science

  10. Waste classification and methods applied to specific disposal sites

    International Nuclear Information System (INIS)

    Rogers, V.C.

    1979-01-01

An adequate definition of the classes of radioactive wastes is necessary for regulating their disposal. A classification system is proposed in which wastes are classified according to characteristics relating to their disposal. Several specific sites are analyzed with this methodology in order to gain insight into the classification of radioactive wastes. An analysis of ocean dumping as it applies to waste classification is also presented. 5 refs

  11. Nuclear and atomic methods applied in the determination of some

    African Journals Online (AJOL)

    NAA is a quantitative and qualitative method for the precise determination of a number of major, minor and trace elements in different types of geological, environmental and biological samples. It is based on the nuclear reaction between neutrons and the target nuclei of a sample material. It is a useful method for the simultaneous.

  12. Instructions for applying inverse method for reactivity measurement

    International Nuclear Information System (INIS)

    Milosevic, M.

    1988-11-01

    This report is a brief description of the completed method for reactivity measurement. It contains a description of the experimental procedure, the required instrumentation, and the computer code IM used to determine reactivity. The objective of this instruction manual is to enable experiments and reactivity measurements on any critical system according to the methods adopted at the RB reactor
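The inverse method for reactivity measurement conventionally refers to inverse point kinetics: reactivity is reconstructed from a measured flux trace by inverting the point-kinetics equations. A minimal sketch of that idea follows; the six-group delayed-neutron constants, the generation time, and the trapezoidal discretization are generic illustrative assumptions, not the RB reactor data or the IM code itself.

```python
import numpy as np

# Illustrative six-group delayed-neutron constants (typical U-235 thermal
# values) and prompt generation time -- assumptions for this sketch only.
BETA_I = np.array([2.11e-4, 1.40e-3, 1.25e-3, 2.53e-3, 7.40e-4, 2.70e-4])
LAMBDA_I = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # 1/s
BETA = BETA_I.sum()
GEN_TIME = 1.0e-4  # prompt-neutron generation time (s)

def inverse_kinetics(t, n):
    """Reactivity rho(t) reconstructed from a measured flux trace n(t) by
    inverting the point-kinetics equations; the delayed-neutron convolution
    integrals start at equilibrium with the initial flux level."""
    t, n = np.asarray(t, float), np.asarray(n, float)
    dndt = np.gradient(n, t)
    integ = n[0] / LAMBDA_I  # I_i = integral of n(t') exp(-lambda_i (t-t')) dt'
    rho = np.empty_like(n)
    rho[0] = GEN_TIME * dndt[0] / n[0] + BETA \
        - (BETA_I * LAMBDA_I * integ).sum() / n[0]
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        decay = np.exp(-LAMBDA_I * dt)
        # trapezoidal update of each delayed-group convolution integral
        integ = integ * decay + 0.5 * dt * (n[k] + n[k - 1] * decay)
        rho[k] = GEN_TIME * dndt[k] / n[k] + BETA \
            - (BETA_I * LAMBDA_I * integ).sum() / n[k]
    return rho
```

On a steady flux trace (a critical system), the reconstructed reactivity stays at zero, which is a convenient self-check of any implementation.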

  13. The spectral volume method as applied to transport problems

    International Nuclear Information System (INIS)

    McClarren, Ryan G.

    2011-01-01

    We present a new spatial discretization for transport problems: the spectral volume method. This method, first developed by Wang for computational fluid dynamics, divides each computational cell into several sub-cells and enforces particle balance on each of these sub-cells. These sub-cells are also used to build a polynomial reconstruction in the cell. The idea of dividing cells into sub-cells is a generalization of the simple corner balance and other similar schemes. The spectral volume method preserves particle conservation and the asymptotic diffusion limit. We present results from the method on two transport problems in slab geometry using discrete ordinates and second- through sixth-order spectral volume schemes. The numerical results demonstrate the accuracy and the preservation of the diffusion limit of the spectral volume method. Future work will explore possible benefits of the scheme for high-performance computing and for resolving diffusive boundary layers. (author)

  14. Literature Review of Applying Visual Method to Understand Mathematics

    Directory of Open Access Journals (Sweden)

    Yu Xiaojuan

    2015-01-01

    Full Text Available As a new method to understand mathematics, visualization offers a way of understanding mathematical principles and phenomena via image thinking and geometric explanation. It aims to deepen the understanding of the nature of concepts or phenomena and to enhance the cognitive ability of learners. This paper collates and summarizes the application of this visual method to the understanding of mathematics. It reviews the existing research, notably a visual demonstration of Euler’s formula, introduces the application of the method to solving relevant mathematical problems, and points out the differences and similarities between the visualization method and the numerical-graphic combination method, as well as matters needing attention in its application.
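The abstract does not specify which of Euler's formulas is visualized; assuming it is the identity e^{iθ} = cos θ + i sin θ behind the familiar unit-circle picture, the points such a visualization plots can be generated and checked numerically:

```python
import cmath
import math

def unit_circle_points(n=8):
    """Points e^{i*theta} on the unit circle: the data behind a typical
    visualization of Euler's formula e^{i*theta} = cos(theta) + i*sin(theta)."""
    pts = []
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        z = cmath.exp(1j * theta)
        # Euler's formula: the exponential matches cos + i*sin at every angle
        assert abs(z - complex(math.cos(theta), math.sin(theta))) < 1e-12
        pts.append(z)
    return pts
```

Plotting these points (real part against imaginary part) gives the geometric picture: the complex exponential traces the unit circle.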

  15. Arctic Ocean data in CARINA

    Directory of Open Access Journals (Sweden)

    S. Jutterström

    2010-02-01

    Full Text Available The paper describes the steps taken to quality control selected parameters within the Arctic Ocean data included in the CARINA data set and to check for offsets between the individual cruises. The evaluated parameters are the inorganic carbon parameters (total dissolved inorganic carbon, total alkalinity and pH), oxygen, and the nutrients nitrate, phosphate and silicate. More parameters can be found in the CARINA data product, but these were not subject to secondary quality control. The main method for determining offsets between cruises was regional multi-linear regression, after a first rough basin-wide deep-water estimate of each parameter. Lastly, the results of the secondary quality control are discussed, as well as the applied adjustments.
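The regression-based offset detection can be sketched as follows. The predictor choice, the masking of one cruise against the rest, and the function below are illustrative assumptions, not the actual CARINA QC procedure, which fits regional multi-linear regressions on deep-water hydrographic data:

```python
import numpy as np

def cruise_offset(predictors, values, cruise_mask):
    """Sketch of regression-based secondary QC: fit a multi-linear model
    value ~ a0 + a.x to the reference cruises' samples, then report the
    candidate cruise's mean residual as its additive offset.
    predictors: (n_samples, n_features); cruise_mask: True for the cruise
    under test, False for the reference cruises."""
    X = np.column_stack([np.ones(len(values)), predictors])
    ref = ~cruise_mask
    coef, *_ = np.linalg.lstsq(X[ref], values[ref], rcond=None)
    residuals = values - X @ coef
    return float(residuals[cruise_mask].mean())
```

On synthetic data with a known additive bias injected into one cruise, the function recovers that bias, which is the basic sanity check for this kind of QC code.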

  16. Methodical Aspects of Applying Strategy Map in an Organization

    Directory of Open Access Journals (Sweden)

    Piotr Markiewicz

    2013-06-01

    Full Text Available One of the important aspects of strategic management is the instrumental aspect, comprising a rich set of methods and techniques used at particular stages of the strategic management process. The object of interest in this study is the development of views on, and the implementation of, strategy as an element of strategic management, together with its instruments in the form of methods and techniques. A commonly used method for implementing strategy and measuring progress is the Balanced Scorecard (BSC). The method was created as a result of the 1990 project “Measuring performance in the organization of the future”, completed by a team under the supervision of David Norton (Kaplan, Norton 2002). The method was first of all used to evaluate performance by decomposing a strategy into four perspectives and identifying measures of achievement. In the mid-1990s the method was improved by enriching it, above all, with a strategy map, which reflects the process of transforming intangible assets into tangible financial effects (Kaplan, Norton 2001). A strategy map illustrates the cause-and-effect relationships between processes in all four perspectives and performance indicators at the level of the organization. The purpose of this study is to present the methodical conditions of using strategy maps in the strategy implementation process in organizations of different natures.

  17. Applying a life cycle approach to project management methods

    OpenAIRE

    Biggins, David; Trollsund, F.; Høiby, A.L.

    2016-01-01

    Project management is increasingly important to organisations because projects are the method by which organisations respond to their environment. A key element within project management is the standards and methods that are used to control and conduct projects, collectively known as project management methods (PMMs) and exemplified by PRINCE2, the Project Management Institute’s and the Association for Project Management’s Bodies of Knowledge (PMBOK and APMBOK). The purpose of t...

  18. Method for curing alkyd resin compositions by applying ionizing radiation

    International Nuclear Information System (INIS)

    Watanabe, T.; Murata, K.; Maruyama, T.

    1975-01-01

    An alkyd resin composition is prepared by dissolving a polymerizable alkyd resin having an oil length of from 10 to 50 percent in a vinyl monomer. The polymerizable alkyd resin is obtained by a half-esterification reaction between an acid anhydride having a polymerizable unsaturated group and an alkyd resin modified with conjugated unsaturated oil having at least one reactive hydroxyl group per molecule. The alkyd resin composition thus obtained is coated on an article, and ionizing radiation is applied to the article to cure the coated film. (U.S.)

  19. The integral equation method applied to eddy currents

    International Nuclear Information System (INIS)

    Biddlecombe, C.S.; Collie, C.J.; Simkin, J.; Trowbridge, C.W.

    1976-04-01

    An algorithm for the numerical solution of eddy current problems is described, based on the direct solution of the integral equation for the potentials. In this method only the conducting and iron regions need to be divided into elements, and there are no boundary conditions. Results from two computer programs using this method for iron free problems for various two-dimensional geometries are presented and compared with analytic solutions. (author)

  20. Applying the torque method to the rationalization of work

    Directory of Open Access Journals (Sweden)

    Bandurová Miriam

    2001-03-01

    Full Text Available The aim of the study was to analyse the consumption of time for the profession of cylinder grinder by the torque (work-sampling) method. Torque observation is used to detect the sorts and sizes of time losses, the share of the individual sorts of time consumption, and the causes of time losses. In this way it is possible to determine the coefficients of employment and recovery of workers in an organizational unit. The advantages of a torque survey are the low cost of acquiring the information and the small burden placed on the worker and on the observer, who is easily trained; it is also a mentally acceptable method for the subjects of the survey. The survey found and quantified reserves in the activity of the cylinder grinders: time losses represent up to 8% of working time. With a 5-shift service and an average shift staffing of 4.4 grinders (from the statistical information of the service), the losses at cylinder grinding amount to 1.48 workers for the whole centre. On the basis of this information it was recommended to cancel one job position of cylinder grinder and reduce the staff by one grinder. Further positions cannot be cancelled, because the cylinder grindery must adapt to the grinding line in the number of polished cylinders per shift, and the stock of semi-finished polished cylinders cannot be kept high owing to frequent changes of grinding area and assortment. This contribution confirms the usefulness of the torque method as one of the methods applicable to job rationalization.

  1. Thermoluminescence as a dating method applied to the Morocco Neolithic

    International Nuclear Information System (INIS)

    Ousmoi, M.

    1989-09-01

    Thermoluminescence is an absolute dating method well adapted to the study of burnt clays and hence of the prehistoric ceramics belonging to the Neolithic period. The purpose of this study is to establish a first absolute chronology of the northern Moroccan Neolithic between 3000 and 7000 years before present, together with some improvements of TL dating. The first part of the thesis contains some hypotheses about the Moroccan Neolithic and some problems to be solved. We then study the TL dating method along with new procedures to improve the quality of the results, such as the shift of quartz TL peaks or the crushing of samples. The methods employed, on 24 samples belonging to various civilisations, are the quartz inclusion method and the fine grain technique. For the dosimetry, several methods were used: determination of the K2O content, alpha counting, and site dosimetry using TL dosimeters and a scintillation counter. The results bring some interesting answers to the archaeological questions and improve the chronological schema of the northern Moroccan Neolithic: development of the old Cardial Neolithic in the north, and perhaps in the centre of Morocco (the region of Rabat), between 5500 and 7000 before present; development of the recent middle Neolithic around 4000-5000 before present, with a proto-Campaniform (Skhirat) slightly older than the Campaniform recognized in the south of Spain; and development of the Bronze Age around 2000-4000 before present [fr

  2. Modal method for crack identification applied to reactor recirculation pump

    International Nuclear Information System (INIS)

    Miller, W.H.; Brook, R.

    1991-01-01

    Nuclear reactors have been operating and producing useful electricity for many years. Within the last few years, several plants have found cracks in the reactor coolant pump shaft near the thermal barrier. The method and results described herein show the analytical results of using a modal analysis test method to determine the presence, size, and location of a shaft crack. The authors have previously demonstrated that the test method can analytically and experimentally identify shaft cracks as small as five percent (5%) of the shaft diameter. Due to small differences in material property distribution, attempting to identify cracks smaller than 3% of the shaft diameter has been shown to be impractical. The rotor dynamics model includes a detailed motor rotor, external weights and inertias, and realistic total support stiffness. Results of the rotor dynamics model have been verified through a comparison with on-site vibration test data
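The physical basis of crack detection by modal testing is that a local loss of stiffness shifts every natural frequency downward, with a pattern that encodes the crack location. A toy sketch with a fixed-free spring-mass chain standing in for the shaft; the chain model, its size, and the 5% stiffness drop are illustrative assumptions, not the paper's rotor-dynamics model:

```python
import numpy as np

def natural_freqs(k, m=1.0):
    """Natural frequencies (rad/s) of a fixed-free chain of equal masses m
    joined by springs k[0..n-1]; spring i connects mass i to mass i-1
    (spring 0 connects mass 0 to the wall)."""
    n = len(k)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = k[i] + (k[i + 1] if i + 1 < n else 0.0)
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    # undamped modes: omega^2 are eigenvalues of M^{-1} K
    return np.sqrt(np.linalg.eigvalsh(K / m))

intact = natural_freqs([1.0] * 10)
cracked = natural_freqs([1.0] * 4 + [0.95] + [1.0] * 5)  # 5% local loss
shifts = intact - cracked  # every mode drops; the pattern hints at location
```

Because the "cracked" stiffness matrix differs from the intact one by a negative-semidefinite perturbation, eigenvalue interlacing guarantees no frequency increases, which is the signature modal testing looks for.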

  3. Boron autoradiography method applied to the study of steels

    International Nuclear Information System (INIS)

    Gugelmeier, R.; Barcelo, G.N.; Boado, J.H.; Fernandez, C.

    1986-01-01

    The state of the boron contained in the steel microstructure is determined. Neutron autoradiography is used, permitting boron distribution images to be obtained and yielding information which is difficult to acquire by other methods. The application of the method is described: it is based on the neutron irradiation of a polished steel sample, over which a cellulose nitrate sheet (or another appropriate material) is fixed to act as the detector. The particles generated by the neutron-boron interaction affect the detector sheet, which is subsequently revealed by a chemical treatment and can be observed under the optical microscope. In the case of materials used for the construction of nuclear reactors, special attention must be given to the presence of boron, since owing to its exceptionally high neutron absorption capacity even the smallest quantities of boron are important. Adapting the method to metallurgical problems allows a correlation to be obtained between the boron distribution images and the material's microstructure. (M.E.L.) [es

  4. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    Full Text Available We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
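For the simplest one-compartment PK building block, first-order elimination dC/dt = -kC, a nonstandard scheme replaces the step size h in the forward difference by a denominator function φ(h); with φ(h) = (1 - e^{-kh})/k, the update c_{n+1} = c_n - kφ(h)c_n reproduces the exact exponential decay for any step size, which is the dynamic consistency the abstract refers to. A minimal sketch (not the paper's full PK-PD system; the parameter values are illustrative):

```python
import math

def nsfd_decay(c0, k, h, steps):
    """NSFD integration of dC/dt = -k*C (one-compartment elimination).
    The denominator function phi(h) = (1 - exp(-k*h))/k makes the explicit
    update c_{n+1} = c_n - k*phi*c_n exact: 1 - k*phi = exp(-k*h)."""
    phi = (1.0 - math.exp(-k * h)) / k
    traj = [c0]
    for _ in range(steps):
        traj.append(traj[-1] * (1.0 - k * phi))  # = traj[-1] * exp(-k*h)
    return traj
```

Unlike the standard forward Euler update c_{n+1} = c_n(1 - kh), which goes negative for h > 1/k, this scheme stays positive and on the exact solution curve no matter how large the step.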

  5. Applying Nyquist's method for stability determination to solar wind observations

    Science.gov (United States)

    Klein, Kristopher G.; Kasper, Justin C.; Korreck, K. E.; Stevens, Michael L.

    2017-10-01

    The role instabilities play in governing the evolution of solar and astrophysical plasmas is a matter of considerable scientific interest. The large number of sources of free energy accessible to such nearly collisionless plasmas makes general modeling of unstable behavior, accounting for the temperatures, densities, anisotropies, and relative drifts of a large number of populations, analytically difficult. We therefore seek a general method of stability determination that may be automated for future analysis of solar wind observations. This work describes an efficient application of the Nyquist instability method to the Vlasov dispersion relation appropriate for hot, collisionless, magnetized plasmas, including the solar wind. The algorithm recovers the familiar proton temperature anisotropy instabilities, as well as instabilities that had been previously identified using fits extracted from in situ observations in Gary et al. (2016). Future proposed applications of this method are discussed.
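At its core, the Nyquist method counts unstable roots of a dispersion relation D(ω) = 0 as the winding number of D about the origin along a contour enclosing the upper half ω-plane. A self-contained sketch for an analytic D; the contour radius, resolution, and example functions are assumptions for illustration, not the paper's Vlasov dispersion solver:

```python
import numpy as np

def winding_number(D, radius=10.0, npts=20000):
    """Number of zeros of an analytic dispersion function D(omega) in the
    upper half omega-plane (|omega| < radius), counted as the winding of D
    about the origin along the closed contour: real axis from -radius to
    +radius, then back along the upper semicircle."""
    contour = np.concatenate([
        np.linspace(-radius, radius, npts) + 0j,
        radius * np.exp(1j * np.linspace(0.0, np.pi, npts)),
    ])
    # continuous argument of D along the contour; total change / 2*pi
    phase = np.unwrap(np.angle(D(contour)))
    return int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))
```

A nonzero count flags growing modes (roots with positive imaginary frequency) without ever solving for the roots themselves, which is what makes the approach automatable over large observational data sets.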

  6. Efficient electronic structure methods applied to metal nanoparticles

    DEFF Research Database (Denmark)

    Larsen, Ask Hjorth

    of efficient approaches to density functional theory and the application of these methods to metal nanoparticles. We describe the formalism and implementation of localized atom-centered basis sets within the projector augmented wave method. Basis sets allow for a dramatic increase in performance compared .... The basis set method is used to study the electronic effects for the contiguous range of clusters up to several hundred atoms. The s-electrons hybridize to form electronic shells consistent with the jellium model, leading to electronic magic numbers for clusters with full shells. Large electronic gaps and jumps in Fermi level near magic numbers can lead to alkali-like or halogen-like behaviour when main-group atoms adsorb onto gold clusters. A non-self-consistent Newns-Anderson model is used to more closely study the chemisorption of main-group atoms on magic-number Au clusters. The behaviour at magic...

  7. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course
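As a toy instance of the kind of variance reduction such calculations need, consider estimating the uncollided penetration probability e^{-d} through a slab d mean free paths thick: analog sampling almost never scores for large d, while sampling stretched path lengths and carrying statistical weights scores constantly. The stretch factor and sample count below are illustrative choices, not taken from the course codes the text refers to:

```python
import math
import random

def penetration_prob(depth, n=200_000, seed=1):
    """Importance-sampling estimate of the uncollided penetration probability
    exp(-depth) through a slab `depth` mean free paths thick. Free paths are
    drawn from a stretched exponential (mean ~ depth instead of 1) and each
    score is reweighted by (true pdf / biased pdf)."""
    rng = random.Random(seed)
    stretch = depth  # biased mean free path: deep flights become common
    total = 0.0
    for _ in range(n):
        s = rng.expovariate(1.0 / stretch)  # biased path length
        if s > depth:                        # particle penetrates uncollided
            total += stretch * math.exp(s / stretch - s)  # weight
    return total / n
```

For depth = 10, the analog probability e^{-10} is about 4.5e-5, so an unbiased analog run of the same size would score only a handful of times; the biased estimator converges with a few percent relative error.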

  8. Non-perturbative methods applied to multiphoton ionization

    International Nuclear Information System (INIS)

    Brandi, H.S.; Davidovich, L.; Zagury, N.

    1982-09-01

    The use of non-perturbative methods in the treatment of atomic ionization is discussed. Particular attention is given to schemes of the type proposed by Keldysh, in which multiphoton ionization and tunnel auto-ionization occur for high-intensity fields. These methods are shown to correspond to a certain type of expansion of the T-matrix in the intra-atomic potential; in this manner a criterion concerning the range of application of these non-perturbative schemes is suggested. A brief comparison between the ionization rates of atoms in the presence of linearly and circularly polarized light is presented. (Author) [pt

  9. On second quantization methods applied to classical statistical mechanics

    International Nuclear Information System (INIS)

    Matos Neto, A.; Vianna, J.D.M.

    1984-01-01

    A method of expressing classical statistical results in terms of mathematical entities usually associated with the quantum field theoretical treatment of many-particle systems (Fock space, commutators, field operators, state vectors) is discussed. A linear response theory is developed using the 'second quantized' Liouville equation introduced by Schonberg. The relationship of this method to that of Prigogine et al. is briefly analyzed. The chain of equations and the spectral representations for the new classical Green's functions are presented. Generalized operators defined on Fock space are discussed. It is shown that the correlation functions can be obtained from Green's functions defined with generalized operators. (Author) [pt

  10. Review of PCMs and heat transfer enhancement methods applied ...

    African Journals Online (AJOL)

    Most available PCMs have low thermal conductivity, making heat transfer enhancement necessary for power applications. The various methods of heat transfer enhancement in latent heat storage systems were also reviewed systematically. The review showed that three commercially available PCMs are suitable in the ...

  11. E-LEARNING METHOD APPLIED TO TECHNICAL GRAPHICS SUBJECTS

    Directory of Open Access Journals (Sweden)

    GOANTA Adrian Mihai

    2011-11-01

    Full Text Available The paper presents some of the author’s endeavors in creating video courses related to technical graphics subjects for the students of the Faculty of Engineering in Braila. It also mentions the steps taken in completing the method and in obtaining feedback on the rate of access to these types of courses by the students.

  12. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring

    2012-06-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  13. Probabilistic methods applied to electric source problems in nuclear safety

    International Nuclear Information System (INIS)

    Carnino, A.; Llory, M.

    1979-01-01

    Nuclear safety analysts have frequently been asked to quantify safety margins and evaluate hazards. For this purpose, probabilistic methods have proved to be the most promising. Without completely replacing deterministic safety analysis, they are now commonly used at the reliability or availability assessment stages of systems, as well as for determining the likely accident sequences. In this paper an application linked to the problem of electric sources is described, together with the methods used: the calculation of the probability of losing all the electric sources of a pressurized water nuclear power station, the evaluation of the reliability of the diesels by event trees of failures, and the determination of the accident sequences which could be brought about by the 'total loss of electric sources' initiator and affect the installation or the environment [fr

  14. Theoretical and applied aerodynamics and related numerical methods

    CERN Document Server

    Chattot, J J

    2015-01-01

    This book covers classical and modern aerodynamics, theories and related numerical methods, for senior and first-year graduate engineering students, including: -The classical potential (incompressible) flow theories for low speed aerodynamics of thin airfoils and high and low aspect ratio wings. - The linearized theories for compressible subsonic and supersonic aerodynamics. - The nonlinear transonic small disturbance potential flow theory, including supercritical wing sections, the extended transonic area rule with lift effect, transonic lifting line and swept or oblique wings to minimize wave drag. Unsteady flow is also briefly discussed. Numerical simulations based on relaxation mixed-finite difference methods are presented and explained. - Boundary layer theory for all Mach number regimes and viscous/inviscid interaction procedures used in practical aerodynamics calculations. There are also four chapters covering special topics, including wind turbines and propellers, airplane design, flow analogies and h...

  15. Applying probabilistic methods for assessments and calculations for accident prevention

    International Nuclear Information System (INIS)

    Anon.

    1984-01-01

    The guidelines for the prevention of accidents require plant-design-specific and radioecological calculations to be made in order to show that maximum acceptable exposure values will not be exceeded in case of an accident. For this purpose, the main parameters affecting the accident scenario have to be determined by probabilistic methods. This offers the advantage that parameters can be quantified on the basis of unambiguous and realistic criteria, and final results can be defined in terms of conservativity. (DG) [de

  16. Applying flow chemistry: methods, materials, and multistep synthesis.

    Science.gov (United States)

    McQuade, D Tyler; Seeberger, Peter H

    2013-07-05

    The synthesis of complex molecules requires control over both chemical reactivity and reaction conditions. While reactivity drives the majority of chemical discovery, advances in reaction condition control have accelerated method development/discovery. Recent tools include automated synthesizers and flow reactors. In this Synopsis, we describe how flow reactors have enabled chemical advances in our groups in the areas of single-stage reactions, materials synthesis, and multistep reactions. In each section, we detail the lessons learned and propose future directions.

  17. Data Mining Methods Applied to Flight Operations Quality Assurance Data: A Comparison to Standard Statistical Methods

    Science.gov (United States)

    Stolzer, Alan J.; Halford, Carl

    2007-01-01

    In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.

  18. The colour analysis method applied to homogeneous rocks

    Directory of Open Access Journals (Sweden)

    Halász Amadé

    2015-12-01

    Full Text Available Computer-aided colour analysis can facilitate cyclostratigraphic studies. Here we report on a case study involving the development of a digital colour analysis method for examination of the Boda Claystone Formation, which is the most suitable formation in Hungary for the disposal of high-level radioactive waste. Rock type colours are reddish brown or brownish red, or any shade between brown and red. The method presented here can be used to differentiate similar colours and to identify gradual transitions between them; the latter are of great importance in a cyclostratigraphic analysis of the succession. Geophysical well-logging has demonstrated the existence of characteristic cyclic units, as detected by colour and natural gamma. Based on our research, colour, natural gamma and lithology correlate well. For core Ib-4, these features reveal the presence of orderly cycles with thicknesses of roughly 0.64 to 13 metres. Once the core has been scanned, this is a time- and cost-effective method.
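The kind of colour reduction such a method rests on can be sketched as follows: collapse each scanned depth row of the core image to a single red-vs-brown index, producing a depth series that can be compared against natural gamma logs. The index formula and array layout below are hypothetical illustrations, not the paper's algorithm:

```python
import numpy as np

def redness_profile(rgb):
    """Collapse a scanned core image (depth x width x 3 RGB array) to one
    redness value per depth row: the row mean of a normalized red-minus-blue
    index. A hypothetical stand-in for a digital colour analysis pipeline."""
    rgb = np.asarray(rgb, dtype=float)
    r, b = rgb[..., 0], rgb[..., 2]
    index = (r - b) / (r + b + 1e-9)  # small epsilon guards black pixels
    return index.mean(axis=1)         # one value per depth row
```

The resulting one-dimensional profile is what a cyclostratigraphic analysis would then inspect for periodicity, for example by spectral methods, alongside the gamma log.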

  19. Comparison Study of Subspace Identification Methods Applied to Flexible Structures

    Science.gov (United States)

    Abdelghani, M.; Verhaegen, M.; Van Overschee, P.; De Moor, B.

    1998-09-01

    In the past few years, various time domain methods for identifying dynamic models of mechanical structures from modal experimental data have appeared. Much attention has been given recently to so-called subspace methods for identifying state space models. This paper presents a detailed comparison study of these subspace identification methods: the eigensystem realisation algorithm with observer/Kalman filter Markov parameters computed from input/output data (ERA/OM), the robust version of the numerical algorithm for subspace system identification (N4SID), and a refined version of the past outputs scheme of the multiple-output error state space (MOESP) family of algorithms. The comparison is performed by simulating experimental data using the five mode reduced model of the NASA Mini-Mast structure. The general conclusion is that for the case of white noise excitations as well as coloured noise excitations, the N4SID/MOESP algorithms perform equally well but give better results (improved transfer function estimates, improved estimates of the output) compared to the ERA/OM algorithm. The key computational step in the three algorithms is the approximation of the extended observability matrix of the system to be identified, for N4SID/MOESP, or of the observer for the system to be identified, for the ERA/OM. Furthermore, the three algorithms only require the specification of one dimensioning parameter.

  20. Applying Hierarchical Task Analysis Method to Discovery Layer Evaluation

    Directory of Open Access Journals (Sweden)

    Marlen Promann

    2015-03-01

    Full Text Available Libraries are implementing discovery layers to offer better user experiences. While usability tests have been helpful in evaluating the success or failure of implementing discovery layers in the library context, the focus has remained on their relative interface benefits over traditional federated search. The informal, site- and context-specific usability tests have done little to test the rigor of discovery layers against the user goals, motivations and workflows they have been designed to support. This study proposes hierarchical task analysis (HTA) as an important complementary evaluation method to usability testing of discovery layers. Relevant literature is reviewed for discovery layers and the HTA method. As no previous application of HTA to the evaluation of discovery layers was found, this paper presents the application of HTA as an expert-based and workflow-centered (e.g. retrieving a relevant book or journal article) method for evaluating discovery layers. Purdue University’s Primo by Ex Libris was used to map eleven use cases as HTA charts. Nielsen’s Goal Composition theory was used as an analytical framework to evaluate the goal charts from two perspectives: (a) users’ physical interactions (i.e. clicks), and (b) users’ cognitive steps (i.e. decision points for what to do next). A brief comparison of HTA and usability test findings is offered by way of conclusion.

  1. Evaluation of Slow Release Fertilizer Applying Chemical and Spectroscopic methods

    International Nuclear Information System (INIS)

    AbdEl-Kader, A.A.; Al-Ashkar, E.A.

    2005-01-01

    Controlled-release fertilizer offers a number of advantages for crop production in newly reclaimed soils. Butadiene-styrene latex emulsion is one of the promising polymers for different purposes. In this work, a laboratory evaluation of butadiene-styrene latex emulsion 24/76 polymer loaded with a mixed fertilizer was carried out. Macro-nutrients (N, P and K) and micro-nutrients (Zn, Fe and Cu) were extracted from the polymer-fertilizer mixtures with a basic extract. A micro-sampling technique was investigated and applied to measure Zn, Fe and Cu using flame atomic absorption spectrometry, in order to overcome the nebulization difficulties caused by the samples' high salt content. The cumulative releases of macro- and micro-nutrients were assessed. From the obtained results, it is clear that the release depends on both the nutrient and the polymer concentration in the mixture. Macro-nutrients are released more efficiently than micro-nutrients relative to the total added. The formulation can therefore be used to minimize micro-nutrient hazards in soils

  2. The lumped heat capacity method applied to target heating

    OpenAIRE

    Rickards, J.

    2013-01-01

    The temperature of metal samples was measured while they were bombarded by the ion beam of the Pelletron accelerator at the Instituto de Física. The evolution of the temperature with time can be explained using the lumped heat capacity method of heat transfer. A strong dependence on the type of mounting was found.

  3. Modern analytic methods applied to the art and archaeology

    International Nuclear Information System (INIS)

    Tenorio C, M. D.; Longoria G, L. C.

    2010-01-01

    The interaction of diverse areas such as analytical chemistry, art history and archaeology has allowed the development of a variety of techniques used in archaeology, conservation and restoration. These methods have been used to date objects, to determine the origin of old materials, to reconstruct their use, and to identify the degradation processes that affect the integrity of works of art. The objective of this chapter is to offer a general overview of the research that has been carried out at the Instituto Nacional de Investigaciones Nucleares (ININ) in the field of cultural heritage. A series of studies carried out in collaboration with national and foreign researchers is briefly described, together with the substantial support of undergraduate and master's students in archaeology of the National School of Anthropology and History, since one of the goals is to spread knowledge of these techniques among young archaeologists, so that they have a wider vision of what they could use in the immediate future and can test hypotheses with scientific methods. (Author)

  4. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.


  6. Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation

    Science.gov (United States)

    Arotaritei, D.; Rotariu, C.

    2015-09-01

    In this paper we present a novel method for detecting atrial fibrillation (AF) based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference system produces if-then-else rules that are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences) along with a new descriptor, Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window, which produces a very large number of vectors (a massive dataset) used by the classifier. The window length is a crisp descriptor, while the remaining descriptors are of interval-valued type. The parameters of the hybrid system are adapted using a Genetic Algorithm (GA) with a single-objective fitness target: the highest values for sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the Physionet MIT-BIH Atrial Fibrillation Database and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
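    The three classical descriptors named in the abstract are straightforward to compute over a window of RR intervals. The sketch below uses a made-up window, not data from the MIT-BIH database, and is only an illustration of the descriptors, not of the authors' neuro-fuzzy classifier:

```python
import math

def rmssd(rr):
    # Root Mean Square of Successive Differences between RR intervals
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def shannon_entropy(rr, bins=8):
    # Histogram-based Shannon entropy of the RR intervals in the window
    lo, hi = min(rr), max(rr)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in rr:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    probs = [c / len(rr) for c in counts if c]
    return -sum(p * math.log2(p) for p in probs)

def turning_point_ratio(rr):
    # Fraction of interior points that are local extrema
    turns = sum(1 for a, b, c in zip(rr, rr[1:], rr[2:])
                if (b > a and b > c) or (b < a and b < c))
    return turns / (len(rr) - 2)

window = [0.80, 0.62, 0.95, 0.58, 1.01, 0.66, 0.88, 0.71]  # RR intervals (s)
features = (turning_point_ratio(window), shannon_entropy(window), rmssd(window))
```

    Highly irregular rhythms, as in AF, tend to push the turning point ratio, entropy and RMSSD upward relative to normal sinus rhythm.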

  7. Frequency domain methods applied to forecasting electricity markets

    International Nuclear Information System (INIS)

    Trapero, Juan R.; Pedregal, Diego J.

    2009-01-01

    The changes taking place in electricity markets during the last two decades have produced an increased interest in the problem of forecasting, either load demand or prices. Many forecasting methodologies are available in the literature nowadays with mixed conclusions about which method is most convenient. This paper focuses on the modeling of electricity market time series sampled hourly in order to produce short-term (1 to 24 h ahead) forecasts. The main features of the system are that (1) models are of an Unobserved Component class that allow for signal extraction of trend, diurnal, weekly and irregular components; (2) its application is automatic, in the sense that there is no need for human intervention via any sort of identification stage; (3) the models are estimated in the frequency domain; and (4) the robustness of the method makes possible its direct use on both load demand and price time series. The approach is thoroughly tested on the PJM interconnection market and the results improve on classical ARIMA models. (author)
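    The models above are estimated in the frequency domain, extracting trend, diurnal and weekly components from hourly series. As a rough illustration of that idea (not the authors' Unobserved Components estimator), one can fit a trend plus harmonic terms at the daily and weekly periods by least squares and extrapolate; the data here are synthetic:

```python
import numpy as np

def harmonic_forecast(y, horizon=24, periods=(24.0, 168.0)):
    """Fit trend + harmonic (diurnal/weekly) components by least squares
    and extrapolate `horizon` steps ahead."""
    t = np.arange(len(y), dtype=float)
    def design(ts):
        cols = [np.ones_like(ts), ts]  # level and linear trend
        for p in periods:
            w = 2.0 * np.pi / p
            cols += [np.cos(w * ts), np.sin(w * ts)]
        return np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(design(t), np.asarray(y, float), rcond=None)
    future = np.arange(len(y), len(y) + horizon, dtype=float)
    return design(future) @ beta

# Synthetic hourly "load": trend plus a daily cycle plus noise
rng = np.random.default_rng(0)
t = np.arange(24 * 21)  # three weeks of hourly data
y = 100 + 0.01 * t + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)
fc = harmonic_forecast(y, horizon=24)
```

    The appeal of this kind of formulation, as in the paper, is that no manual identification stage is needed: the periodic structure is fixed by the known daily and weekly cycles.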

  8. Interesting Developments in Testing Methods Applied to Foundation Piles

    Science.gov (United States)

    Sobala, Dariusz; Tkaczyński, Grzegorz

    2017-10-01

    Both piling technologies and pile testing methods are subjects of ongoing development. New technologies, providing larger diameters or using in-situ materials, are very demanding in terms of the quality of execution of works. That concerns material quality and continuity, which define the integral strength of the pile. On the other hand, we have the capacity of the ground around the pile and its ability to carry the loads transferred by the shaft and the pile base. The inhomogeneous nature of soils and the relatively small number of tested piles demand a very good understanding of a small set of results. In some special cases the capacity test itself forms an important cost in the piling contract. This work presents a brief description of selected testing methods and the authors' remarks based on cooperation with universities constantly developing new ideas. The paper presents some experience-based remarks on integrity testing by means of low-energy impact (low strain) and introduces selected Polish developments in the testing of closed-end pipe piles based on bi-directional loading, similar to Osterberg's idea, but without a sacrificial hydraulic jack. Such a test is suitable especially when steel piles are used for temporary support in rivers, where constructing a conventional testing appliance with anchor piles or kentledge meets technical problems. According to the authors' experience, such tests have not yet been used on a building site, but they offer real potential, especially when displacement control can be provided from the river bank using surveying techniques.

  9. Arctic Shipping

    DEFF Research Database (Denmark)

    Hansen, Carsten Ørts; Grønsedt, Peter; Lindstrøm Graversen, Christian

    This report forms part of the ambitious CBS Maritime research initiative entitled “Competitive Challenges and Strategic Development Potential in Global Maritime Industries” which was launched with the generous support of the Danish Maritime Fund. The competitiveness initiative targets specific ma......, the latter aiming at developing key concepts and building up a basic industry knowledge base for further development of CBS Maritime research and teaching. This report attempts to map the opportunities and challenges for the maritime industry in an increasingly accessible Arctic Ocean...

  10. Applying Simulation Method in Formulation of Gluten-Free Cookies

    Directory of Open Access Journals (Sweden)

    Nikitina Marina

    2017-01-01

    Full Text Available At present, a priority direction in the development of new food products is the development of technologies for special-purpose products. Among these are gluten-free confectionery products, intended for people with celiac disease. Gluten-free products are in demand among consumers, and there is a need to expand the assortment and improve quality indicators. This article presents the results of studies on the development of pastry products based on amaranth flour, which does not contain gluten. The study is based on a method of simulating recipes for functional gluten-free confectionery in order to optimize their chemical composition. The resulting products will make it possible to diversify the diet of people with gluten intolerance, as well as of those who follow a gluten-free diet, and to supplement it with necessary nutrients.

  11. Nuclear method applied in archaeological sites at the Amazon basin

    International Nuclear Information System (INIS)

    Nicoli, Ieda Gomes; Bernedo, Alfredo Victor Bellido; Latini, Rose Mary

    2002-01-01

    The aim of this work was to use nuclear methods to characterize pottery discovered in archaeological sites marked by circular earth structures in Acre State, Brazil, which may contribute to reconstructing part of the pre-history of the Amazon Basin. The sites are located mainly in the hydrographic basin of the upper Purus River. Three of them were strategically chosen for collecting the ceramics: Lobao, in Sena Madureira County in the north; Alto Alegre, in Rio Branco County in the east; and Xipamanu I, in Xapuri County in the south. Neutron activation analysis in conjunction with multivariate statistical methods was used for ceramic characterization and classification. All the sherds collected from Alto Alegre formed a homogeneous group that was distinct from the other two groups analyzed. Some of the sherds collected from Xipamanu I appeared in Lobao's urns, probably because they had the same fabrication process. (author)
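    The pairing of neutron activation analysis with multivariate statistics amounts to grouping sherds by their trace-element fingerprints. A toy sketch of that idea, using hypothetical concentrations and plain standardized Euclidean distances rather than the full multivariate workflow of the study:

```python
import numpy as np

# Hypothetical element concentrations (ppm) for six sherds; rows = sherds,
# columns = elements (e.g. Fe, Cs, Sc) as measured by neutron activation.
conc = np.array([
    [41000, 7.2, 18.1],   # site A
    [40200, 7.0, 17.8],   # site A
    [39800, 6.9, 18.4],   # site A
    [52500, 11.5, 24.0],  # site B
    [53100, 11.9, 23.6],  # site B
    [52800, 11.2, 24.3],  # site B
])

# Standardize each element so no single concentration scale dominates
z = (conc - conc.mean(axis=0)) / conc.std(axis=0)

# Pairwise Euclidean distances between sherds in standardized space
d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)

# A compositional group shows small within-group, large between-group distances
within_a = d[:3, :3].max()
between = d[:3, 3:].min()
```

    In this toy data the within-group spread is far smaller than the separation between groups, which is the pattern the Alto Alegre sherds showed against the other two sites.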

  12. Applying Multi-Criteria Analysis Methods for Fire Risk Assessment

    Directory of Open Access Journals (Sweden)

    Pushkina Julia

    2015-11-01

    Full Text Available The aim of this paper is to demonstrate the application of multi-criteria analysis methods to optimise the fire risk identification and assessment process. The object of this research is fire risk and risk assessment. The subject of the research is the application of the analytic hierarchy process for modelling and assessing the influence of various fire risk factors. The results of the research conducted by the authors can be used by insurance companies to perform detailed assessments of the fire risks of an object and to calculate a risk surcharge on an insurance premium; by state supervisory institutions to determine whether the condition of an object complies with regulatory requirements; and by real estate owners and investors to take actions to reduce fire risks and minimise possible losses.
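    The core of the analytic hierarchy process is deriving priority weights from a pairwise-comparison matrix and checking its consistency. A minimal sketch with a hypothetical comparison matrix for three fire-risk factors (the factors and judgments are invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical pairwise comparisons on Saaty's 1-9 scale:
# A[i, j] says how strongly factor i outweighs factor j.
A = np.array([
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 1/2.0, 1.0],
])

# Priority weights: principal eigenvector of A, normalized to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency index CI = (lambda_max - n) / (n - 1); dividing by the
# random index (0.58 for n = 3) gives the consistency ratio, with
# CR < 0.1 conventionally considered acceptable.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
```

    The resulting weights rank the factors, and the consistency ratio flags judgment matrices too contradictory to trust.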

  13. Applied statistical methods in agriculture, health and life sciences

    CERN Document Server

    Lawal, Bayo

    2014-01-01

    This textbook teaches crucial statistical methods to answer research questions using a unique range of statistical software programs, including MINITAB and R. This textbook is developed for undergraduate students in agriculture, nursing, biology and biomedical research. Graduate students will also find it to be a useful way to refresh their statistics skills and to reference software options. The unique combination of examples is approached using MINITAB and R for their individual strengths. Subjects covered include among others data description, probability distributions, experimental design, regression analysis, randomized design and biological assay. Unlike other biostatistics textbooks, this text also includes outliers, influential observations in regression and an introduction to survival analysis. Material is taken from the author's extensive teaching and research in Africa, USA and the UK. Sample problems, references and electronic supplementary material accompany each chapter.

  14. A new deconvolution method applied to ultrasonic images

    International Nuclear Information System (INIS)

    Sallard, J.

    1999-01-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The point of view adopted consists in taking the physical properties into account in the signal processing, to develop an algorithm which gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, so a priori information must be taken into account to solve it. The a priori information reflects the physical properties of ultrasonic signals. The defect impulse response is modeled as a double Bernoulli-Gaussian sequence. Deconvolution becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process. An adapted initialization procedure and an iterative algorithm make it possible to quickly process a huge number of data. Many experimental ultrasonic data reflecting usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm enables not only removal of the waveform emitted by the transducer but also estimation of the phase, a parameter useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating information, so automatic characterization should be possible in the future. (author)

  15. Applying Human-Centered Design Methods to Scientific Communication Products

    Science.gov (United States)

    Burkett, E. R.; Jayanty, N. K.; DeGroot, R. M.

    2016-12-01

    Knowing your users is a critical part of developing anything to be used or experienced by a human being. User interviews, journey maps, and personas are all techniques commonly employed in human-centered design practices because they have proven effective for informing the design of products and services that meet the needs of users. Many non-designers are unaware of the usefulness of personas and journey maps. Scientists who are interested in developing more effective products and communication can adopt and employ user-centered design approaches to better reach intended audiences. Journey mapping is a qualitative data-collection method that captures the story of a user's experience over time as related to the situation or product that requires development or improvement. Journey maps help define user expectations, where they are coming from, what they want to achieve, what questions they have, their challenges, and the gaps and opportunities that can be addressed by designing for them. A persona is a tool used to describe the goals and behavioral patterns of a subset of potential users or customers. The persona is a qualitative data model that takes the form of a character profile, built upon data about the behaviors and needs of multiple users. Gathering data directly from users avoids the risk of basing models on assumptions, which are often limited by misconceptions or gaps in understanding. Journey maps and user interviews together provide the data necessary to build the composite character that is the persona. Because a persona models the behaviors and needs of the target audience, it can then be used to make informed product design decisions. We share the methods and advantages of developing and using personas and journey maps to create more effective science communication products.

  16. Applying the partitioned multiobjective risk method (PMRM) to portfolio selection.

    Science.gov (United States)

    Reyes Santos, Joost; Haimes, Yacov Y

    2004-06-01

    The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz a 1992 Nobel Prize in Economics. A typical approach to measuring a portfolio's expected return is based on the historical returns of the assets included in the portfolio. On the other hand, portfolio risk is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility is a major measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that of October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001, which led to a four-day suspension of trading on the New York Stock Exchange (NYSE), are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of extreme portfolio risk, denoted by f(4), is defined as the conditional expectation for a lower-tail region of the distribution of possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver, a software package that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model.
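    The extreme-risk measure f(4) described above is a conditional expectation over the lower tail of the return distribution. A simple empirical sketch of that partitioned measure on simulated returns (not the paper's multiobjective optimization model):

```python
import numpy as np

def expected_return_and_f4(returns, alpha=0.05):
    """Expected return and an f(4)-style extreme-risk measure: the
    conditional expectation of returns in the worst `alpha` fraction
    of outcomes (the lower-tail partition)."""
    r = np.sort(np.asarray(returns, dtype=float))
    cutoff = max(1, int(np.ceil(alpha * r.size)))
    f4 = r[:cutoff].mean()      # mean of the lower-tail partition
    return r.mean(), f4

rng = np.random.default_rng(1)
portfolio = rng.normal(0.007, 0.04, 1000)   # simulated monthly returns
mean_r, f4 = expected_return_and_f4(portfolio, alpha=0.05)
```

    A portfolio is then chosen by trading off a high expected return against a less damaging tail mean, rather than against volatility alone.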

  17. Simplified Methods Applied to Nonlinear Motion of Spar Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Haslum, Herbjoern Alf

    2000-07-01

    Simplified methods for the prediction of the motion response of spar platforms are presented. The methods are based on first- and second-order potential theory. Nonlinear drag loads and the effect of the pumping motion in a moon-pool are also considered. Large-amplitude pitch motions coupled to extreme-amplitude heave motions may arise when spar platforms are exposed to long-period swell. The phenomenon is investigated theoretically and explained as a Mathieu instability. It is caused by nonlinear coupling effects between heave, surge, and pitch. It is shown that for a critical wave period, the envelope of the heave motion makes the pitch motion unstable. For the same wave period, a higher-order pitch/heave coupling excites resonant heave response. This mutual interaction largely amplifies both the pitch and the heave response. As a result, the pitch/heave instability revealed in this work is more critical than the previously well-known Mathieu instability in pitch, which occurs if the wave period (or the natural heave period) is half the natural pitch period. The Mathieu instability is demonstrated both by numerical simulations with a newly developed calculation tool and in model experiments. In order to learn more about the conditions for this instability to occur and also how it may be controlled, different damping configurations (heave damping disks and pitch/surge damping fins) are evaluated both in model experiments and by numerical simulations. With increased drag damping, larger wave amplitudes and more time are needed to trigger the instability. The pitch/heave instability is a low-probability-of-occurrence phenomenon. Extreme wave periods are needed for the instability to be triggered, about 20 seconds for a typical 200 m draft spar. However, it may be important to consider the phenomenon in design since the pitch/heave instability is very critical. It is also seen that when classical spar platforms (constant cylindrical cross section and about 200 m draft

  18. Variational methods applied to problems of diffusion and reaction

    CERN Document Server

    Strieder, William

    1973-01-01

    This monograph is an account of some problems involving diffusion or diffusion with simultaneous reaction that can be illuminated by the use of variational principles. It was written during a period that included sabbatical leaves of one of us (W.S.) at the University of Minnesota and the other (R.A.) at the University of Cambridge, and we are grateful to the Petroleum Research Fund for helping to support the former and the Guggenheim Foundation for making possible the latter. We would also like to thank Stephen Prager for getting us together in the first place and for showing how interesting and useful these methods can be. We have also benefitted from correspondence with Dr. A. M. Arthurs of the University of York and from the counsel of Dr. B. D. Coleman, the general editor of this series. Table of Contents: Chapter 1. Introduction and Preliminaries. 1.1 General Survey; 1.2 Phenomenological Descriptions of Diffusion and Reaction; 1.3 Correlation Functions for Random Suspensions; 1.4 Mean Free ...

  19. Nondestructive methods of analysis applied to oriental swords

    Directory of Open Access Journals (Sweden)

    Edge, David

    2015-12-01

    Full Text Available Various neutron techniques were employed at the Budapest Nuclear Centre in an attempt to find the most useful method for analysing the high-carbon steels found in Oriental arms and armour, such as those in the Wallace Collection, London. Neutron diffraction was found to be the most useful in terms of identifying such steels and also indicating the presence of hidden patterns.

  20. Perturbation Method of Analysis Applied to Substitution Measurements of Buckling

    Energy Technology Data Exchange (ETDEWEB)

    Persson, Rolf

    1966-11-15

    Calculations with two-group perturbation theory on substitution experiments with homogenized regions show that a condensation of the results into a one-group formula is possible, provided that a transition region is introduced in a proper way. In heterogeneous cores the transition region comes in as a consequence of a new cell concept. By making use of progressive substitutions the properties of the transition region can be regarded as fitting parameters in the evaluation procedure. The thickness of the region is approximately equal to the sum of 1/(1/τ + 1/L²)^(1/2) for the test and reference regions. Consequently a region where L² >> τ, e.g. D₂O, contributes √τ to the thickness. In cores where τ >> L², e.g. H₂O assemblies, the thickness of the transition region is determined by L. Experiments on rod lattices in D₂O and on test regions of D₂O alone (where B² = -1/L²) are analysed. The lattice measurements, where the pitches differed by a factor of √2, gave excellent results, whereas the determination of the diffusion length in D₂O by this method was not quite successful. Even regions containing only one test element can be used in a meaningful way in the analysis.
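    The quoted thickness formula is easy to evaluate numerically. The sketch below uses illustrative values of the Fermi age τ and diffusion length L (not taken from the report) to show the two limiting cases mentioned in the abstract:

```python
import math

def transition_thickness(regions):
    """Approximate transition-region thickness: the sum over the test and
    reference regions of 1/sqrt(1/tau + 1/L^2), with Fermi age tau (cm^2)
    and diffusion length L (cm). Illustrative values only."""
    return sum(1.0 / math.sqrt(1.0 / tau + 1.0 / L**2) for tau, L in regions)

# L^2 >> tau (a D2O-like region): the term approaches sqrt(tau)
d2o_like = transition_thickness([(125.0, 100.0)])
# tau >> L^2 (an H2O-like region): the term is governed by L
h2o_like = transition_thickness([(27.0, 2.8)])
```
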

  1. Complexity methods applied to turbulence in plasma astrophysics

    Science.gov (United States)

    Vlahos, L.; Isliker, H.

    2016-09-01

    In this review many of the well-known tools for the analysis of complex systems are used in order to study the global coupling of the turbulent convection zone with the solar atmosphere, where the magnetic energy is dissipated explosively. Several well-documented observations are not easy to interpret with the use of magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the Active Regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organized characteristics of the explosive magnetic energy release, and (4) the very efficient acceleration of particles during the flaring periods in the solar corona. We review briefly the work published over the last twenty-five years on the above issues and propose solutions by using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a Non-Linear Force-Free (NLFF) magnetic extrapolation numerical code we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and Unstable Current Sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the local redistribution of the magnetic field, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. Using a Cellular Automaton and following the simple rules of Self-Organized Criticality (SOC), we were able to reproduce the statistical characteristics of the
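    The SOC cellular-automaton idea can be illustrated with the classic Bak-Tang-Wiesenfeld sandpile, in which simple local toppling rules produce avalanches of widely varying sizes. This is a generic sandpile sketch, not the authors' solar-flare automaton:

```python
import random

def sandpile_avalanches(n=20, grains=10000, zc=4, seed=0):
    """Minimal Bak-Tang-Wiesenfeld sandpile: drop grains at random sites;
    a site holding >= zc grains topples, sending one grain to each
    neighbour (grains fall off at the open boundary). Returns the size
    (number of topplings) of the avalanche triggered by each grain."""
    random.seed(seed)
    z = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = random.randrange(n), random.randrange(n)
        z[i][j] += 1
        size = 0
        unstable = [(i, j)] if z[i][j] >= zc else []
        while unstable:
            x, y = unstable.pop()
            if z[x][y] < zc:
                continue
            z[x][y] -= zc
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = x + dx, y + dy
                if 0 <= u < n and 0 <= v < n:
                    z[u][v] += 1
                    if z[u][v] >= zc:
                        unstable.append((u, v))
        sizes.append(size)
    return sizes

sizes = sandpile_avalanches()
```

    After the lattice is driven to its critical state, the avalanche-size distribution becomes heavy-tailed, the hallmark of SOC that such models use to reproduce flare statistics.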

  2. Complementary biomarker-based methods for characterising Arctic sea ice conditions: A case study comparison between multivariate analysis and the PIP25 index

    Science.gov (United States)

    Köseoğlu, Denizcan; Belt, Simon T.; Smik, Lukas; Yao, Haoyi; Panieri, Giuliana; Knies, Jochen

    2018-02-01

    The discovery of IP25 as a qualitative biomarker proxy for Arctic sea ice and subsequent introduction of the so-called PIP25 index for semi-quantitative descriptions of sea ice conditions has significantly advanced our understanding of long-term paleo Arctic sea ice conditions over the past decade. We investigated the potential for classification tree (CT) models to provide a further approach to paleo Arctic sea ice reconstruction through analysis of a suite of highly branched isoprenoid (HBI) biomarkers in ca. 200 surface sediments from the Barents Sea. Four CT models constructed using different HBI assemblages revealed IP25 and an HBI triene as the most appropriate classifiers of sea ice conditions, achieving a >90% cross-validated classification rate. Additionally, lower model performance for locations in the Marginal Ice Zone (MIZ) highlighted difficulties in characterisation of this climatically-sensitive region. CT model classification and semi-quantitative PIP25-derived estimates of spring sea ice concentration (SpSIC) for four downcore records from the region were consistent, although agreement between proxy and satellite/observational records was weaker for a core from the west Svalbard margin, likely due to the highly variable sea ice conditions. The automatic selection of appropriate biomarkers for description of sea ice conditions, quantitative model assessment, and insensitivity to the c-factor used in the calculation of the PIP25 index are key attributes of the CT approach, and we provide an initial comparative assessment between these potentially complementary methods. The CT model should be capable of generating longer-term temporal shifts in sea ice conditions for the climatically sensitive Barents Sea.
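    The PIP25 index referred to in this abstract is conventionally computed as IP25/(IP25 + c·P), where P is a phytoplankton (open-water) biomarker such as an HBI triene and c is a balance factor, commonly taken as the ratio of the mean concentrations. A sketch with hypothetical concentrations (the c-factor convention is an assumption here, not a detail given in this abstract):

```python
import numpy as np

def pip25(ip25, phyto, c=None):
    """PIP25 index: ip25 / (ip25 + c * phyto). If c is not supplied,
    use the common convention c = mean(ip25) / mean(phyto)."""
    ip25 = np.asarray(ip25, float)
    phyto = np.asarray(phyto, float)
    if c is None:
        c = ip25.mean() / phyto.mean()
    return ip25 / (ip25 + c * phyto)

# Hypothetical biomarker concentrations for five surface-sediment samples
ip25 = [0.0, 0.2, 0.5, 0.9, 1.4]
triene = [2.0, 1.5, 0.8, 0.3, 0.1]
index = pip25(ip25, triene)
```

    The index runs from 0 (IP25 absent, open water) towards 1 (ice-dominated); the sensitivity of the result to the chosen c is exactly the issue the classification-tree approach is said to sidestep.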

  3. Near-infrared radiation curable multilayer coating systems and methods for applying same

    Science.gov (United States)

    Bowman, Mark P; Verdun, Shelley D; Post, Gordon L

    2015-04-28

    Multilayer coating systems, methods of applying and related substrates are disclosed. The coating system may comprise a first coating comprising a near-IR absorber, and a second coating deposited on at least a portion of the first coating. Methods of applying a multilayer coating composition to a substrate may comprise applying a first coating comprising a near-IR absorber, applying a second coating over at least a portion of the first coating and curing the coating with near infrared radiation.

  4. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    Science.gov (United States)

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…
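    In polynomial regression applied to congruence, the outcome is regressed on both component measures plus their quadratic terms, rather than on their difference score. A brief simulated sketch (hypothetical data, not from the cited studies) showing how a pure difference-score effect surfaces in the quadratic coefficients:

```python
import numpy as np

# Hypothetical congruence data: outcome Z, person score X, environment score Y
rng = np.random.default_rng(2)
X = rng.normal(0, 1, 200)
Y = rng.normal(0, 1, 200)
# True surface penalizes incongruence: Z falls with (X - Y)^2
Z = 5 - (X - Y) ** 2 + rng.normal(0, 0.1, 200)

# Quadratic polynomial regression: Z = b0 + b1*X + b2*Y + b3*X^2 + b4*X*Y + b5*Y^2
D = np.column_stack([np.ones_like(X), X, Y, X**2, X * Y, Y**2])
b, *_ = np.linalg.lstsq(D, Z, rcond=None)
```

    When the true effect is a squared difference, the fitted coefficients satisfy b3 ≈ b5 ≈ -b4/2, a constraint the difference-score approach imposes by assumption but the polynomial approach can test.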

  5. Arctic pipeline planning design, construction, and equipment

    CERN Document Server

    Singh, Ramesh

    2013-01-01

    Utilize the most recent developments to combat challenges such as ice mechanics. The perfect companion for engineers wishing to learn state-of-the-art methods or further develop their knowledge of best practice techniques, Arctic Pipeline Planning provides a working knowledge of the technology and techniques for laying pipelines in the coldest regions of the world. Arctic Pipeline Planning provides must-have elements that can be utilized through all phases of arctic pipeline planning and construction. This includes information on how to: Solve challenges in designing arctic pipelines Protect pipelines from everyday threats such as ice gouging and permafrost Maintain safety and communication for construction workers while supporting typical codes and standards Covers such issues as land survey, trenching or above ground, environmental impact of construction Provides on-site problem-solving techniques utilized through all phases of arctic pipeline planning and construction Is packed with easy-to-read and under...

  6. Review of arctic Norwegian bioremediation research

    International Nuclear Information System (INIS)

    Sveum, P.

    1991-09-01

    Traditional onshore oil spill clean-up in arctic and sub-arctic parts of Norway involves methods that are both time-consuming and labor-intensive. The applicability of the methods depends both on the environmental constraints of the area and on the availability of man-power. If oil exploration is successful, the exploitation of oil will move north into the arctic regions of Norway. This area is remote, both in terms of accessibility and lack of inhabitants. The threat to natural resources that always accompanies oil activities will move into areas that are considered vulnerable, and which are also highly valued in terms of natural resources. Contingency measures must be adapted both to be feasible and to meet the framework in which they must operate. This situation has increased the focus on alternative methods for oil spill clean-up, especially on shorelines. SINTEF (The Foundation for Scientific and Industrial Research at the Norwegian Institute of Technology) Applied Chemistry has for years evaluated the application of fertilizers as a practical measure in oil spill treatment. Several fertilizers have been assessed, in different environments. The effect of these products is difficult to establish categorically since their efficiency seems to be greatly dependent on the environment in which the test is conducted, as well as the design of the test. The aim of this paper is to summarize and evaluate a series of tests conducted with INIPOL EAP22, an oil-soluble fertilizer developed by Elf Aquitaine, and with water-soluble fertilizers. The paper will emphasize treatment failure and success, and point out some necessary prerequisites that must be met for fertilizers to work. 14 refs., 3 figs

  7. Review of arctic Norwegian bioremediation research

    Energy Technology Data Exchange (ETDEWEB)

    Sveum, P

    1991-09-01

    Traditional onshore oil spill clean-up in arctic and sub-arctic parts of Norway involves methods that are both time-consuming and labor-intensive. The applicability of the methods depends both on the environmental constraints of the area and on the availability of man-power. If oil exploration is successful, the exploitation of oil will move north into the arctic regions of Norway. This area is remote, both in terms of accessibility and lack of inhabitants. The threat to natural resources that always accompanies oil activities will move into areas that are considered vulnerable, and which are also highly valued in terms of natural resources. Contingency measures must be adapted both to be feasible and to meet the framework in which they must operate. This situation has increased the focus on alternative methods for oil spill clean-up, especially on shorelines. SINTEF (The Foundation for Scientific and Industrial Research at the Norwegian Institute of Technology) Applied Chemistry has for years evaluated the application of fertilizers as a practical measure in oil spill treatment. Several fertilizers have been assessed, in different environments. The effect of these products is difficult to establish categorically since their efficiency seems to be greatly dependent on the environment in which the test is conducted, as well as the design of the test. The aim of this paper is to summarize and evaluate a series of tests conducted with INIPOL EAP22, an oil-soluble fertilizer developed by Elf Aquitaine, and with water-soluble fertilizers. The paper will emphasize treatment failure and success, and point out some necessary prerequisites that must be met for fertilizers to work. 14 refs., 3 figs.

  8. Arctic tides from GPS on sea ice

    DEFF Research Database (Denmark)

    Kildegaard Rose, Stine; Skourup, Henriette; Forsberg, René

    The presence of sea-ice in the Arctic Ocean plays a significant role in the Arctic climate. Sea ice dampens the ocean tide amplitude with the result that global tidal models which use only astronomical data perform less accurately in the polar regions. This study presents a kinematic processing o......-gauges and altimetry data. Furthermore, we prove that the geodetic reference ellipsoid WGS84, can be interpolated to the tidal defined zero level by applying geophysical corrections to the GPS data....

  9. Approaching a Postcolonial Arctic

    DEFF Research Database (Denmark)

    Jensen, Lars

    2016-01-01

    This article explores different postcolonially configured approaches to the Arctic. It begins by considering the Arctic as a region, an entity, and how the customary political science informed approaches are delimited by their focus on understanding the Arctic as a region at the service...... of the contemporary neoliberal order. It moves on to explore how different parts of the Arctic are inscribed in a number of sub-Arctic nation-state binds, focusing mainly on Canada and Denmark. The article argues that the postcolonial can be understood as a prism or a methodology that asks pivotal questions to all...... approaches to the Arctic. Yet the postcolonial itself is characterised by limitations, not least in this context its lack of interest in the Arctic, and its bias towards conventional forms of representation in art. The article points to the need to develop a more integrated critique of colonial and neo...

  10. Factors Controlling Black Carbon Deposition in Snow in the Arctic

    Science.gov (United States)

    Qi, L.; Li, Q.; He, C.; Li, Y.

    2015-12-01

    This study evaluates the sensitivity of black carbon (BC) concentration in snow in the Arctic to BC emissions, dry deposition and wet scavenging efficiency using the 3D global chemical transport model GEOS-Chem driven by the GEOS-5 meteorological fields. With all improvements, simulated median BC concentration in snow agrees with observation (19.2 ng g-1) within 10%, down from -40% in the default GEOS-Chem. When the previously missed gas flaring emissions (mainly located in Russia) are included, the total BC emission in the Arctic increases by 70%. The simulated BC in snow increases by 1-7 ng g-1, with the largest improvement in Russia. The discrepancy of median BC in snow in the whole Arctic reduces from -40% to -20%. In addition, recent measurements of BC dry deposition velocity suggest that the constant deposition velocity of 0.03 cm s-1 over snow and ice used in GEOS-Chem is too low. We therefore apply the resistance-in-series method to calculate the dry deposition velocity over snow and ice; the resulting velocity ranges from 0.03 to 0.24 cm s-1. However, the simulated total BC deposition flux in the Arctic and BC in snow do not change, because the increased dry deposition flux is compensated by a decreased wet deposition flux. The fraction of dry deposition in total deposition nevertheless increases from 16% to 25%. This may affect the mixing of BC and snow particles and further affect the radiative forcing of BC deposited in snow. Finally, we reduced the scavenging efficiency of BC in mixed-phase clouds to account for the effect of the Wegener-Bergeron-Findeisen (WBF) process, based on recent observations. The simulated BC concentration in snow increases by 10-100%, with the largest increase in Greenland (100%), Tromsø (50%), Alaska (40%), and the Canadian Arctic (30%). Annual BC loading in the Arctic increases from 0.25 to 0.43 mg m-2 and the lifetime of BC increases from 9.2 to 16.3 days. This indicates that BC simulation in the Arctic is really sensitive to

  11. Arctic Submarine Slope Stability

    Science.gov (United States)

    Winkelmann, D.; Geissler, W.

    2010-12-01

    the consequence. Its geometrical configuration and timing are different from submarine slides on other glaciated continental margins. Thus, it raises the question whether slope stability within the Arctic Ocean is governed by processes specific to this environment. The extraordinarily thick slabs (up to 1600 m) that were moved translationally during sliding raise the question of the nature of the weak layers associated with this process. Theories involving higher pore pressure in particular are challenged by this observation, because either extreme pore pressures or alternative explanations (e.g. mineralogical and/or textural) must be considered. To assess the actual submarine slope stability and failure potential in the Arctic Ocean, we propose to drill and recover weak layer material of the HYM from the adjacent intact strata by deep drilling under the framework of the Integrated Ocean Drilling Program. This is the only method to recover weak layer material from the HYM, because the strata are too thick. We further propose to drill into the adjacent deforming slope to identify material properties of the layers acting as detachment and to monitor the deformation.

  12. Computational problems in Arctic Research

    International Nuclear Information System (INIS)

    Petrov, I

    2016-01-01

    This article surveys the main problems in the area of Arctic shelf seismic prospecting and the exploitation of the Northern Sea Route: simulation of the interaction of different ice formations (icebergs, hummocks, and drifting ice floes) with fixed ice-resistant platforms; simulation of the interaction of icebreakers and ice-class vessels with ice formations; modeling of the impact of ice formations on underground pipelines; neutralization of damage to fixed and mobile offshore industrial structures from ice formations; calculation of the strength of ground pipelines; transportation of hydrocarbons by pipeline; the problem of migration of large ice formations; modeling of the formation of ice hummocks on an ice-resistant stationary platform; calculation of the stability of fixed platforms; calculation of dynamic processes in the water and air of the Arctic, with the processing of data and its use to predict the dynamics of ice conditions; simulation of the formation of large icebergs, hummocks, and large ice platforms; calculation of ridging in the dynamics of sea ice; direct and inverse problems of seismic prospecting in the Arctic; and direct and inverse problems of electromagnetic prospecting of the Arctic. All these problems can be solved by up-to-date numerical methods, for example, using the grid-characteristic method. (paper)

  13. EVALUATION OF METHODS FOR ESTIMATING FATIGUE PROPERTIES APPLIED TO STAINLESS STEELS AND ALUMINUM ALLOYS

    Directory of Open Access Journals (Sweden)

    Taylor Mac Intyer Fonseca Junior

    2013-12-01

    Full Text Available This work evaluates seven methods for estimating fatigue properties applied to stainless steels and aluminum alloys. Experimental strain-life curves are compared to the estimations obtained by each method. After applying the seven estimation methods to 14 material conditions, it was found that fatigue life can be estimated with good accuracy only by the Bäumel-Seeger method, and only for the martensitic stainless steel tempered between 300°C and 500°C. The differences between mechanical behavior under monotonic and cyclic loading are probably the reason for the absence of a reliable method for estimating fatigue behavior from monotonic properties for a group of materials.
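The Bäumel-Seeger method named in this record estimates strain-life parameters from monotonic properties via the "uniform material law". A minimal sketch, using the constants commonly quoted for unalloyed and low-alloy steels (the material values Rm = 600 MPa and E = 206 GPa are illustrative assumptions, not the paper's data):

```python
def baumel_seeger_steel(Rm):
    """Uniform material law (Baumel-Seeger) strain-life parameters for
    unalloyed/low-alloy steels, as commonly quoted; Rm in MPa.
    eps_f' = 0.59 assumes the correction factor psi = 1 (valid for
    Rm/E <= 0.003)."""
    return {
        "sigma_f": 1.50 * Rm,  # fatigue strength coefficient (MPa)
        "b": -0.087,           # fatigue strength exponent
        "eps_f": 0.59,         # fatigue ductility coefficient
        "c": -0.58,            # fatigue ductility exponent
    }

def strain_amplitude(p, E, reversals):
    """Coffin-Manson-Basquin: eps_a = sigma_f'/E * (2N)^b + eps_f' * (2N)^c."""
    return (p["sigma_f"] / E) * reversals ** p["b"] + p["eps_f"] * reversals ** p["c"]

p = baumel_seeger_steel(Rm=600.0)
eps = strain_amplitude(p, E=206000.0, reversals=1.0e6)  # strain amplitude at 10^6 reversals
```

Comparing such estimated curves against experimental strain-life data, condition by condition, is essentially the evaluation the record describes.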

  14. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    Science.gov (United States)

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  15. An Aural Learning Project: Assimilating Jazz Education Methods for Traditional Applied Pedagogy

    Science.gov (United States)

    Gamso, Nancy M.

    2011-01-01

    The Aural Learning Project (ALP) was developed to incorporate jazz method components into the author's classical practice and her applied woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical applied curriculum. The components of the…

  16. Complementary variational principle method applied to thermal conductivities of a plasma in a uniform magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Sehgal, A K; Gupta, S C [Punjabi Univ., Patiala (India). Dept. of Physics

    1982-12-14

    The complementary variational principles method (CVP) is applied to the thermal conductivities of a plasma in a uniform magnetic field. The results of computations show that the CVP derived results are very useful.

  17. Factors controlling black carbon distribution in the Arctic

    Science.gov (United States)

    Qi, Ling; Li, Qinbin; Li, Yinrui; He, Cenlin

    2017-01-01

    We investigate the sensitivity of black carbon (BC) in the Arctic, including BC concentration in snow (BCsnow, ng g-1) and surface air (BCair, ng m-3), as well as emissions, dry deposition, and wet scavenging using the global three-dimensional (3-D) chemical transport model (CTM) GEOS-Chem. We find that the model underestimates BCsnow in the Arctic by 40 % on average (median = 11.8 ng g-1). Natural gas flaring substantially increases total BC emissions in the Arctic (by ˜ 70 %). The flaring emissions lead to up to 49 % increases (0.1-8.5 ng g-1) in Arctic BCsnow, dramatically improving model comparison with observations (50 % reduction in discrepancy) near flaring source regions (the western side of the extreme north of Russia). Ample observations suggest that BC dry deposition velocities over snow and ice in current CTMs (0.03 cm s-1 in the GEOS-Chem) are too small. We apply the resistance-in-series method to compute a dry deposition velocity (vd) that varies with local meteorological and surface conditions. The resulting velocity is significantly larger and varies by a factor of 8 in the Arctic (0.03-0.24 cm s-1), which increases the fraction of dry to total BC deposition (16 to 25 %) yet leaves the total BC deposition and BCsnow in the Arctic unchanged. This is largely explained by the offsetting higher dry and lower wet deposition fluxes. Additionally, we account for the effect of the Wegener-Bergeron-Findeisen (WBF) process in mixed-phase clouds, which releases BC particles from condensed phases (water drops and ice crystals) back to the interstitial air and thereby substantially reduces the scavenging efficiency of clouds for BC (by 43-76 % in the Arctic). The resulting BCsnow is up to 80 % higher, BC loading is considerably larger (from 0.25 to 0.43 mg m-2), and BC lifetime is markedly prolonged (from 9 to 16 days) in the Arctic. Overall, flaring emissions increase BCair in the Arctic (by ˜ 20 ng m-3), the updated vd more than halves BCair (by ˜ 20 ng m-3
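The resistance-in-series dry deposition calculation mentioned in this record combines an aerodynamic resistance, a quasi-laminar sublayer resistance, and gravitational settling. A minimal sketch using the standard particle-deposition form found in aerosol textbooks (the resistance and settling values below are illustrative assumptions, not the study's inputs):

```python
def dry_deposition_velocity(ra, rb, vg):
    """Resistance-in-series particle dry deposition velocity:
    vd = vg + 1/(ra + rb + ra*rb*vg), with aerodynamic resistance ra and
    quasi-laminar sublayer resistance rb in s/cm, and gravitational
    settling velocity vg in cm/s (textbook form, e.g. Seinfeld & Pandis)."""
    return vg + 1.0 / (ra + rb + ra * rb * vg)

# Illustrative values: moderate resistances and a small settling velocity
# give a vd within the 0.03-0.24 cm/s Arctic range quoted in the record.
vd = dry_deposition_velocity(ra=10.0, rb=20.0, vg=0.001)
```

Because ra and rb vary with local meteorology and surface type, vd computed this way varies across the Arctic instead of being a single constant such as 0.03 cm/s.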

  18. Wielandt method applied to the diffusion equations discretized by finite element nodal methods

    International Nuclear Information System (INIS)

    Mugica R, A.; Valle G, E. del

    2003-01-01

    Numerical solution of the diffusion equation by means of algorithms and computer programs involves a great number of routines and calculations, which directly affects the execution times of these programs, so results are obtained in relatively long times. This work shows the application of a method that accelerates the convergence of the classic power method and notably reduces the number of iterations needed to obtain reliable results, which means that computing times are greatly reduced. This method, known in the literature as the Wielandt method, has been incorporated into a computer program based on the discretization of the neutron diffusion equations in plate geometry and steady state by polynomial nodal methods. In this work the neutron diffusion equations are described for several energy groups, together with their discretization by means of the so-called physical nodal methods, the quadratic case being illustrated in particular. A model problem widely described in the literature is solved for the physical nodal schemes of degree 1, 2, 3 and 4 in three different ways: a) with the classic power method, b) with the power method under Wielandt acceleration, and c) with the power method under the modified Wielandt acceleration. Results are reported for the model problem as well as for two additional problems known as benchmark problems. The acceleration method can also be implemented for problems with geometries other than the one proposed in this work, and its application can be extended to problems in 2 or 3 dimensions. (Author)
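The Wielandt idea described in this record can be illustrated on a plain matrix eigenvalue problem: instead of iterating with A, one iterates with the shifted inverse (A - sigma*I)^-1, which widens the gap between the dominant and subdominant eigenvalues and so cuts the iteration count. A minimal sketch (the 2x2 matrix and the shift are invented for illustration; a nodal diffusion code would apply the same idea to its discretized operator):

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_it=10000):
    """Classic power method: returns (dominant eigenvalue estimate, iterations)."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for n in range(1, max_it + 1):
        y = A @ x
        lam_new = np.linalg.norm(y)
        x = y / lam_new
        if abs(lam_new - lam) < tol:
            return lam_new, n
        lam = lam_new
    return lam, max_it

def wielandt_iteration(A, shift, tol=1e-10, max_it=10000):
    """Power method applied to (A - shift*I)^-1; the eigenvalue of A closest
    to the shift is recovered as shift + 1/mu. The shift must lie nearer the
    sought eigenvalue than any other for the acceleration to work."""
    M = np.linalg.inv(A - shift * np.eye(A.shape[0]))
    mu, n = power_iteration(M, tol, max_it)
    return shift + 1.0 / mu, n

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # eigenvalues (7 +/- sqrt(5)) / 2
lam_pow, it_pow = power_iteration(A)
lam_wie, it_wie = wielandt_iteration(A, shift=4.0)
```

Both routines converge to the same dominant eigenvalue, but the shifted iteration needs fewer steps; in a production diffusion code the inverse would of course be replaced by a linear solve per iteration.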

  19. What is the method in applying formal methods to PLC applications?

    NARCIS (Netherlands)

    Mader, Angelika H.; Engel, S.; Wupper, Hanno; Kowalewski, S.; Zaytoon, J.

    2000-01-01

    The question we investigate is how to obtain PLC applications with confidence in their proper functioning. Especially, we are interested in the contribution that formal methods can provide for their development. Our maxim is that the place of a particular formal method in the total picture of system

  20. The Arctic Turn

    DEFF Research Database (Denmark)

    Rahbek-Clemmensen, Jon

    2018-01-01

    In October 2006, representatives of the Arctic governments met in Salekhard in northern Siberia for the biennial Arctic Council ministerial meeting to discuss how the council could combat regional climate change, among other issues. While most capitals were represented by their foreign minister......, a few states – Canada, Denmark, and the United States – sent other representatives. There was nothing unusual about the absence of Per Stig Møller, the Danish foreign minister – a Danish foreign minister had only once attended an Arctic Council ministerial meeting (Arctic Council 2016). Møller......’s nonappearance did, however, betray the low status that Arctic affairs had in the halls of government in Copenhagen. Since the end of the Cold War, where Greenland had helped tie Denmark and the US closer together due to its geostrategically important position between North America and the Soviet Union, Arctic...

  1. Collaboration across the Arctic

    DEFF Research Database (Denmark)

    Huppert, Verena Gisela; Chuffart, Romain François R.

    2017-01-01

    The Arctic is witnessing the rise of a new paradigm caused by an increase in pan-Arctic collaborations which co-exist with the region’s traditional linkages with the South. Using an analysis of concrete examples of regional collaborations in the Arctic today in the fields of education, health...... and infrastructure, this paper questions whether pan-Arctic collaborations in the Arctic are more viable than North-South collaborations, and explores the reasons behind and the foreseeable consequences of such collaborations. It shows that the newly emerging East-West paradigm operates at the same time...... as the traditional North-South paradigm, with no signs of the East-West paradigm being more viable in the foreseeable future. However, pan-Arctic collaboration, both due to pragmatic reasons and an increased awareness of similarities, is likely to increase in the future. The increased regionalization process...

  2. Formal methods applied to industrial complex systems implementation of the B method

    CERN Document Server

    Boulanger, Jean-Louis

    2014-01-01

    This book presents real-world examples of formal techniques in an industrial context. It covers formal methods such as SCADE and/or the B Method, in various fields such as railways, aeronautics, and the automotive industry. The purpose of this book is to present a summary of experience on the use of "formal methods" (based on formal techniques such as proof, abstract interpretation and model-checking) in industrial examples of complex systems, based on the experience of people currently involved in the creation and assessment of safety critical system software. The involvement of people from

  3. Impact of the Healthy Foods North nutrition intervention program on Inuit and Inuvialuit food consumption and preparation methods in Canadian Arctic communities.

    Science.gov (United States)

    Kolahdooz, Fariba; Pakseresht, Mohammadreza; Mead, Erin; Beck, Lindsay; Corriveau, André; Sharma, Sangita

    2014-07-04

    The 12-month Healthy Foods North intervention program was developed to improve diet among Inuit and Inuvialuit living in Arctic Canada and assess the impact of the intervention established for the communities. A quasi-experimental study randomly selected men and women (≥19 years of age) in six remote communities in Nunavut and the Northwest Territories. Validated quantitative food frequency and adult impact questionnaires were used. Four communities received the intervention and two communities served as delayed intervention controls. Pre- and post-intervention changes in frequency of/total intake of de-promoted food groups and healthiness of cooking methods were determined. The impact of the intervention was assessed using analysis of covariance (ANCOVA). Post-intervention data were analysed in the intervention (n = 221) and control (n = 111) communities, with participant retention rates of 91% for Nunavut and 83% for the Northwest Territories. There was a significant decrease in de-promoted foods, such as high fat meats (-27.9 g) and high fat dairy products (-19.8 g) among intervention communities (all p ≤ 0.05). The use of healthier preparation methods significantly increased (14.7%) in intervention communities relative to control communities. This study highlights the importance of using a community-based, multi-institutional nutrition intervention program to decrease the consumption of unhealthy foods and the use of unhealthy food preparation methods.

  4. Arctic wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Peltola, E. [Kemijoki Oy (Finland); Holttinen, H.; Marjaniemi, M. [VTT Energy, Espoo (Finland); Tammelin, B. [Finnish Meteorological Institute, Helsinki (Finland)

    1998-12-31

    Arctic wind energy research was aimed at adapting existing wind technologies to suit the arctic climatic conditions in Lapland. Project research work included meteorological measurements, instrument development, development of a blade heating system for wind turbines, load measurements and modelling of ice induced loads on wind turbines, together with the development of operation and maintenance practices in arctic conditions. As a result the basis now exists for technically feasible and economically viable wind energy production in Lapland. New and marketable products, such as blade heating systems for wind turbines and meteorological sensors for arctic conditions, with substantial export potential, have also been developed. (orig.)

  5. Arctic wind energy

    International Nuclear Information System (INIS)

    Peltola, E.; Holttinen, H.; Marjaniemi, M.; Tammelin, B.

    1998-01-01

    Arctic wind energy research was aimed at adapting existing wind technologies to suit the arctic climatic conditions in Lapland. Project research work included meteorological measurements, instrument development, development of a blade heating system for wind turbines, load measurements and modelling of ice induced loads on wind turbines, together with the development of operation and maintenance practices in arctic conditions. As a result the basis now exists for technically feasible and economically viable wind energy production in Lapland. New and marketable products, such as blade heating systems for wind turbines and meteorological sensors for arctic conditions, with substantial export potential, have also been developed. (orig.)

  6. A new clamp method for firing bricks | Obeng | Journal of Applied ...

    African Journals Online (AJOL)

    A new clamp method for firing bricks. ... Journal of Applied Science and Technology ... To overcome these operational deficiencies, a new method of firing bricks that uses a brick clamp technique that incorporates a clamp wall of 60 cm thickness, a six-tier approach of sealing the top of the clamp (by combination of green bricks) ...

  7. A method to evaluate performance reliability of individual subjects in laboratory research applied to work settings.

    Science.gov (United States)

    1978-10-01

    This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...

  8. Determination methods for plutonium as applied in the field of reprocessing

    International Nuclear Information System (INIS)

    1983-07-01

    The papers presented report on Pu-determination methods, which are routinely applied in process control, and also on new developments which could supercede current methods either because they are more accurate or because they are simpler and faster. (orig./DG) [de

  9. Absolute Geostrophic Velocity Inverted from the Polar Science Center Hydrographic Climatology (PHC3.0) of the Arctic Ocean with the P-Vector Method (NCEI Accession 0156425)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dataset (called PHC-V) comprises 3D gridded climatological fields of absolute geostrophic velocity of the Arctic Ocean inverted from the Polar science center...

  10. Water Permeability of Pervious Concrete Is Dependent on the Applied Pressure and Testing Methods

    Directory of Open Access Journals (Sweden)

    Yinghong Qin

    2015-01-01

    Full Text Available The falling head method (FHM) and the constant head method (CHM) are used to test the water permeability of permeable concrete, applying different water heads to the testing samples. The results indicate that the apparent permeability of pervious concrete decreases with the applied water head. The results also demonstrate that the permeability measured with the FHM is lower than that measured with the CHM. The fundamental difference between the CHM and FHM is examined from the theory of fluid flow through porous media. The testing results suggest that the water permeability of permeable concrete should be reported together with the applied pressure and the associated testing method.
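The two laboratory methods compared in this record reduce to two standard Darcy-law formulas. A minimal sketch (the sample dimensions and head readings below are illustrative assumptions, not the paper's data):

```python
import math

def k_constant_head(Q, L, A, h):
    """Constant head method (CHM): Darcy's law, k = Q*L/(A*h), with flow rate
    Q (cm^3/s), sample length L (cm), cross-section A (cm^2), and head h (cm);
    k comes out in cm/s."""
    return Q * L / (A * h)

def k_falling_head(a, L, A, t, h1, h2):
    """Falling head method (FHM): k = (a*L)/(A*t) * ln(h1/h2), with standpipe
    area a (cm^2) and the head falling from h1 to h2 (cm) over time t (s)."""
    return (a * L) / (A * t) * math.log(h1 / h2)

# Illustrative readings for one pervious-concrete sample
k_chm = k_constant_head(Q=10.0, L=15.0, A=78.5, h=20.0)
k_fhm = k_falling_head(a=4.9, L=15.0, A=78.5, t=30.0, h1=50.0, h2=30.0)
```

Because the two formulas probe the sample under different (and, for the FHM, continuously decreasing) pressures, they can legitimately yield different apparent permeabilities for the same specimen, which is the record's central point.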

  11. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    Science.gov (United States)

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
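One of the classic MPPT methods that such simulations compare is perturb-and-observe. A minimal sketch of that algorithm on a toy single-peak power curve (the curve and its 17 V maximum are invented stand-ins for a datasheet-derived panel model, not the paper's):

```python
def perturb_and_observe(power, v0=10.0, dv=0.5, steps=60):
    """Hill-climbing MPPT: perturb the operating voltage by a fixed step and
    keep the direction while power rises; reverse it when power falls.
    Converges to (and then oscillates around) the maximum power point."""
    v, p = v0, power(v0)
    step = dv
    for _ in range(steps):
        v_next = v + step
        p_next = power(v_next)
        if p_next < p:
            step = -step  # power dropped: reverse the perturbation direction
        v, p = v_next, p_next
    return v

# Toy concave power curve with its maximum power point at 17 V (illustrative).
v_mpp = perturb_and_observe(lambda v: 150.0 - 0.5 * (v - 17.0) ** 2)
```

In a study like the one above, the `power` callback would instead be the solar panel model fitted from the manufacturer's datasheet, evaluated under time-varying irradiation and temperature.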

  12. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    Directory of Open Access Journals (Sweden)

    Javier Cubas

    2015-01-01

    Full Text Available A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers’ datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.

  13. Diamond difference method with hybrid angular quadrature applied to neutron transport problems

    International Nuclear Information System (INIS)

    Zani, Jose H.; Barros, Ricardo C.; Alves Filho, Hermes

    2005-01-01

    In this work we present results for calculations of the disadvantage factor in thermal nuclear reactor physics. We use the one-group discrete ordinates (SN) equations to mathematically model the flux distributions in slab lattices. We apply the diamond difference method with a source iteration scheme to numerically solve the discretized system of equations. Special interface conditions are used to describe the method with a hybrid angular quadrature. We show numerical results to illustrate the accuracy of the hybrid method. (author)
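The diamond difference scheme with source iteration named in this record can be sketched on a simple one-group slab problem. In each cell, mu*(psi_out - psi_in)/dx + sig_t*psi_bar = q with psi_bar = (psi_in + psi_out)/2, swept in the direction of travel. The problem data below (a homogeneous 4 mfp slab with a flat isotropic source, vacuum boundaries, S2 quadrature) are illustrative assumptions, not the paper's lattice or its hybrid quadrature:

```python
import math

def dd_slab_s2(width=4.0, cells=40, sig_t=1.0, sig_s=0.5, src=1.0,
               tol=1e-8, max_it=500):
    """One-group slab-geometry S2 transport solved by diamond differencing
    with source iteration; vacuum boundaries, flat isotropic source."""
    dx = width / cells
    directions = ((-1.0 / math.sqrt(3.0), 1.0), (1.0 / math.sqrt(3.0), 1.0))
    phi = [0.0] * cells                           # scalar flux per cell
    for _ in range(max_it):
        phi_new = [0.0] * cells
        for mu, w in directions:
            order = range(cells) if mu > 0 else range(cells - 1, -1, -1)
            psi_in = 0.0                          # vacuum inflow
            for i in order:
                q = 0.5 * (sig_s * phi[i] + src)  # isotropic cell source
                coef = abs(mu) / dx
                # diamond difference cell balance solved for the outgoing flux
                psi_out = (q + (coef - 0.5 * sig_t) * psi_in) / (coef + 0.5 * sig_t)
                phi_new[i] += w * 0.5 * (psi_in + psi_out)
                psi_in = psi_out
        converged = max(abs(a - b) for a, b in zip(phi_new, phi)) < tol
        phi = phi_new
        if converged:
            break
    return phi

flux = dd_slab_s2()
```

The resulting flux is symmetric, peaks at the slab center, and stays below the infinite-medium value src/(sig_t - sig_s) = 2 because of leakage; a lattice calculation of the disadvantage factor would use heterogeneous cross sections and a finer quadrature on the same skeleton.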

  14. Proposal and Evaluation of Management Method for College Mechatronics Education Applying the Project Management

    Science.gov (United States)

    Ando, Yoshinobu; Eguchi, Yuya; Mizukawa, Makoto

    In this research, we proposed and evaluated a management method for college mechatronics education that applies project management techniques. We applied our management method to the seminar “Microcomputer Seminar” for 3rd-grade students of the Department of Electrical Engineering, Shibaura Institute of Technology, and successfully managed the seminar in 2006. Questionnaire responses gave our management method a good evaluation.

  15. Perspective for applying traditional and innovative teaching and learning methods to nurses continuing education

    OpenAIRE

    Bendinskaitė, Irmina

    2015-01-01

    Bendinskaitė I. Perspective for applying traditional and innovative teaching and learning methods to nurse’s continuing education, magister thesis / supervisor Assoc. Prof. O. Riklikienė; Departament of Nursing and Care, Faculty of Nursing, Lithuanian University of Health Sciences. – Kaunas, 2015, – p. 92 The purpose of this study was to investigate traditional and innovative teaching and learning methods perspective to nurse’s continuing education. Material and methods. In a period fro...

  16. White Arctic vs. Blue Arctic: Making Choices

    Science.gov (United States)

    Pfirman, S. L.; Newton, R.; Schlosser, P.; Pomerance, R.; Tremblay, B.; Murray, M. S.; Gerrard, M.

    2015-12-01

    As the Arctic warms and shifts from icy white to watery blue and resource-rich, tension is arising between the desire to restore and sustain an ice-covered Arctic and stakeholder communities that hope to benefit from an open Arctic Ocean. If emissions of greenhouse gases to the atmosphere continue on their present trend, most of the summer sea ice cover is projected to be gone by mid-century, i.e., by the time that few if any interventions could be in place to restore it. There are many local as well as global reasons for ice restoration, including for example, preserving the Arctic's reflectivity, sustaining critical habitat, and maintaining cultural traditions. However, due to challenges in implementing interventions, it may take decades before summer sea ice would begin to return. This means that future generations would be faced with bringing sea ice back into regions where they have not experienced it before. While there is likely to be interest in taking action to restore ice for the local, regional, and global services it provides, there is also interest in the economic advancement that open access brings. Dealing with these emerging issues and new combinations of stakeholders needs new approaches - yet environmental change in the Arctic is proceeding quickly and will force the issues sooner rather than later. In this contribution we examine challenges, opportunities, and responsibilities related to exploring options for restoring Arctic sea ice and potential pathways for their implementation. Negotiating responses involves international strategic considerations including security and governance, meaning that along with local communities, state decision-makers, and commercial interests, national governments will have to play central roles. While these issues are currently playing out in the Arctic, similar tensions are also emerging in other regions.

  17. Cluster detection methods applied to the Upper Cape Cod cancer data

    Directory of Open Access Journals (Sweden)

    Ozonoff David

    2005-09-01

Full Text Available Abstract Background A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. Methods We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. Results The three different latency assumptions produced three different spatial patterns of cases and controls. For the 20 year latency assumption, all three methods generally concur. However, for the 15 year latency and no-latency assumptions, the methods produce different results when testing for global clustering. Conclusion Comparative analyses of real data sets by different statistical methods provide insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.
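Of the three methods, Kulldorff's spatial scan statistic is the most widely implemented. A minimal Poisson-model sketch (illustrative only, not the study's code) scans circles of varying radius around each case location and keeps the circle with the highest log-likelihood ratio:

```python
import numpy as np

def poisson_llr(c, e, C):
    """Kulldorff-style Poisson log-likelihood ratio for a zone with
    observed count c and expected count e, out of C total cases;
    only zones with more cases than expected (c > e) score."""
    if e <= 0.0 or c <= e:
        return 0.0
    llr = c * np.log(c / e)
    if C - c > 0:
        llr += (C - c) * np.log((C - c) / (C - e))
    return llr

def spatial_scan(points, cases, expected, radii):
    """Scan circles of the given radii centred on every point and return
    (best llr, (centre index, radius)) for the most likely cluster."""
    C = cases.sum()
    best_llr, best_zone = 0.0, None
    for i, centre in enumerate(points):
        d = np.linalg.norm(points - centre, axis=1)
        for r in radii:
            inside = d <= r
            llr = poisson_llr(cases[inside].sum(), expected[inside].sum(), C)
            if llr > best_llr:
                best_llr, best_zone = llr, (i, r)
    return best_llr, best_zone
```

In practice the significance of the best zone is then assessed by Monte Carlo replication of the scan under the null hypothesis.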

  18. Apparatus and method for applying an end plug to a fuel rod tube end

    International Nuclear Information System (INIS)

    Rieben, S.L.; Wylie, M.E.

    1987-01-01

    An apparatus is described for applying an end plug to a hollow end of a nuclear fuel rod tube, comprising: support means mounted for reciprocal movement between remote and adjacent positions relative to a nuclear fuel rod tube end to which an end plug is to be applied; guide means supported on the support means for movement; and drive means coupled to the support means and being actuatable for movement between retracted and extended positions for reciprocally moving the support means between its respective remote and adjacent positions. A method for applying an end plug to a hollow end of a nuclear fuel rod tube is also described

  19. Method of levelized discounted costs applied in economic evaluation of nuclear power plant project

    International Nuclear Information System (INIS)

    Tian Li; Wang Yongqing; Liu Jingquan; Guo Jilin; Liu Wei

    2000-01-01

The main methods in common use for the economic evaluation of bids are introduced, and the characteristics of the levelized discounted cost method and its application are presented. The method is applied to the cost calculation in the economic evaluation of a 200 MW nuclear heating reactor. The results indicate that the levelized discounted cost method is simple and feasible, and is well suited to the economic evaluation of a variety of cases; its use in national economic evaluations is suggested
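The core of the levelized discounted cost method is a single ratio: lifetime costs and lifetime output are each discounted to present value, and their quotient is the constant unit cost that recovers all costs over the project life. A minimal sketch (the cash-flow layout is illustrative, not taken from the study):

```python
def levelized_cost(costs, outputs, rate):
    """Levelized discounted cost: the ratio of discounted lifetime costs
    to discounted lifetime output.

    costs, outputs -- per-year values, year 0 first
    rate           -- annual discount rate (e.g. 0.08)
    """
    disc = [(1 + rate) ** -t for t in range(len(costs))]
    return (sum(c * d for c, d in zip(costs, disc))
            / sum(q * d for q, d in zip(outputs, disc)))
```

Because capital costs are front-loaded while output accrues later, a higher discount rate raises the levelized cost.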

  20. Local regression type methods applied to the study of geophysics and high frequency financial data

    Science.gov (United States)

    Mariani, M. C.; Basu, K.

    2014-09-01

In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to California earthquake geological data. A spatial analysis was performed, showing that the estimation of the earthquake magnitude at a fixed location is very accurate, up to a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is preferable to the Loess method. Previous works studied these data with time series analysis; in this paper our local regression models perform a spatial analysis of the geophysics data, providing different information. For the high-frequency data, our models estimate the curve of best fit, where the data are time-dependent.
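Lowess fits, at each evaluation point, a straight line by weighted least squares over the nearest neighbours, with weights decaying by the tricube kernel. A compact self-contained sketch (not the authors' implementation):

```python
import numpy as np

def lowess(x, y, frac=0.3):
    """Minimal Lowess: at each x[i], fit a line by weighted least squares
    over the nearest frac*n points, using tricube distance weights."""
    n = len(x)
    k = max(3, int(frac * n))
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]             # k nearest neighbours
        h = d[idx].max() or 1.0             # local bandwidth
        w = (1.0 - (d[idx] / h) ** 3) ** 3  # tricube weights
        sw = np.sqrt(w)
        A = np.vstack([np.ones(k), x[idx]]).T
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted
```

Full Lowess additionally iterates the fit with robustness weights to down-weight outliers; Loess generalizes the local fit to higher-degree polynomials and multiple predictors.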

  1. Using an Explicit Emission Tagging Method in Global Modeling of Source-Receptor Relationships for Black Carbon in the Arctic: Variations, Sources and Transport Pathways

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hailong; Rasch, Philip J.; Easter, Richard C.; Singh, Balwinder; Zhang, Rudong; Ma, Po-Lun; Qian, Yun; Ghan, Steven J.; Beagley, Nathaniel

    2014-11-27

    We introduce an explicit emission tagging technique in the Community Atmosphere Model to quantify source-region-resolved characteristics of black carbon (BC), focusing on the Arctic. Explicit tagging of BC source regions without perturbing the emissions makes it straightforward to establish source-receptor relationships and transport pathways, providing a physically consistent and computationally efficient approach to produce a detailed characterization of the destiny of regional BC emissions and the potential for mitigation actions. Our analysis shows that the contributions of major source regions to the global BC burden are not proportional to the respective emissions due to strong region-dependent removal rates and lifetimes, while the contributions to BC direct radiative forcing show a near-linear dependence on their respective contributions to the burden. Distant sources contribute to BC in remote regions mostly in the mid- and upper troposphere, having much less impact on lower-level concentrations (and deposition) than on burden. Arctic BC concentrations, deposition and source contributions all have strong seasonal variations. Eastern Asia contributes the most to the wintertime Arctic burden. Northern Europe emissions are more important to both surface concentration and deposition in winter than in summer. The largest contribution to Arctic BC in the summer is from Northern Asia. Although local emissions contribute less than 10% to the annual mean BC burden and deposition within the Arctic, the per-emission efficiency is much higher than for major non-Arctic sources. The interannual variability (1996-2005) due to meteorology is small in annual mean BC burden and radiative forcing but is significant in yearly seasonal means over the Arctic. When a slow aging treatment of BC is introduced, the increase of BC lifetime and burden is source-dependent. Global BC forcing-per-burden efficiency also increases primarily due to changes in BC vertical distributions. 

  2. Arctic bioremediation -- A case study

    International Nuclear Information System (INIS)

    Smallbeck, D.R.; Ramert, P.C.; Liddell, B.V.

    1994-01-01

    This paper discusses the use of bioremediation as an effective method to clean up diesel-range hydrocarbon spills in northern latitudes. The results of a laboratory study of microbial degradation of hydrocarbons under simulated arctic conditions showed that bioremediation can be effective in cold climates and led to the implementation of a large-scale field program. The results of 3 years of field testing have led to a significant reduction in diesel-range hydrocarbon concentrations in the contaminated area

  3. Arctic circulation regimes.

    Science.gov (United States)

    Proshutinsky, Andrey; Dukhovskoy, Dmitry; Timmermans, Mary-Louise; Krishfield, Richard; Bamber, Jonathan L

    2015-10-13

Between 1948 and 1996, mean annual environmental parameters in the Arctic experienced a well-pronounced decadal variability with two basic circulation patterns: cyclonic and anticyclonic, alternating at 5 to 7 year intervals. During cyclonic regimes, low sea-level atmospheric pressure (SLP) dominated over the Arctic Ocean, driving sea ice and the upper ocean counterclockwise; the Arctic atmosphere was relatively warm and humid, and the freshwater flux from the Arctic Ocean towards the subarctic seas was intensified. By contrast, during anticyclonic circulation regimes, high SLP dominated, driving sea ice and the upper ocean clockwise; meanwhile, the atmosphere was cold and dry and the freshwater flux from the Arctic to the subarctic seas was reduced. Since 1997, however, the Arctic system has been under the influence of an anticyclonic circulation regime (17 years) with a set of environmental parameters that are atypical for this regime. We discuss a hypothesis explaining the causes and mechanisms regulating the intensity and duration of Arctic circulation regimes, and speculate how changes in freshwater fluxes from the Arctic Ocean and Greenland impact environmental conditions and interrupt their decadal variability. © 2015 The Authors.

  4. Arctic carbon cycling

    NARCIS (Netherlands)

    Christensen, Torben R; Rysgaard, SØREN; Bendtsen, JØRGEN; Else, Brent; Glud, Ronnie N; van Huissteden, J.; Parmentier, F.J.W.; Sachs, Torsten; Vonk, J.E.

    2017-01-01

    The marine Arctic is considered a net carbon sink, with large regional differences in uptake rates. More regional modelling and observational studies are required to reduce the uncertainty among current estimates. Robust projections for how the Arctic Ocean carbon sink may evolve in the future are

  5. Method to detect substances in a body and device to apply the method

    International Nuclear Information System (INIS)

    Voigt, H.

    1978-01-01

The method and the measuring arrangement serve to localize pellets doped with Gd2O3 lying between UO2 pellets within a reactor fuel rod. The fuel rod passes through a homogeneous magnetic field generated between two pole shoes. The magnetic stray field caused by the doping substance is then measured by means of Hall probes (e.g. InAs) for quantitative discrimination from UO2. The position of the Gd2O3-doped pellets is determined by moving the fuel rod through the magnetic field in a direction perpendicular to the homogeneous field. The measuring signal arises from the different magnetic susceptibility of Gd2O3 with respect to UO2. (DG) [de

  6. Arctic Haze Analysis

    Science.gov (United States)

    Mei, Linlu; Xue, Yong

    2013-04-01

The Arctic atmosphere is perturbed by natural and anthropogenic aerosol sources known as Arctic haze, first observed in 1956 by J. Murray Mitchell in Alaska (Mitchell, 1956). Pacyna and Shaw (1992) summarized Arctic haze as a mixture of anthropogenic and natural pollutants from a variety of sources in different geographical areas at altitudes from 2 to 4 or 5 km, while layers of polluted air at altitudes below 2.5 km mainly originate from episodic transport from anthropogenic sources situated closer to the Arctic. Arctic haze in the lower troposphere shows a very strong seasonal variation, characterized by a summer minimum and a winter maximum, in Alaska (Barrie, 1986; Shaw, 1995) and other Arctic regions (Xie and Hopke, 1999). Its composition includes an anthropogenic factor dominated by metallic species such as Pb, Zn, V, As, Sb and In, together with natural sources such as a sea-salt factor consisting mainly of Cl, Na and K (Xie and Hopke, 1999) and dust containing Fe, Al and other elements (Rahn et al., 1977). Black carbon and soot can also be included during summer because of mixed-in smoke from wildfires. The Arctic air mass is a unique meteorological feature of the troposphere characterized by sub-zero temperatures, little precipitation, stable stratification that prevents strong vertical mixing, and low levels of solar radiation (Barrie, 1986); as a result, fewer pollutants are scavenged by precipitation, the major removal pathway for particulates from the Arctic atmosphere (Shaw, 1981, 1995; Heintzenberg and Larssen, 1983). Given these meteorological conditions, Eurasia is the main contributor of Arctic pollutants, with the strong wintertime transport into the Arctic from Eurasia driven by the climatologically persistent Siberian high-pressure region (Barrie, 1986). The paper addresses the atmospheric characteristics of Arctic haze by comparing clear days and haze days using different datasets

  7. Arctic Sea Level Reconstruction

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde

Reconstruction of historical Arctic sea level is very difficult due to the limited coverage and quality of tide gauge and altimetry data in the area. This thesis addresses many of these issues, and discusses strategies to help achieve a stable and plausible reconstruction of Arctic sea level from...... 1950 to today. The primary record of historical sea level, on the order of several decades to a few centuries, is tide gauges. Tide gauge records from around the world are collected in the Permanent Service for Mean Sea Level (PSMSL) database, which includes data along the Arctic coasts. A reasonable...... amount of data is available along the Norwegian and Russian coasts since 1950, and most published research on Arctic sea level extends cautiously from these areas. Very little tide gauge data is available elsewhere in the Arctic, and records of a length of several decades, as generally recommended for sea...

  8. Research with Arctic peoples

    DEFF Research Database (Denmark)

    Smith, H Sally; Bjerregaard, Peter; Chan, Hing Man

    2006-01-01

    Arctic peoples are spread over eight countries and comprise 3.74 million residents, of whom 9% are indigenous. The Arctic countries include Canada, Finland, Greenland (Denmark), Iceland, Norway, Russia, Sweden and the United States. Although Arctic peoples are very diverse, there are a variety...... of environmental and health issues that are unique to the Arctic regions, and research exploring these issues offers significant opportunities, as well as challenges. On July 28-29, 2004, the National Heart, Lung, and Blood Institute and the Canadian Institutes of Health Research co-sponsored a working group...... entitled "Research with Arctic Peoples: Unique Research Opportunities in Heart, Lung, Blood and Sleep Disorders". The meeting was international in scope with investigators from Greenland, Iceland and Russia, as well as Canada and the United States. Multiple health agencies from Canada and the United States...

  9. Criticality analysis of thermal reactors for two energy groups applying Monte Carlo and neutron Albedo method

    International Nuclear Information System (INIS)

    Terra, Andre Miguel Barge Pontes Torres

    2005-01-01

The Albedo method applied to criticality calculations for nuclear reactors is characterized by following the neutron currents, allowing detailed analysis of the physical phenomena involved in the interaction of neutrons with the core-reflector set through the determination of the probabilities of reflection, absorption and transmission, and hence a detailed appreciation of the variation of the effective neutron multiplication factor, keff. In the present work, motivated by the excellent results presented in dissertations applying the method to thermal reactors and shielding, the Albedo method is described for the criticality analysis of thermal reactors using two energy groups, admitting variable core coefficients for each re-entrant current. Using the Monte Carlo code KENO IV, the relation between the total fraction of neutrons absorbed in the reactor core and the fraction of neutrons absorbed in the core without ever having entered the reflector was analyzed. As references for comparison and analysis of the results obtained by the Albedo method, the one-dimensional deterministic code ANISN (ANIsotropic SN transport code) and the diffusion method were used. The keff results determined by the Albedo method for the type of reactor analyzed showed excellent agreement: relative errors in keff were smaller than 0.78% with respect to ANISN and smaller than 0.35% with respect to the diffusion method, showing the effectiveness of the Albedo method applied to criticality analysis. The ease of application, simplicity and clarity of the Albedo method make it a valuable instrument for neutronic calculations applied to non-multiplying and multiplying media. (author)

  10. Applying electric field to charged and polar particles between metallic plates: extension of the Ewald method.

    Science.gov (United States)

    Takae, Kyohei; Onuki, Akira

    2013-09-28

We develop an efficient Ewald method of molecular dynamics simulation for calculating the electrostatic interactions among charged and polar particles between parallel metallic plates, where we may apply an electric field of arbitrary size. We use the fact that the potential from the surface charges is equivalent to the sum of those from image charges and dipoles located outside the cell. We present simulation results on boundary effects of charged and polar fluids, formation of ionic crystals, and formation of dipole chains, where the applied field and the image interaction are crucial. For polar fluids, we find a large deviation from the classical Lorentz-field relation between the local field and the applied field, due to pair correlations along the applied field. As general aspects, we clarify the difference between the potential-fixed and the charge-fixed boundary conditions and examine the relationship between the discrete particle description and the continuum electrostatics.

  11. Non-invasive imaging methods applied to neo- and paleo-ontological cephalopod research

    Science.gov (United States)

    Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.

    2014-05-01

Several non-invasive methods are common practice in natural sciences today. Here we present how they can be applied and contribute to current topics in cephalopod (paleo-) biology. Different methods will be compared in terms of the time necessary to acquire the data, amount of data, accuracy/resolution, minimum/maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the methods is seen in morphometry and volumetry of cephalopod shells. In particular we present a method for precise buoyancy calculation. To this end, cephalopod shells were scanned together with different reference bodies, an approach developed in the medical sciences. It is necessary to know the volume of the reference bodies, which should have absorption properties similar to those of the object of interest. Exact volumes can be obtained from surface scanning. Depending on the dimensions of the study object, different computed tomography techniques were applied.

  12. Applying terminological methods and description logic for creating and implementing and ontology on inhibition

    DEFF Research Database (Denmark)

    Zambach, Sine; Madsen, Bodil Nistrup

    2009-01-01

    By applying formal terminological methods to model an ontology within the domain of enzyme inhibition, we aim to clarify concepts and to obtain consistency. Additionally, we propose a procedure for implementing this ontology in OWL with the aim of obtaining a strict structure which can form...

  13. Method of applying single higher order polynomial basis function over multiple domains

    CSIR Research Space (South Africa)

    Lysko, AA

    2010-03-01

Full Text Available A novel method has been devised whereby one set of higher-order polynomial-based basis functions can be applied over several wire segments, thus making it possible to decouple the number of unknowns from the number of segments, and so from the geometrical...

  14. Applied probabilistic methods in the field of reactor safety in Germany

    International Nuclear Information System (INIS)

    Heuser, F.W.

    1982-01-01

Some aspects of applied reliability and risk analysis methods in nuclear safety, and the present role of both in Germany, are discussed. First, some comments on the status and applications of reliability analysis are given. Second, some conclusions that can be drawn from previous work on the German Risk Study are summarized. (orig.)

  15. 21 CFR 111.320 - What requirements apply to laboratory methods for testing and examination?

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What requirements apply to laboratory methods for testing and examination? 111.320 Section 111.320 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CURRENT GOOD MANUFACTURING...

  16. Splendor and misery of the distorted wave method applied to heavy ions transfer reactions

    International Nuclear Information System (INIS)

    Mermaz, M.C.

    1979-01-01

The successes and failures of the Distorted Wave Method (DWM) applied to heavy-ion transfer reactions are illustrated by a few examples: one- and multi-nucleon transfer reactions induced by 15N and 18O on a 28Si target nucleus, performed in the vicinity of the Coulomb barrier at 44 and 56 MeV incident energy, respectively

  17. A nodal method applied to a diffusion problem with generalized coefficients

    International Nuclear Information System (INIS)

    Laazizi, A.; Guessous, N.

    1999-01-01

In this paper, we consider a second-order neutron diffusion problem with coefficients in L∞(Ω). A nodal method of the lowest order is applied to approximate the problem's solution. The approximation uses special basis functions in which the coefficients appear. The rate of convergence obtained is O(h²) in L²(Ω), with a free rectangular triangulation. (authors)

  18. Trends in Research Methods in Applied Linguistics: China and the West.

    Science.gov (United States)

    Yihong, Gao; Lichun, Li; Jun, Lu

    2001-01-01

    Examines and compares current trends in applied linguistics (AL) research methods in China and the West. Reviews AL articles in four Chinese journals, from 1978-1997, and four English journals from 1985 to 1997. Articles are categorized and subcategorized. Results show that in China, AL research is heading from non-empirical toward empirical, with…

  19. Critical path method applied to research project planning: Fire Economics Evaluation System (FEES)

    Science.gov (United States)

    Earl B. Anderson; R. Stanton Hales

    1986-01-01

    The critical path method (CPM) of network analysis (a) depicts precedence among the many activities in a project by a network diagram; (b) identifies critical activities by calculating their starting, finishing, and float times; and (c) displays possible schedules by constructing time charts. CPM was applied to the development of the Forest Service's Fire...
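The three steps of CPM, a forward pass for earliest times, a backward pass for latest times, and identifying zero-float activities as critical, can be sketched in a few lines. The task network below is hypothetical, not the FEES project network:

```python
from collections import defaultdict

def critical_path(tasks):
    """Forward/backward pass of the Critical Path Method.

    tasks -- {name: (duration, [predecessors])}, assumed acyclic
    Returns (project duration, set of critical task names), where a task
    is critical when its float (latest start - earliest start) is zero.
    """
    # Topological order via repeated scan (fine for small project networks)
    order, placed = [], set()
    while len(order) < len(tasks):
        for name, (_, preds) in tasks.items():
            if name not in placed and all(p in placed for p in preds):
                order.append(name)
                placed.add(name)
    # Forward pass: earliest start/finish times
    es, ef = {}, {}
    for name in order:
        dur, preds = tasks[name]
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    duration = max(ef.values())
    # Backward pass: latest start/finish times
    succs = defaultdict(list)
    for name, (_, preds) in tasks.items():
        for p in preds:
            succs[p].append(name)
    ls, lf = {}, {}
    for name in reversed(order):
        lf[name] = min((ls[s] for s in succs[name]), default=duration)
        ls[name] = lf[name] - tasks[name][0]
    critical = {n for n in tasks if ls[n] == es[n]}
    return duration, critical
```

The float of a non-critical task tells the planner how far it can slip without delaying the project, which is what makes CPM useful for constructing the time charts the abstract mentions.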

  20. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services.

    Science.gov (United States)

    Rajabi, A; Dabiri, A

    2012-01-01

Activity Based Costing (ABC) is one of the new costing methodologies that began appearing in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalization. Second, activity centers were defined by the activity analysis method. Third, the costs of the administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of the activity centers' services by the cost objectives, the cost price of medical services was calculated. The cost price from the ABC method differs significantly from the tariff method. In addition, the high share of indirect costs in the hospital indicates that resource capacities are not used properly. The cost price of remedial services is not properly calculated by the tariff method when compared with the ABC method: ABC calculates cost price through suitable allocation mechanisms, whereas the tariff method is based on a fixed price. In addition, ABC provides useful information about the amount and composition of the cost price of services.
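The allocation step can be illustrated with a toy sketch: administrative costs are pushed down to a service's activity centre in proportion to a cost driver, then the pooled cost is divided by the service volume. All names and figures are illustrative, not the hospital's data:

```python
def abc_cost_price(direct_cost, admin_costs, driver_share, volume):
    """Activity-based cost price for one service line (illustrative).

    direct_cost  -- direct cost traced to the service's activity centre
    admin_costs  -- {admin centre: total cost} to be allocated
    driver_share -- {admin centre: this service's share of that centre's
                     cost driver, e.g. fraction of staff hours used}
    volume       -- number of service units delivered
    """
    allocated = sum(cost * driver_share[centre]
                    for centre, cost in admin_costs.items())
    return (direct_cost + allocated) / volume
```

A tariff, by contrast, is a fixed per-unit price that ignores how much of each resource the service actually consumed, which is why the two figures diverge.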

  1. The effect of misleading surface temperature estimations on the sensible heat fluxes at a high Arctic site – the Arctic Turbulence Experiment 2006 on Svalbard (ARCTEX-2006)

    Directory of Open Access Journals (Sweden)

    J. Lüers

    2010-01-01

Full Text Available The observed rapid climate warming in the Arctic requires improvements in permafrost and carbon cycle monitoring, to be accomplished by setting up long-term observation sites with high-quality in-situ measurements of turbulent heat, water and carbon fluxes as well as soil physical parameters in Arctic landscapes. But accurate quantification and well-adapted parameterizations of turbulent fluxes in polar environments present fundamental problems in soil-snow-ice-vegetation-atmosphere interaction studies. One of these problems is the accurate estimation of the surface or aerodynamic temperature T(0) required to force most of the bulk aerodynamic formulae currently used. Results from the Arctic Turbulence Experiment (ARCTEX-2006) performed on Svalbard during the winter/spring transition of 2006 helped to better understand the physical exchange and transport processes of energy. The existence of an atypical temperature profile close to the surface in the Arctic spring on Svalbard proved to be one of the major issues hindering estimation of the appropriate surface temperature. Thus, it is essential to adjust the set-up of measurement systems carefully when applying the flux-gradient methods that are commonly used to force atmosphere-ocean/land-ice models. The results of a comparison of different sensible heat-flux parameterizations with direct measurements indicate that the use of a hydrodynamic three-layer temperature-profile model achieves the best fit and reproduces the temporal variability of the surface temperature better than other approaches.

  2. A new effective Monte Carlo Midway coupling method in MCNP applied to a well logging problem

    Energy Technology Data Exchange (ETDEWEB)

    Serov, I.V.; John, T.M.; Hoogenboom, J.E

    1998-12-01

    The background of the Midway forward-adjoint coupling method including the black absorber technique for efficient Monte Carlo determination of radiation detector responses is described. The method is implemented in the general purpose MCNP Monte Carlo code. The utilization of the method is fairly straightforward and does not require any substantial extra expertise. The method was applied to a standard neutron well logging porosity tool problem. The results exhibit reliability and high efficiency of the Midway method. For the studied problem the efficiency gain is considerably higher than for a normal forward calculation, which is already strongly optimized by weight-windows. No additional effort is required to adjust the Midway model if the position of the detector or the porosity of the formation is changed. Additionally, the Midway method can be used with other variance reduction techniques if extra gain in efficiency is desired.

  3. Determination of activity of I-125 applying sum-peak methods

    International Nuclear Information System (INIS)

    Arbelo Penna, Y.; Hernandez Rivero, A.T.; Oropesa Verdecia, P.; Serra Aguila, R.; Moreno Leon, Y.

    2011-01-01

The determination of the activity of I-125 in radioactive solutions by sum-peak methods, using an n-type HPGe detector of extended range, is described. Two procedures were used for obtaining the I-125 specific activity in solutions: (a) an absolute method, which is independent of nuclear parameters and detector efficiency, and (b) an option which considers the efficiency constant in the region of interest and involves calculations using nuclear parameters. The measurement geometries studied are specifically solid point sources. The relative deviations between the specific activities obtained by these different procedures are not higher than 1%. Moreover, the activity of the radioactive solution was obtained by measuring it in a NIST ampoule using a CAPINTEC CRC 35R dose calibrator. The consistency of the results obtained confirms the feasibility of applying direct methods of measurement for I-125 activity determination, which allows lower uncertainties to be achieved in comparison with relative methods of measurement. The establishment of these methods is aimed at the calibration of the equipment and radionuclide dose calibrators currently used in clinical RIA/IRMA assays and nuclear medicine practice, respectively. (Author)
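Under idealized assumptions, the absolute sum-peak relation for I-125 can be derived in a few lines: if each of the two coincident photons is either fully absorbed in the photopeak (efficiency eps) or escapes undetected, with no angular correlation, then the singles-peak rate is P = 2*A*eps*(1-eps) and the sum-peak rate is S = A*eps**2, so A = P + S + P**2/(4*S) with eps cancelling out. The helper below is a sketch of this textbook relation, not the authors' measurement procedure (which also handles background, dead time and geometry):

```python
def i125_activity_sum_peak(p, s):
    """Absolute sum-peak estimate of the disintegration rate for a nuclide
    emitting two coincident photons of nearly equal energy (e.g. I-125).

    p -- net count rate in the singles photopeak
    s -- net count rate in the sum peak
    Under the stated assumptions P = 2*A*eps*(1-eps) and S = A*eps**2,
    hence A = P + S + P**2 / (4*S), independent of the efficiency eps.
    """
    return p + s + p * p / (4.0 * s)
```

This is what makes the method "absolute": no detector efficiency or photon emission probabilities enter the final expression.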

  4. An applied study using systems engineering methods to prioritize green systems options

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sonya M [Los Alamos National Laboratory; Macdonald, John M [Los Alamos National Laboratory

    2009-01-01

For many years, there have been questions about the effectiveness of applying different green solutions. If you're building a home and wish to use green technologies, where do you start? While all technologies sound promising, which will perform best over time? All this has to be considered within the cost and schedule of the project. The amount of information available on the topic can be overwhelming. We seek to examine whether Systems Engineering methods can be used to help people choose and prioritize technologies that fit within their project and budget. Several methods are used to gain perspective on how to select green technologies, such as the Analytic Hierarchy Process (AHP) and Kepner-Tregoe analysis. In our study, subjects applied these methods to analyze cost, schedule, and trade-offs. Results will document whether the experimental approach is applicable to defining system priorities for green technologies.
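AHP derives priority weights from a matrix of pairwise comparisons as the matrix's principal eigenvector. The sketch below is a generic illustration (not the study's tooling); the consistency index CI = (lambda_max - n)/(n - 1) flags contradictory judgments:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix: the
    principal eigenvector, normalized to sum to 1. Also returns the
    consistency index CI = (lambda_max - n) / (n - 1)."""
    M = np.asarray(pairwise, float)
    n = M.shape[0]
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    ci = (vals[k].real - n) / (n - 1)
    return w / w.sum(), ci
```

In practice CI is compared against a random-consistency index; a ratio above about 0.1 suggests the pairwise judgments should be revisited.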

  5. The nature of spatial transitions in the Arctic.

    Science.gov (United States)

    H. E. Epstein; J. Beringer; W. A. Gould; A. H. Lloyd; C. D. Thompson; F. S. Chapin III; G. J. Michaelson; C. L. Ping; T. S. Rupp; D. A. Walker

    2004-01-01

    Aim Describe the spatial and temporal properties of transitions in the Arctic and develop a conceptual understanding of the nature of these spatial transitions in the face of directional environmental change. Location Arctic tundra ecosystems of the North Slope of Alaska and the tundraforest region of the Seward Peninsula, Alaska. Methods We synthesize information from...

  6. Economic consequences assessment for scenarios and actual accidents do the same methods apply

    International Nuclear Information System (INIS)

    Brenot, J.

    1991-01-01

Methods for estimating the economic consequences of major technological accidents, and their corresponding computer codes, are briefly presented with emphasis on the basic choices. When applied to hypothetical scenarios, those methods give results that are of interest to risk managers from a decision-aiding perspective. Simultaneously, the various costs and the procedures for their estimation are reviewed for some actual accidents (Three Mile Island, Chernobyl, ...). These costs are used in a perspective of litigation and compensation. The comparison of the methods used and the cost estimates obtained for scenarios and actual accidents shows points of convergence and discrepancies, which are discussed

  7. Non-invasive imaging methods applied to neo- and paleontological cephalopod research

    Science.gov (United States)

    Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.

    2013-11-01

Several non-invasive methods are common practice in natural sciences today. Here we present how they can be applied and contribute to current topics in cephalopod (paleo-) biology. Different methods will be compared in terms of the time necessary to acquire the data, amount of data, accuracy/resolution, minimum and maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the methods is seen in morphometry and volumetry of cephalopod shells, in order to improve our understanding of the diversity and disparity, functional morphology and biology of extinct and extant cephalopods.

  8. Covariance methodology applied to 35S disintegration rate measurements by the CIEMAT/NIST method

    International Nuclear Information System (INIS)

    Koskinas, M.F.; Nascimento, T.S.; Yamazaki, I.M.; Dias, M.S.

    2014-01-01

    The Nuclear Metrology Laboratory (LMN) at IPEN is carrying out measurements in an LSC (Liquid Scintillation Counting) system, applying the CIEMAT/NIST method. In this context 35S is an important radionuclide for medical applications, and it is difficult to standardize by other primary methods due to its low beta-ray energy. CIEMAT/NIST is a standard technique used by most metrology laboratories to improve accuracy and speed up beta-emitter standardization. The focus of the present work was to apply the covariance methodology for determining the overall uncertainty in the 35S disintegration rate. All partial uncertainties involved in the measurements were considered, taking into account all possible correlations between each pair of them. - Highlights: ► 35S disintegration rate measured in a Liquid Scintillation system using the CIEMAT/NIST method. ► Covariance methodology applied to the overall uncertainty in the 35S disintegration rate. ► Monte Carlo simulation was applied to determine 35S activity in the 4πβ(PC)-γ coincidence system.
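The covariance approach the abstract describes can be illustrated with a first-order propagation sketch. The function (an activity A = N/ε), the input values, and the correlation coefficient below are hypothetical placeholders, not the paper's 35S data:

```python
import numpy as np

def propagate_uncertainty(grad, cov):
    """First-order combined variance g^T C g for gradient g and covariance matrix C."""
    grad = np.asarray(grad, dtype=float)
    cov = np.asarray(cov, dtype=float)
    return float(grad @ cov @ grad)

# Hypothetical example: activity A = N / eps (counts over efficiency),
# with correlated uncertainties on N and eps.
N, sig_N = 1.0e5, 300.0      # counts and standard uncertainty (assumed)
eps, sig_eps = 0.95, 0.01    # detection efficiency and uncertainty (assumed)
r = 0.2                      # assumed correlation between N and eps

grad = [1.0 / eps, -N / eps**2]                 # partial derivatives of A
cov = [[sig_N**2,            r * sig_N * sig_eps],
       [r * sig_N * sig_eps, sig_eps**2        ]]

var_A = propagate_uncertainty(grad, cov)        # combined variance of A
```

Dropping the off-diagonal terms would misstate the combined uncertainty whenever the inputs are correlated, which is the point of the covariance methodology.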

  9. Power System Oscillation Modes Identifications: Guidelines for Applying TLS-ESPRIT Method

    Science.gov (United States)

    Gajjar, Gopal R.; Soman, Shreevardhan

    2013-05-01

    Fast measurements of power system quantities available through wide-area measurement systems enable direct observation of power system electromechanical oscillations. But the raw observation data need to be processed to obtain the quantitative measures required to make any inference regarding the power system state. A detailed discussion is presented of the theory behind the general problem of oscillatory mode identification. This paper presents some results on oscillation mode identification applied to a wide-area frequency measurement system. Guidelines for the selection of parameters for obtaining the most reliable results from the applied method are provided. Finally, some results on real measurements are presented with our inference on them.

  10. Globalising the Arctic Climate:

    DEFF Research Database (Denmark)

    Corry, Olaf

    2017-01-01

    This chapter uses an object-oriented approach to explore how the Arctic is being constituted as an object of global governance within an emerging ‘global polity’, partly through geoengineering plans and political visions ('imaginaries'). It suggests that governance objects—the socially constructed...... on world politics. The emergence of the Arctic climate as a potential target of governance provides a case in point. The Arctic climate is becoming globalised, pushing it up the political agenda but drawing it away from its local and regional context....

  11. AROME-Arctic: New operational NWP model for the Arctic region

    Science.gov (United States)

    Süld, Jakob; Dale, Knut S.; Myrland, Espen; Batrak, Yurii; Homleid, Mariken; Valkonen, Teresa; Seierstad, Ivar A.; Randriamampianina, Roger

    2016-04-01

    substitute our actual operational Arctic mesoscale HIRLAM (High Resolution Limited Area Model) NWP model. This presentation will discuss in detail the operational implementation of the AROME-Arctic model together with post-processing methods. The services aimed at the Arctic region covered by the model, such as online weather forecasting (yr.no) and tracking of polar lows (barentswatch.no), are also included.

  12. Multigrid method applied to the solution of an elliptic, generalized eigenvalue problem

    Energy Technology Data Exchange (ETDEWEB)

    Alchalabi, R.M. [BOC Group, Murray Hill, NJ (United States); Turinsky, P.J. [North Carolina State Univ., Raleigh, NC (United States)

    1996-12-31

    The work presented in this paper is concerned with the development of an efficient MG (multigrid) algorithm for the solution of an elliptic, generalized eigenvalue problem. The application is specifically to the multigroup neutron diffusion equation, which is discretized by utilizing the Nodal Expansion Method (NEM). The underlying relaxation method is the Power Method, also known as the Outer-Inner Method. The inner iterations are completed using Multi-color Line SOR, and the outer iterations are accelerated using the Chebyshev Semi-iterative Method. Furthermore, the MG algorithm utilizes the consistent homogenization concept to construct the restriction operator, and a form function as a prolongation operator. The MG algorithm was integrated into the reactor neutronic analysis code NESTLE, and numerical results were obtained from solving production-type benchmark problems.
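The outer (power) iteration named in the abstract can be sketched for a generalized eigenproblem M·phi = (1/k)·F·phi. This toy version uses a direct solve in place of the inner Multi-color Line SOR sweeps and omits the Chebyshev acceleration and the multigrid layers; matrix sizes and values are illustrative only:

```python
import numpy as np

def power_method(M, F, tol=1e-12, max_iter=1000):
    """Outer iteration for M phi = (1/k) F phi: returns the dominant
    eigenvalue k (multiplication factor) and its eigenvector phi."""
    phi = np.ones(M.shape[0])
    k = 1.0
    for _ in range(max_iter):
        src = F @ phi / k                  # fission source scaled by current k
        phi_new = np.linalg.solve(M, src)  # "inner" solve (direct here, SOR in the paper)
        k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)  # eigenvalue update
        phi = phi_new / np.linalg.norm(phi_new)
        if abs(k_new - k) < tol:
            return k_new, phi
        k = k_new
    return k, phi
```

The update ratio uses the total fission source before and after the inner solve, which converges to the dominant eigenvalue of M⁻¹F.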

  13. Least Square NUFFT Methods Applied to 2D and 3D Radially Encoded MR Image Reconstruction

    Science.gov (United States)

    Song, Jiayu; Liu, Qing H.; Gewalt, Sally L.; Cofer, Gary; Johnson, G. Allan

    2009-01-01

    Radially encoded MR imaging (MRI) has gained increasing attention in applications such as hyperpolarized gas imaging, contrast-enhanced MR angiography, and dynamic imaging, due to its motion insensitivity and improved artifact properties. However, since the technique collects k-space samples nonuniformly, multidimensional (especially 3D) radially sampled MRI image reconstruction is challenging. The balance between reconstruction accuracy and speed becomes critical when a large data set is processed. Kaiser-Bessel gridding reconstruction has been widely used for non-Cartesian reconstruction. The objective of this work is to provide an alternative reconstruction option in high dimensions with on-the-fly kernel calculation. The work develops general multi-dimensional least square nonuniform fast Fourier transform (LS-NUFFT) algorithms and incorporates them into a k-space simulation and image reconstruction framework. The method is then applied to reconstruct the radially encoded k-space, although the method addresses general nonuniformity and is applicable to any non-Cartesian pattern. Performance assessments are made by comparing the LS-NUFFT-based method with the conventional Kaiser-Bessel gridding method for 2D and 3D radially encoded computer-simulated phantoms and physically scanned phantoms. The results show that the LS-NUFFT reconstruction method has better accuracy-speed efficiency than the Kaiser-Bessel gridding method when the kernel weights are calculated on the fly. The accuracy of the LS-NUFFT method depends on the choice of scaling factor, and it is found that for a particular conventional kernel function, using its corresponding deapodization function as the scaling factor within the LS-NUFFT framework has the potential to improve accuracy. When a cosine scaling factor is used, in particular, the LS-NUFFT method is faster than the Kaiser-Bessel gridding method because of a quasi closed-form solution.
The method is successfully applied to 2D and

  14. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Science.gov (United States)

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  15. Applied ecosystem analysis - a primer; the ecosystem diagnosis and treatment method

    International Nuclear Information System (INIS)

    Lestelle, L.C.; Mobrand, L.E.; Lichatowich, J.A.; Vogel, T.S.

    1996-05-01

    The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual.

  16. An Ultrasonic Guided Wave Method to Estimate Applied Biaxial Loads (Preprint)

    Science.gov (United States)

    2011-11-01

    VALIDATION: A fatigue test was performed with an array of six surface-bonded PZT transducers on a 6061 aluminum plate, as shown in Figure 4. The specimen ... direct paths of propagation are oriented at different angles. This method is applied to experimental sparse-array data recorded during a fatigue test ... and the additional complication of the resulting fatigue cracks interfering with some of the direct arrivals is addressed via proper selection of

  17. Accuracy of the Adomian decomposition method applied to the Lorenz system

    International Nuclear Information System (INIS)

    Hashim, I.; Noorani, M.S.M.; Ahmad, R.; Bakar, S.A.; Ismail, E.S.; Zakaria, A.M.

    2006-01-01

    In this paper, the Adomian decomposition method (ADM) is applied to the famous Lorenz system. The ADM yields an analytical solution in terms of a rapidly convergent infinite power series with easily computable terms. Comparisons between the decomposition solutions and the fourth-order Runge-Kutta (RK4) numerical solutions are made for various time steps. In particular, we look at the accuracy of the ADM as the Lorenz system changes from a non-chaotic system to a chaotic one.
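Because the Lorenz nonlinearities are quadratic, the Adomian polynomials reduce to Cauchy products, and the ADM series coincides with a recursive power-series expansion. The sketch below builds such a series and checks it against RK4 over a short time step; the parameters and initial conditions are the standard illustrative choices, not necessarily those of the paper:

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # classical Lorenz parameters (assumed)

def series_coeffs(x0, y0, z0, n_terms):
    """Power-series coefficients for the Lorenz system; the quadratic
    terms x*z and x*y enter through Cauchy products (Adomian polynomials)."""
    x, y, z = [x0], [y0], [z0]
    for n in range(n_terms - 1):
        xz = sum(x[k] * z[n - k] for k in range(n + 1))
        xy = sum(x[k] * y[n - k] for k in range(n + 1))
        x.append(SIGMA * (y[n] - x[n]) / (n + 1))
        y.append((RHO * x[n] - y[n] - xz) / (n + 1))
        z.append((xy - BETA * z[n]) / (n + 1))
    return x, y, z

def eval_series(coeffs, t):
    return sum(c * t**i for i, c in enumerate(coeffs))

def lorenz(u):
    x, y, z = u
    return np.array([SIGMA * (y - x), RHO * x - y - x * z, x * y - BETA * z])

def rk4(f, u, dt, steps):
    """Classical fourth-order Runge-Kutta integrator, for comparison."""
    for _ in range(steps):
        k1 = f(u)
        k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2)
        k4 = f(u + dt * k3)
        u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return u
```

On a short interval the truncated series and RK4 agree closely; on longer chaotic trajectories the series must be restarted interval by interval, which is where the accuracy comparison in the abstract becomes interesting.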

  18. Applying the Delphi method to assess impacts of forest management on biodiversity and habitat preservation

    DEFF Research Database (Denmark)

    Filyushkina, Anna; Strange, Niels; Löf, Magnus

    2018-01-01

    This study applied a structured expert elicitation technique, the Delphi method, to identify the impacts of five forest management alternatives and several forest characteristics on the preservation of biodiversity and habitats in the boreal zone of the Nordic countries. The panel of experts...... as a valuable addition to on-going empirical and modeling efforts. The findings could assist forest managers in developing forest management strategies that generate benefits from timber production while taking into account the trade-offs with biodiversity goals....

  19. Modified Method of Simplest Equation Applied to the Nonlinear Schrödinger Equation

    Directory of Open Access Journals (Sweden)

    Vitanov Nikolay K.

    2018-03-01

    We consider an extension of the methodology of the modified method of simplest equation to the case of use of two simplest equations. The extended methodology is applied for obtaining exact solutions of model nonlinear partial differential equations for deep water waves: the nonlinear Schrödinger equation. It is shown that the methodology works also for other equations of the nonlinear Schrödinger kind.

  20. Modified Method of Simplest Equation Applied to the Nonlinear Schrödinger Equation

    Science.gov (United States)

    Vitanov, Nikolay K.; Dimitrova, Zlatinka I.

    2018-03-01

    We consider an extension of the methodology of the modified method of simplest equation to the case of use of two simplest equations. The extended methodology is applied for obtaining exact solutions of model nonlinear partial differential equations for deep water waves: the nonlinear Schrödinger equation. It is shown that the methodology works also for other equations of the nonlinear Schrödinger kind.

  1. Applied Ecosystem Analysis - - a Primer : EDT the Ecosystem Diagnosis and Treatment Method.

    Energy Technology Data Exchange (ETDEWEB)

    Lestelle, Lawrence C.; Mobrand, Lars E.

    1996-05-01

    The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual.

  2. The LTSN method used in transport equation, applied in nuclear engineering problems

    International Nuclear Information System (INIS)

    Borges, Volnei; Vilhena, Marco Tulio de

    2002-01-01

    The LTSN method solves the SN equations analytically, applying the Laplace transform in the spatial variable. This methodology is used in the determination of the scalar flux for neutrons and photons, the absorbed dose rate, buildup factors and the power for a heterogeneous planar slab. The procedure leads to transcendental equations for the effective multiplication factor, the critical thickness and the atomic density. In this work numerical results are reported, considering a multigroup problem in a heterogeneous slab. (author)

  3. Arctic Mixed Layer Dynamics

    National Research Council Canada - National Science Library

    Morison, James

    2003-01-01

    .... Over the years we have sought to understand the heat and mass balance of the mixed layer, marginal ice zone processes, the Arctic internal wave and mixing environment, summer and winter leads, and convection...

  4. Arctic Aerosols and Sources

    DEFF Research Database (Denmark)

    Nielsen, Ingeborg Elbæk

    2017-01-01

    Since the Industrial Revolution, the anthropogenic emission of greenhouse gases has been increasing, leading to a rise in the global temperature. Particularly in the Arctic, climate change is having a serious impact, where the average temperature has increased almost twice as much as the global during......, ammonium, black carbon, and trace metals. This PhD dissertation studies Arctic aerosols and their sources, with special focus on black carbon, attempting to increase the knowledge about aerosols’ effect on the climate in an Arctic context. The first part of the dissertation examines the diversity...... of aerosol emissions from an important anthropogenic aerosol source: residential wood combustion. The second part characterizes the chemical and physical composition of aerosols while investigating sources of aerosols in the Arctic. The main instrument used in this research has been the state...

  5. Live from the Arctic

    Science.gov (United States)

    Warnick, W. K.; Haines-Stiles, G.; Warburton, J.; Sunwood, K.

    2003-12-01

    For reasons of geography and geophysics, the poles of our planet, the Arctic and Antarctica, are places where climate change appears first: they are global canaries in the mine shaft. But while Antarctica (its penguins and ozone hole, for example) has been relatively well-documented in recent books, TV programs and journalism, the far North has received somewhat less attention. This project builds on and advances what has been done to date to share the people, places, and stories of the North with all Americans through multiple media, over several years. In a collaborative project between the Arctic Research Consortium of the United States (ARCUS) and PASSPORT TO KNOWLEDGE, Live from the Arctic will bring the Arctic environment to the public through a series of primetime broadcasts, live and taped programming, interactive virtual field trips, and webcasts. The five-year project will culminate during the 2007-2008 International Polar Year (IPY). Live from the Arctic will: A. Promote global understanding about the value and world -wide significance of the Arctic, B. Bring cutting-edge research to both non-formal and formal education communities, C. Provide opportunities for collaboration between arctic scientists, arctic communities, and the general public. Content will focus on the following four themes. 1. Pan-Arctic Changes and Impacts on Land (i.e. snow cover; permafrost; glaciers; hydrology; species composition, distribution, and abundance; subsistence harvesting) 2. Pan-Arctic Changes and Impacts in the Sea (i.e. salinity, temperature, currents, nutrients, sea ice, marine ecosystems (including people, marine mammals and fisheries) 3. Pan-Arctic Changes and Impacts in the Atmosphere (i.e. precipitation and evaporation; effects on humans and their communities) 4. Global Perspectives (i.e. effects on humans and communities, impacts to rest of the world) In The Earth is Faster Now, a recent collection of comments by members of indigenous arctic peoples, arctic

  6. Machine Learning Method Applied in Readout System of Superheated Droplet Detector

    Science.gov (United States)

    Liu, Yi; Sullivan, Clair Julia; d'Errico, Francesco

    2017-07-01

    Direct readability is one advantage of superheated droplet detectors in neutron dosimetry. Utilizing this distinct characteristic, an imaging readout system analyzes images of the detector for neutron dose readout. To improve the accuracy and precision of the algorithms in the imaging readout system, machine learning algorithms were developed. Deep-learning neural network and support vector machine algorithms are applied and compared with the generally used Hough transform and curvature analysis methods. The machine learning methods showed a much higher accuracy and better precision in recognizing circular gas bubbles.

  7. Translation Methods Applied in Translating Quotations in “the Secret” by Rhonda

    OpenAIRE

    FEBRIANTI, VICKY

    2014-01-01

    Keywords: Translation Methods, The Secret, Quotations. Translation helps humans to get information written in any language, even when it is written in foreign languages. Therefore translation happens in printed media. Books have been popular printed media. The Secret written by Rhonda Byrne is a popular self-help book which has been translated into 50 languages including Indonesian (“The Secret”, n.d., para.5-6). This study is meant to find out the translation methods applied in The Secret. The wr...

  8. Development of a tracking method for augmented reality applied to nuclear plant maintenance work

    International Nuclear Information System (INIS)

    Shimoda, Hiroshi; Maeshima, Masayuki; Nakai, Toshinori; Bian, Zhiqiang; Ishii, Hirotake; Yoshikawa, Hidekazu

    2005-01-01

    In this paper, a plant maintenance support method is described, which employs a state-of-the-art information technology, Augmented Reality (AR), in order to improve the efficiency of NPP maintenance work and to prevent human error. Although AR has great potential to support various tasks in the real world, it is difficult to apply it to actual work support because the tracking method is the bottleneck for practical use. In this study, a bar-code marker tracking method is proposed to apply an AR system to maintenance work support in the NPP field. The proposed method calculates the user's position and orientation in real time from two long markers, which are captured by the user-mounted camera. The markers can easily be pasted on the pipes in the plant field, and they can easily be recognized at long distances, in order to reduce the number of pasted markers in the work field. Experiments were conducted in a laboratory and in the plant field to evaluate the proposed method. The results show that (1) fast and stable tracking can be realized, (2) the position error in the camera view is less than 1%, which is almost perfect under the limitation of camera resolution, and (3) it is relatively difficult to catch two markers in one camera view, especially at short distances.

  9. Applying the response matrix method for solving coupled neutron diffusion and transport problems

    International Nuclear Information System (INIS)

    Sibiya, G.S.

    1980-01-01

    The numerical determination of the flux and power distribution in the design of large power reactors is quite a time-consuming procedure if the space under consideration is to be subdivided into very fine meshes. Many computing methods applied in reactor physics (such as the finite-difference method) require considerable computing time. In this thesis it is shown that the response matrix method can be successfully used as an alternative approach to solving the two-dimensional diffusion equation. Furthermore, it is shown that sufficient accuracy of the method is achieved by assuming a linear space dependence of the neutron currents on the boundaries of the geometries defined for the given space. (orig.) [de

  10. Method to integrate clinical guidelines into the electronic health record (EHR) by applying the archetypes approach.

    Science.gov (United States)

    Garcia, Diego; Moro, Claudia Maria Cabral; Cicogna, Paulo Eduardo; Carvalho, Deborah Ribeiro

    2013-01-01

    Clinical guidelines are documents that assist healthcare professionals, facilitating and standardizing diagnosis, management, and treatment in specific areas. Computerized guidelines as decision support systems (DSS) attempt to increase the performance of tasks and facilitate the use of guidelines. Most DSS are not integrated into the electronic health record (EHR), requiring some degree of rework, especially related to data collection. This study's objective was to present a method for integrating clinical guidelines into the EHR. The study first developed a way to identify the data and rules contained in the guidelines, and then incorporated the rules into an archetype-based EHR. The proposed method was tested on anemia treatment in the Chronic Kidney Disease Guideline. The phases of the method are: data and rules identification; archetype elaboration; rule definition and inclusion in an inference engine; and DSS-EHR integration and validation. The main feature of the proposed method is that it is generic and can be applied to any type of guideline.

  11. Lessons learned applying CASE methods/tools to Ada software development projects

    Science.gov (United States)

    Blumberg, Maurice H.; Randall, Richard L.

    1993-01-01

    This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.

  12. Application of fuzzy neural network technologies in management of transport and logistics processes in Arctic

    Science.gov (United States)

    Levchenko, N. G.; Glushkov, S. V.; Sobolevskaya, E. Yu; Orlov, A. P.

    2018-05-01

    The method of modeling the transport and logistics process using fuzzy neural network technologies has been considered. The analysis of the implemented fuzzy neural network model of the information management system for transnational multimodal transportation showed the expediency of applying this method to the management of transport and logistics processes in Arctic and Subarctic conditions. The modular architecture of this model can be expanded by incorporating additional modules, since working conditions in the Arctic and Subarctic will continue to present ever more realistic tasks. The architecture allows the information management system to grow without affecting the system or the method itself. The model has a wide range of possible applications, including: analysis of the situation and behavior of interacting elements; dynamic monitoring and diagnostics of management processes; simulation of real events and processes; and prediction and prevention of critical situations.

  13. Applying some methods to process the data coming from the nuclear reactions

    International Nuclear Information System (INIS)

    Suleymanov, M.K.; Abdinov, O.B.; Belashev, B.Z.

    2010-01-01

    Methods of a posteriori enhancement of the resolution of spectral lines are offered for processing data coming from nuclear reactions. The methods have been applied to data from nuclear reactions at high energies, and they give the possibility to obtain more detailed information on the structure of the spectra of particles emitted in the nuclear reactions. Nuclear reactions are the main source of information on the structure and physics of atomic nuclei. Usually the spectra of the reaction fragments are complex, and it is not simple to extract the information needed for investigation. In this talk we discuss methods of a posteriori enhancement of spectral-line resolution that could be useful for processing such complex data: the Fourier transformation method and the maximum entropy method. Complex structures were identified by these methods, with at least two selected points indicated. Recently we presented a talk showing the results of analyzing the structure of the pseudorapidity spectra of charged relativistic particles with ≥ 0.7 measured in Au+Em and Pb+Em reactions at AGS and SPS energies using the Fourier transformation method and the maximum entropy method. The dependences of these spectra on the number of fast target protons were studied. These distributions visually show a plateau and a shoulder, that is, at least three selected points; the plateaus become wider in Pb+Em reactions. The existence of a plateau is required by parton models, and the maximum entropy method could confirm the existence of the plateau and the shoulder in the distributions. The figure shows the results of applying the maximum entropy method: the method indicates several clean selected points, some of which coincide with the visually observed ones. We would like to note that the Fourier transformation method could not

  14. The Thermodynamic Structure of Arctic Coastal Fog Occurring During the Melt Season over East Greenland

    Science.gov (United States)

    Gilson, Gaëlle F.; Jiskoot, Hester; Cassano, John J.; Gultepe, Ismail; James, Timothy D.

    2018-05-01

    An automated method to classify Arctic fog into distinct thermodynamic profiles using historic in-situ surface and upper-air observations is presented. This classification is applied to low-resolution Integrated Global Radiosonde Archive (IGRA) soundings and high-resolution Arctic Summer Cloud Ocean Study (ASCOS) soundings in low- and high-Arctic coastal and pack-ice environments. Results allow investigation of fog macrophysical properties and processes in coastal East Greenland during melt seasons 1980-2012. Integrated with fog observations from three synoptic weather stations, 422 IGRA soundings are classified into six fog thermodynamic types based on surface saturation ratio, type of temperature inversion, fog-top height relative to inversion-base height and stability using the virtual potential temperature gradient. Between 65-80% of fog observations occur with a low-level inversion, and statically neutral or unstable surface layers occur frequently. Thermodynamic classification is sensitive to the assigned dew-point depression threshold, but categorization is robust. Despite differences in the vertical resolution of radiosonde observations, IGRA and ASCOS soundings yield the same six fog classes, with fog-class distribution varying with latitude and environmental conditions. High-Arctic fog frequently resides within an elevated inversion layer, whereas low-Arctic fog is more often restricted to the mixed layer. Using supplementary time-lapse images, ASCOS microwave radiometer retrievals and airmass back-trajectories, we hypothesize that the thermodynamic classes represent different stages of advection fog formation, development, and dissipation, including stratus-base lowering and fog lifting. This automated extraction of thermodynamic boundary-layer and inversion structure can be applied to radiosonde observations worldwide to better evaluate fog conditions that affect transportation and lead to improvements in numerical models.

  15. Intestinal colic in newborn babies: incidence and methods of proceeding applied by parents

    Directory of Open Access Journals (Sweden)

    Anna Lewandowska

    2017-06-01

    Introduction: Intestinal colic is one of the more frequent complaints that general practitioners and paediatricians deal with in their work. 10-40% of formula-fed and 10-20% of breast-fed babies are stricken by this complaint. A colic attack appears suddenly and very quickly causes an energetic, squeaky cry or even a scream. Colic attacks last for a few minutes and appear every 2-3 hours, usually in the evenings. The specialist literature provides numerous definitions of intestinal colic. The concept was introduced to paediatric textbooks for the first time over 250 years ago. One of the most accurate definitions describes colic as recurring attacks of intensive crying and anxiety lasting for more than 3 hours a day, 3 days a week, within 3 weeks. Care of a baby suffering from intestinal colic causes numerous problems and anxiety among parents; therefore, knowledge of effective methods to combat this complaint is a challenge for contemporary neonatology and paediatrics. The aim of the study is to estimate the incidence of intestinal colic in formula-fed and breast-fed newborn babies, as well as to assess the methods of proceeding applied by parents and analyze their effectiveness. Material and methods: The research involved 100 newborn babies breast fed and 100 formula fed, and their parents. The research method applied in the study was a diagnostic survey conducted by use of a questionnaire. Results: Among the examined newborn babies that were breast fed, 43% have experienced intestinal colic, while among those formula fed 30% have suffered from it. The study involved 44% newborn female babies and 56% male babies. 52% of mothers were 30-34 years old, 30% 35-59 years old, and 17% 25-59 years old. When it comes to families, the most numerous group was in a good financial situation (60%). The second most numerous group was in an average financial situation (40%). All the respondents claimed that they had knowledge of intestinal colic, and the main source of knowledge

  16. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Directory of Open Access Journals (Sweden)

    Lorenza Scotti

    2015-12-01

    Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event ‘signals’ of marketed drugs. A common limitation of these systems is the wide number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether the application of methods aimed at adjusting for the multiple testing problem is needed when at least some of the drug-outcome relationships under study are known. To this aim we applied a robust estimation method for the FDR (rFDR) particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated from the conventional disproportionality measures was compared with the number remaining after the application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, 1 resulted in a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are known, multiple comparison methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.
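The rFDR estimator itself is specific to the paper, but the underlying multiple-comparisons idea can be illustrated with the standard Benjamini-Hochberg step-up procedure applied to a set of drug-event p-values (the p-values below are made up for illustration):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return the (sorted) indices
    of hypotheses rejected while controlling the FDR at level alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * alpha / m:
            k_max = rank  # largest rank whose p-value passes its threshold
    return sorted(order[:k_max])

# Hypothetical drug-event p-values: the first three survive adjustment,
# the last does not.
signals = benjamini_hochberg([0.001, 0.010, 0.020, 0.900])
```

A raw per-test threshold of 0.05 would flag the same three pairs here, but with many hundreds of concurrently tested pairs, as in the abstract, the unadjusted count inflates with false positives while the step-up procedure keeps the expected false-discovery proportion at alpha.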

  17. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    Energy Technology Data Exchange (ETDEWEB)

    Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica

    2015-07-01

    In this work, we present a solution of the neutron point kinetics equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, the delayed neutron precursor concentrations and the temperature as power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the subsequent intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness problem of the equations. In such a way, one varies the time step size of the Polynomial Approach Method and performs an analysis of the precision and computational time. Moreover, we compare the method for different orders of approximation (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
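
    The paper's full derivation is not reproduced in the abstract. The sketch below illustrates the power-series idea for one delayed-neutron group with constant reactivity and no temperature feedback, building the Taylor coefficients of the neutron density and precursor concentration recursively over short intervals; all parameter values are illustrative, not taken from the paper.

    ```python
    def point_kinetics_step(n0, c0, h, rho, beta=0.0065, lam=0.08, Lam=1e-4, order=10):
        """Advance one-group point kinetics over a step h with a truncated power series.

        Taylor coefficients of n(t) and C(t) follow recursively from
          dn/dt = ((rho - beta)/Lam) n + lam C,   dC/dt = (beta/Lam) n - lam C.
        """
        a, b = [n0], [c0]
        for k in range(order):
            a.append(((rho - beta) / Lam * a[k] + lam * b[k]) / (k + 1))
            b.append((beta / Lam * a[k] - lam * b[k]) / (k + 1))
        n = sum(ak * h ** k for k, ak in enumerate(a))
        c = sum(bk * h ** k for k, bk in enumerate(b))
        return n, c

    # Start from equilibrium at rho = 0, where C0 = beta * n0 / (lam * Lam);
    # the series then reproduces the steady state.
    n, c = 1.0, 0.0065 / (0.08 * 1e-4)
    for _ in range(100):                      # 100 steps of 1 ms
        n, c = point_kinetics_step(n, c, 1e-3, rho=0.0)
    print(round(n, 9))   # → 1.0
    ```

    Because each step restarts the expansion from fresh "initial conditions" (the analytical continuation mentioned above), the step size can stay small relative to the prompt time scale without accumulating stiffness-related error.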

  18. Power secant method applied to natural frequency extraction of Timoshenko beam structures

    Directory of Open Access Journals (Sweden)

    C.A.N. Dias

    Full Text Available This work deals with an improved plane frame formulation whose exact dynamic stiffness matrix (DSM) presents a null determinant only at the natural frequencies. In comparison with the classical DSM, the formulation presented herein has some major advantages: local mode shapes are preserved in the formulation, so that for any positive frequency the DSM will never be ill-conditioned; and, in the absence of poles, it is possible to employ the secant method to obtain a more computationally efficient eigenvalue extraction procedure. Applying the procedure to the more general case of Timoshenko beams, we introduce a new technique, named "power deflation", that makes the secant method suitable for the transcendental nonlinear eigenvalue problems based on the improved DSM. In order to avoid overflow occurrences that can hinder the secant method iterations, limiting frequencies are formulated, with scaling also applied to the eigenvalue problem. Comparisons with results available in the literature demonstrate the strength of the proposed method. Computational efficiency is compared with solutions obtained both by the FEM and by the Wittrick-Williams algorithm.
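
    The improved DSM itself is not reproduced in the abstract. As a hedged illustration of the root-finding step only, the sketch below applies a plain secant iteration to a classical transcendental frequency equation (the clamped-free Euler-Bernoulli beam, not the paper's Timoshenko DSM determinant):

    ```python
    import math

    def secant(f, x0, x1, tol=1e-12, max_iter=50):
        """Secant iteration for f(x) = 0, e.g. a zero of a frequency determinant."""
        f0, f1 = f(x0), f(x1)
        for _ in range(max_iter):
            if f1 == f0:                      # flat secant: cannot proceed
                break
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
            if abs(x2 - x1) < tol:
                return x2
            x0, f0 = x1, f1
            x1, f1 = x2, f(x2)
        return x1

    # Characteristic equation of a clamped-free Euler-Bernoulli beam:
    # cos(bL)*cosh(bL) + 1 = 0, whose first root is bL ≈ 1.8751.
    f = lambda x: math.cos(x) * math.cosh(x) + 1.0
    print(round(secant(f, 1.5, 2.0), 4))   # → 1.8751
    ```

    The cosh term is exactly the kind of exponentially growing factor that motivates the paper's scaling and limiting-frequency safeguards: without them, evaluating the determinant at high frequencies overflows before the secant step can be taken.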

  19. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    International Nuclear Information System (INIS)

    Tumelero, Fernanda; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana

    2015-01-01

    In this work, we present a solution of the neutron point kinetics equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, the delayed neutron precursor concentrations and the temperature as power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the subsequent intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness problem of the equations. In such a way, one varies the time step size of the Polynomial Approach Method and performs an analysis of the precision and computational time. Moreover, we compare the method for different orders of approximation (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)

  20. A methodological framework applied to the choice of the best method in replacement of nuclear systems

    International Nuclear Information System (INIS)

    Vianna Filho, Alfredo Marques

    2009-01-01

    The economic equipment replacement problem is a central question in nuclear engineering. On the one hand, new equipment is more attractive given its better performance, better reliability, lower maintenance cost, etc.; it requires, however, a higher initial investment. On the other hand, old equipment represents the opposite situation: lower performance, lower reliability and especially higher maintenance costs, but in contrast lower financial and insurance costs. The weighting of all these costs can be performed with deterministic and probabilistic methods applied to the study of equipment replacement. Two distinct types of problems are examined: replacement imposed by wear and replacement imposed by failures. To solve the problem of nuclear system replacement imposed by wear, deterministic methods are discussed; to solve the problem of replacement imposed by failures, probabilistic methods are discussed. The aim of this paper is to present a methodological framework for choosing the most useful method applied to the problem of nuclear system replacement. (author)
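
    As a minimal illustration of the deterministic side of this trade-off, the sketch below computes the replacement age that minimizes the equivalent annual cost (annualized capital recovery plus maintenance, net of salvage value). All cost figures and the 8% discount rate are invented for illustration; the paper's actual framework is not reproduced.

    ```python
    def equivalent_annual_cost(purchase, salvage, maintenance, n, i=0.08):
        """EAC of keeping the equipment for n years: annualized capital and maintenance."""
        crf = i * (1 + i) ** n / ((1 + i) ** n - 1)          # capital recovery factor
        pv_maint = sum(maintenance[k] / (1 + i) ** (k + 1) for k in range(n))
        pv_salvage = salvage[n - 1] / (1 + i) ** n
        return (purchase + pv_maint - pv_salvage) * crf

    # Invented data: maintenance rises steeply with age, salvage value decays.
    purchase = 100.0
    maintenance = [5.0 * 1.8 ** k for k in range(10)]        # cost in year k+1
    salvage = [100.0 * 0.7 ** (k + 1) for k in range(10)]    # resale value after k+1 years
    best_n = min(range(1, 11),
                 key=lambda n: equivalent_annual_cost(purchase, salvage, maintenance, n))
    print(best_n)   # → 3: replace after three years for these (invented) numbers
    ```

    The minimum arises exactly from the tension described above: keeping the equipment spreads the capital cost over more years, while the growing maintenance bill eventually dominates.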

  1. Study on vibration characteristics and fault diagnosis method of oil-immersed flat wave reactor in Arctic area converter station

    Science.gov (United States)

    Lai, Wenqing; Wang, Yuandong; Li, Wenpeng; Sun, Guang; Qu, Guomin; Cui, Shigang; Li, Mengke; Wang, Yongqiang

    2017-10-01

    Based on long-term vibration monitoring of the No. 2 oil-immersed flat wave reactor in the ±500 kV converter station in East Mongolia, vibration signals in the normal state and in the core loose fault state were recorded. Through time-frequency analysis of the signals, the vibration characteristics of the core loose fault were obtained, and a fault diagnosis method based on the dual-tree complex wavelet transform (DT-CWT) and support vector machine (SVM) was proposed. The vibration signals were analyzed by the DT-CWT, and the energy entropy of the vibration signals was taken as the feature vector; the support vector machine was used to train and test the feature vector, and accurate identification of the core loose fault of the flat wave reactor was realized. Through the identification of many groups of normal and core loose fault state vibration signals, the diagnostic accuracy reached 97.36%. The effectiveness and accuracy of the method in fault diagnosis of the flat wave reactor core is verified.
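
    The DT-CWT decomposition requires a dedicated library, but the feature-extraction step described above is simple: normalize the sub-band energies and take their Shannon entropy. A minimal sketch, with hypothetical band energies standing in for DT-CWT outputs:

    ```python
    import math

    def energy_entropy(band_energies):
        """Shannon entropy of the normalized energy distribution across sub-bands."""
        total = sum(band_energies)
        probs = [e / total for e in band_energies if e > 0]
        return -sum(p * math.log(p) for p in probs)

    # Energy concentrated in one band (e.g., a fault resonance) gives low entropy;
    # energy spread evenly gives the maximum, log(number_of_bands).
    print(round(energy_entropy([9.7, 0.1, 0.1, 0.1]), 3))   # → 0.168
    print(round(energy_entropy([2.5, 2.5, 2.5, 2.5]), 3))   # → 1.386, i.e. log(4)
    ```

    A vector of such entropies, one per decomposition level, is the kind of compact feature that an SVM can then separate into normal and fault classes.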

  2. Solution and study of nodal neutron transport equation applying the LTSN-DiagExp method

    International Nuclear Information System (INIS)

    Hauser, Eliete Biasotto; Pazos, Ruben Panta; Vilhena, Marco Tullio de; Barros, Ricardo Carvalho de

    2003-01-01

    In this paper we report advances in the three-dimensional nodal discrete-ordinates approximations of the neutron transport equation for Cartesian geometry. We use the combined collocation method for the angular variables and a nodal approach for the spatial variables. By nodal approach we mean the iterated transverse integration of the SN equations. This procedure leads to a set of one-dimensional averaged angular fluxes in each spatial variable. The resulting system of equations is solved with the LTSN method, first applying the Laplace transform to the set of nodal SN equations and then obtaining the solution by symbolic computation. We include the LTSN method by diagonalization to solve the nodal neutron transport equation, and then outline the convergence of these nodal-LTSN approximations with the help of a norm associated with the quadrature formula used to approximate the integral term of the neutron transport equation. (author)

  3. Artificial intelligence methods applied for quantitative analysis of natural radioactive sources

    International Nuclear Information System (INIS)

    Medhat, M.E.

    2012-01-01

    Highlights: ► Basic description of artificial neural networks. ► Natural gamma ray sources and the problem of their detection. ► Application of a neural network for peak detection and activity determination. - Abstract: The artificial neural network (ANN) is one of the artificial intelligence methods used for modeling and uncertainty estimation in different applications. The objective of the proposed work was to apply an ANN to identify isotopes and to predict the uncertainties of their activities for some natural radioactive sources. The method was tested by analyzing gamma-ray spectra emitted from natural radionuclides in soil samples, detected by high-resolution gamma-ray spectrometry based on HPGe (high-purity germanium). The principle of the suggested method is described, including the definition of the relevant input parameters, input data scaling and network training. The results show satisfactory agreement between the obtained and predicted values using the neural network.
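
    The paper's HPGe network architecture is not described in the abstract. As a deliberately tiny stand-in for the idea of learning a peak/no-peak decision from spectrum-derived features, the sketch below trains a single logistic neuron on a synthetic one-feature task; the feature, data and labels are all invented for illustration.

    ```python
    import math

    def train_logistic(samples, labels, lr=0.5, epochs=2000):
        """Train a single logistic neuron y = sigmoid(w*x + b) by stochastic gradient descent."""
        w, b = 0.0, 0.0
        for _ in range(epochs):
            for x, t in zip(samples, labels):
                y = 1.0 / (1.0 + math.exp(-(w * x + b)))
                w -= lr * (y - t) * x          # gradient of the cross-entropy loss
                b -= lr * (y - t)
        return w, b

    # Invented feature: (window maximum - window mean), large when a peak is present.
    features = [0.1, 0.2, 0.15, 2.1, 2.5, 1.9]
    labels   = [0,   0,   0,    1,   1,   1]   # 1 = spectrum window contains a peak
    w, b = train_logistic(features, labels)
    predict = lambda x: 1 if w * x + b > 0.0 else 0
    print([predict(x) for x in features])   # → [0, 0, 0, 1, 1, 1]
    ```

    A real spectrometry network would use many input channels, hidden layers, and the scaling step the abstract mentions; the training loop, however, is the same gradient-descent pattern.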

  4. Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows

    Science.gov (United States)

    Zwick, David; Hackl, Jason; Balachandar, S.

    2017-11-01

    Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flow is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply it to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.

  5. Relativistic convergent close-coupling method applied to electron scattering from mercury

    International Nuclear Information System (INIS)

    Bostock, Christopher J.; Fursa, Dmitry V.; Bray, Igor

    2010-01-01

    We report on the extension of the recently formulated relativistic convergent close-coupling (RCCC) method to accommodate two-electron and quasi-two-electron targets. We apply the theory to electron scattering from mercury and obtain differential and integrated cross sections for elastic and inelastic scattering. We compare with previous nonrelativistic convergent close-coupling (CCC) calculations and, for a number of transitions, obtain significantly better agreement with experiment. The RCCC method is able to resolve structure in the integrated cross sections in the energy regime near the excitation thresholds of the (6s6p) ³P₀,₁,₂ states. These cross sections are associated with the formation of negative-ion (Hg⁻) resonances that could not be resolved with the nonrelativistic CCC method. The RCCC results are compared with experiment and with other relativistic theories.

  6. A reflective lens: applying critical systems thinking and visual methods to ecohealth research.

    Science.gov (United States)

    Cleland, Deborah; Wyborn, Carina

    2010-12-01

    Critical systems methodology has been advocated as an effective and ethical way to engage with the uncertainty and conflicting values common to ecohealth problems. We use two contrasting case studies, coral reef management in the Philippines and national park management in Australia, to illustrate the value of critical systems approaches in exploring how people respond to environmental threats to their physical and spiritual well-being. In both cases, we used visual methods--participatory modeling and rich picturing, respectively. The critical systems methodology, with its emphasis on reflection, guided an appraisal of the research process. A discussion of these two case studies suggests that visual methods can be usefully applied within a critical systems framework to offer new insights into ecohealth issues across a diverse range of socio-political contexts. With this article, we hope to open up a conversation with other practitioners to expand the use of visual methods in integrated research.

  7. Contemporary Arctic Sea Level

    Science.gov (United States)

    Cazenave, A. A.

    2017-12-01

    During recent decades, the Arctic region has warmed at a rate about twice that of the rest of the globe. Sea ice melting is increasing, and the Greenland ice sheet is losing mass at an accelerated rate. Arctic warming, the decrease in the sea ice cover and the fresh water input to the Arctic Ocean may eventually impact Arctic sea level. In this presentation, we review our current knowledge of contemporary Arctic sea level changes. Until the beginning of the 1990s, Arctic sea level variations were essentially deduced from tide gauges located along the Russian and Norwegian coastlines. Since then, high-inclination satellite altimetry missions have allowed measuring sea level over a large portion of the Arctic Ocean (up to 80° north). Measuring sea level in the Arctic by satellite altimetry is challenging because the sea ice cover limits the full capacity of this technique. However, adapted processing of raw altimetric measurements significantly increases the number of valid data, hence the data coverage, from which regional sea level variations can be extracted. Over the altimetry era, positive trend patterns are observed over the Beaufort Gyre and along the east coast of Greenland, while negative trends are reported along the Siberian shelf. On average over the Arctic region covered by satellite altimetry, the rate of sea level rise since 1992 is slightly less than the global mean sea level rate (of about 3 mm per year). On the other hand, the interannual variability is quite significant. Space gravimetry data from the GRACE mission and ocean reanalyses provide information on the mass and steric contributions to sea level, hence on the sea level budget. Budget studies show that regional sea level trends over the Beaufort Gyre and along the eastern coast of Greenland are essentially due to salinity changes. However, in terms of the regional average, the net steric component contributes little to the observed sea level trend. The sea level budget in the Arctic

  8. Numerical consideration for multiscale statistical process control method applied to nuclear material accountancy

    International Nuclear Information System (INIS)

    Suzuki, Mitsutoshi; Hori, Masato; Asou, Ryoji; Usuda, Shigekazu

    2006-01-01

    The multiscale statistical process control (MSSPC) method is applied to clarify the elements of material unaccounted for (MUF) in large-scale reprocessing plants using numerical calculations. Continuous wavelet functions are used to decompose the process data, which simulate batch operation superimposed with various types of disturbance, and the disturbance components included in the data are separated in time and frequency space. The MSSPC diagnosis is applied to distinguish abnormal events in the process data and shows how to detect abrupt and protracted diversions using principal component analysis. The quantitative performance of MSSPC on the time series data is shown with average run lengths obtained by Monte Carlo simulation, for comparison with the non-detection probability β. Recent discussion about bias corrections in material balances is introduced, and another approach is presented to evaluate MUF without assuming a measurement error model. (author)
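
    The continuous-wavelet machinery of MSSPC is not reproduced here. As a minimal stand-in for the decomposition idea, the sketch below uses single-level Haar detail coefficients to localize an abrupt shift in a toy material-balance sequence (all values invented):

    ```python
    import math

    def haar_details(x):
        """Single-level Haar wavelet detail coefficients of an even-length sequence."""
        return [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2.0) for i in range(len(x) // 2)]

    # Toy material-balance (MUF) sequence with an abrupt shift between samples 4 and 5.
    muf = [0.0, 0.1, -0.1, 0.05, 0.02, 1.0, 1.1, 0.95, 1.05, 1.0]
    details = haar_details(muf)
    jump_pair = max(range(len(details)), key=lambda i: abs(details[i]))
    print(jump_pair)   # → 2: the largest detail coefficient straddles the jump
    ```

    An abrupt diversion shows up as a large detail coefficient at fine scale, while a protracted diversion would instead shift the coarse-scale approximation coefficients; separating the two is exactly the point of the multiscale decomposition described above.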

  9. The reduction method of statistic scale applied to study of climatic change

    International Nuclear Information System (INIS)

    Bernal Suarez, Nestor Ricardo; Molina Lizcano, Alicia; Martinez Collantes, Jorge; Pabon Jose Daniel

    2000-01-01

    In climate change studies, the global circulation models of the atmosphere (GCMAs) enable one to simulate the global climate, with the field variables represented on grid points about 300 km apart. One particular interest concerns the simulation of possible changes in rainfall and surface air temperature due to an assumed increase of greenhouse gases. However, the models yield the climatic projections on grid points that in most cases do not correspond to the sites of major interest. To achieve local estimates of the climatological variables, methods such as statistical downscaling are applied. In this article we show a case in point by applying canonical correlation analysis (CCA) to the Guajira Region in northeastern Colombia.

  10. Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation

    Science.gov (United States)

    Hatten, Noble; Russell, Ryan P.

    2017-12-01

    A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
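
    As a compact illustration of the propagator's core, the sketch below implements a fixed-step 2-stage Gauss-Legendre implicit Runge-Kutta scheme (order 4) for a scalar ODE, solving the implicit stage equations by simple fixed-point iteration; the flight-dynamics models, variable stepping and parallel stage evaluation of the paper are omitted.

    ```python
    import math

    # Butcher tableau of the 2-stage Gauss-Legendre implicit Runge-Kutta scheme (order 4).
    A = [[0.25, 0.25 - math.sqrt(3.0) / 6.0],
         [0.25 + math.sqrt(3.0) / 6.0, 0.25]]
    B = [0.5, 0.5]
    C = [0.5 - math.sqrt(3.0) / 6.0, 0.5 + math.sqrt(3.0) / 6.0]

    def glirk2_step(f, t, y, h, sweeps=50):
        """One implicit step for scalar y' = f(t, y); stages solved by fixed-point iteration."""
        k = [f(t, y), f(t, y)]                 # initial guess for the stage derivatives
        for _ in range(sweeps):
            k = [f(t + C[i] * h, y + h * sum(A[i][j] * k[j] for j in range(2)))
                 for i in range(2)]
        return y + h * sum(B[i] * k[i] for i in range(2))

    # Test problem y' = -y, y(0) = 1, integrated to t = 1; exact answer exp(-1).
    f = lambda t, y: -y
    y, t, h = 1.0, 0.0, 0.1
    for _ in range(10):
        y = glirk2_step(f, t, y, h)
        t += h
    print(abs(y - math.exp(-1.0)) < 1e-6)   # → True
    ```

    The stage evaluations inside each sweep are independent of one another, which is precisely the structure the paper exploits when it assigns one thread per stage.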

  11. ADVANTAGES AND DISADVANTAGES OF APPLYING EVOLVED METHODS IN MANAGEMENT ACCOUNTING PRACTICE

    Directory of Open Access Journals (Sweden)

    SABOU FELICIA

    2014-05-01

    Full Text Available The evolved methods of management accounting have been developed to remove the disadvantages of the classical methods: they are adapted to the new market conditions and provide much more useful cost-related information, so that the management of the company is able to take strategic decisions. Among the evolved methods, the most widely used is the standard-cost method, owing to the advantages it presents; it is used widely for calculating production costs in some developed countries. The main advantages of the standard-cost method are: knowledge of the production costs in advance, together with measures that ensure compliance with them; systematic control over costs through the deviations calculated from the standard costs, allowing decisions to be made in due time to eliminate deviations and improve the activity; and its character as a method of analysis, control and cost forecasting. Although the advantages of using standards are significant, there are a few disadvantages to the standard-cost method: difficulties can sometimes appear in establishing the deviations from the standard costs, and the method does not allow an accurate calculation of the fixed costs. As a result of the study, we observe that the evolved methods of management accounting, compared with the classical ones, present a series of advantages linked to better analysis, control and forecasting of costs, whereas the main disadvantage is the large amount of work necessary for these methods to be applied.


  13. The Fractional Step Method Applied to Simulations of Natural Convective Flows

    Science.gov (United States)

    Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, in permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has commonly been applied to high Reynolds number flow simulations, but is less common for low Reynolds number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing non-coupled solution of the pressure and the velocity components. It has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient: previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method, which requires direct matrix solvers due to the penalty term. The Fractional Step Method allows iterative solution of the finite-element stiffness matrices, thereby allowing more efficient solution of the matrices. It also lends itself to parallel processing, since the velocity component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the simulations predict the existence of 'channels' within the processing mushy zone and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained. The

  14. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    Science.gov (United States)

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-01-01

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving equations of point clouds. Despite their high accuracy, these might yield no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to their spatial geometric relation, the characteristic lines are made to coincide by a series of rotations and translations, and the transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented, in which the position accuracy of a robot is improved through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203
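
    The characteristic-line construction itself is not reproduced in the abstract. As a compact alternative sketch of geometric registration, the snippet below recovers a 2-D rigid transformation (rotation plus translation) in closed form from matched points, using the classical centered cross/dot-product formula for the optimal angle; the data are synthetic.

    ```python
    import math

    def fit_rigid_2d(src, dst):
        """Least-squares rotation + translation mapping src points onto dst points (2-D)."""
        n = len(src)
        csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
        cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
        # Optimal rotation angle from centered cross- and dot-product sums.
        num = sum((sx - csx) * (dy - cdy) - (sy - csy) * (dx - cdx)
                  for (sx, sy), (dx, dy) in zip(src, dst))
        den = sum((sx - csx) * (dx - cdx) + (sy - csy) * (dy - cdy)
                  for (sx, sy), (dx, dy) in zip(src, dst))
        theta = math.atan2(num, den)
        tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
        ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
        return theta, tx, ty

    # Synthetic check: rotate a triangle by 30 degrees, translate it, recover the motion.
    src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
    ang, t = math.radians(30.0), (0.5, -1.0)
    dst = [(x * math.cos(ang) - y * math.sin(ang) + t[0],
            x * math.sin(ang) + y * math.cos(ang) + t[1]) for x, y in src]
    theta, tx, ty = fit_rigid_2d(src, dst)
    print(round(math.degrees(theta), 6), round(tx, 6), round(ty, 6))   # → 30.0 0.5 -1.0
    ```

    Like the characteristic-line approach above, this closed form avoids inverting a possibly ill-conditioned system: the transformation parameters come directly from geometric sums.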

  15. Resonating group method as applied to the spectroscopy of α-transfer reactions

    Science.gov (United States)

    Subbotin, V. B.; Semjonov, V. M.; Gridnev, K. A.; Hefter, E. F.

    1983-10-01

    In the conventional approach to α-transfer reactions, the finite- and/or zero-range distorted-wave Born approximation is used in liaison with a macroscopic description of the captured α particle in the residual nucleus. Here the specific example of 16O(6Li,d)20Ne reactions at different projectile energies is taken to present a microscopic resonating group method analysis of the α particle in the final nucleus (for the reaction part, the simple zero-range distorted-wave Born approximation is employed). In the discussion of suitable nucleon-nucleon interactions, force number one of the effective interactions presented by Volkov is shown to be the most appropriate for the system considered. Application of the continuous analog of Newton's method to the evaluation of the resonating group method equations yields increased accuracy with respect to traditional methods. The resonating group method description induces only minor changes in the structures of the angular distributions, but it does serve its purpose in yielding reliable and consistent spectroscopic information. NUCLEAR STRUCTURE 16O(6Li,d)20Ne; E=20 to 32 MeV; calculated B(E2), reduced widths, dσ/dΩ; extracted α-spectroscopic factors. ZRDWBA with microscopic RGM description of the residual α particle in 20Ne; application of the continuous analog of Newton's method; tested and applied Volkov force No. 1; direct mechanism.

  16. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    Directory of Open Access Journals (Sweden)

    Bailing Liu

    2016-02-01

    Full Text Available Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving equations of point clouds. Despite their high accuracy, these might yield no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to their spatial geometric relation, the characteristic lines are made to coincide by a series of rotations and translations, and the transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented, in which the position accuracy of a robot is improved through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.

  17. The Inverse System Method Applied to the Derivation of Power System Non—linear Control Laws

    Institute of Scientific and Technical Information of China (English)

    Donghai LI; Xuezhi JIANG; et al.

    1997-01-01

    The differential geometric method has been applied effectively to a series of power system non-linear control problems. However, a set of differential equations must be solved to obtain the required diffeomorphic transformation, so the derivation of control laws is very complicated. In fact, because of the specific structure of power system models, the required diffeomorphic transformation may be obtained directly, so it is unnecessary to solve a set of differential equations. In addition, the inverse system method is in reality equivalent to the differential geometric method and is not limited to affine nonlinear systems. Its physical meaning can be viewed directly, and its deduction needs only algebraic operations and differentiation, so control laws can be obtained easily and the application to engineering is very convenient. The authors of this paper take steam valving control of a power system as a typical case study. It is demonstrated that the control law deduced by the inverse system method is just the same as the one obtained by the differential geometric method. This conclusion will simplify the derivation of control laws for steam valving, excitation, converters and static var compensators by the differential geometric method, and may be suited to similar control problems in other areas.

  18. Comparison of Heuristic Methods Applied for Optimal Operation of Water Resources

    Directory of Open Access Journals (Sweden)

    Alireza Borhani Dariane

    2009-01-01

    Full Text Available Water resources optimization problems are usually complex and hard to solve using ordinary optimization methods, or at least such methods are not economically efficient. A great number of studies have been conducted in quest of suitable methods capable of handling such problems. In recent years, some new heuristic methods, such as genetic and ant algorithms, have been introduced in systems engineering. Preliminary applications of these methods to water resources problems have shown that some of them are powerful tools, capable of solving complex problems. In this paper, the application of such heuristic methods as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) has been studied for optimizing reservoir operation. The Dez Dam reservoir in Iran was chosen for a case study. The methods were applied and compared using short-term (one year) and long-term models. Comparison of the results showed that GA outperforms both DP and ACO in finding true global optimum solutions and operating rules.
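
    The Dez Dam models are not reproduced in the abstract. The sketch below shows a toy genetic algorithm for a 12-month reservoir release schedule that tracks a constant demand under storage bounds; the inflows, demand, penalty weights and GA operators are all invented and deliberately minimal.

    ```python
    import random

    random.seed(42)

    INFLOW = [4, 6, 8, 7, 5, 3, 2, 2, 3, 5, 6, 5]    # monthly inflows (invented units)
    DEMAND = 4.5                                      # constant monthly demand
    S0, SMAX = 20.0, 40.0                             # initial storage and capacity

    def cost(releases):
        """Squared deviation from demand, plus heavy penalties for storage violations."""
        s, total = S0, 0.0
        for q, r in zip(INFLOW, releases):
            s += q - r                                # reservoir mass balance
            total += (r - DEMAND) ** 2
            if s < 0.0 or s > SMAX:
                total += 1000.0 + abs(s)
        return total

    def genetic_algorithm(pop_size=40, generations=200):
        pop = [[random.uniform(0.0, 10.0) for _ in INFLOW] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)
            elite = pop[: pop_size // 2]              # elitist truncation selection
            children = []
            for _ in range(pop_size - len(elite)):
                a, b = random.sample(elite, 2)
                cut = random.randrange(1, len(INFLOW))            # one-point crossover
                child = a[:cut] + b[cut:]
                for i in range(len(child)):                       # per-gene mutation
                    if random.random() < 0.2:
                        child[i] = min(10.0, max(0.0, child[i] + random.gauss(0.0, 0.5)))
                children.append(child)
            pop = elite + children
        return min(pop, key=cost)

    best = genetic_algorithm()
    print(cost(best) < 100.0)   # → True: far better than typical random schedules
    ```

    Real reservoir studies replace this toy fitness with simulated benefits over the operating horizon; the selection/crossover/mutation loop, however, is the same pattern the abstract refers to.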

  19. Boundary element methods applied to two-dimensional neutron diffusion problems

    International Nuclear Information System (INIS)

    Itagaki, Masafumi

    1985-01-01

    The Boundary element method (BEM) has been applied to two-dimensional neutron diffusion problems. The boundary integral equation and its discretized form have been derived. Some numerical techniques have been developed, which can be applied to critical and fixed-source problems including multi-region ones. Two types of test programs have been developed according to whether the 'zero-determinant search' or the 'source iteration' technique is adopted for criticality search. Both programs require only the fluxes and currents on boundaries as the unknown variables. The former allows a reduction in computing time and memory in comparison with the finite element method (FEM). The latter is not always efficient in terms of computing time due to the domain integral related to the inhomogeneous source term; however, this domain integral can be replaced by the equivalent boundary integral for a region with a non-multiplying medium or with a uniform source, resulting in a significant reduction in computing time. The BEM, as well as the FEM, is well suited for solving irregular geometrical problems for which the finite difference method (FDM) is unsuited. The BEM also solves problems with infinite domains, which cannot be solved by the ordinary FEM and FDM. Some simple test calculations are made to compare the BEM with the FEM and FDM, and discussions are made concerning the relative merits of the BEM and problems requiring future solution. (author)

  20. Methodical basis of training of cadets for the military applied heptathlon competitions

    Directory of Open Access Journals (Sweden)

    R.V. Anatskyi

    2017-12-01

    Full Text Available The purpose of the research is to develop methodical bases for training cadets for military applied heptathlon competitions. Material and methods: Cadets in their 2nd-3rd year of study, aged 19-20 years (n=20), participated in the research. Cadets were selected on the basis of their best results in the exercises included in the program of military applied heptathlon competitions (100 m run, 50 m freestyle swimming, Kalashnikov rifle shooting, pull-ups, obstacle course, grenade throwing, 3000 m run). Preparation took place on the basis of a training center. All training was organized and carried out according to the following methodical basics: in a weekly preparation microcycle, cadets trained twice a day on five days (one training session on Saturday, rest on Sunday), performing the selected exercises with individual loads. Results: Sport scores demonstrated top results in the 100 m run, the 3000 m run and pull-ups. The indices for the obstacle course were much lower than expected, and rather low results were demonstrated in swimming and shooting. Conclusions: The results indicate the necessity of improving the quality of cadets' weapons proficiency and their physical readiness to perform exercises requiring the complex demonstration of all physical qualities.

  1. Arctic Rabies – A Review

    Directory of Open Access Journals (Sweden)

    Prestrud Pål

    2004-03-01

    Full Text Available Rabies seems to persist throughout most arctic regions; the northern parts of Norway, Sweden and Finland are the only part of the Arctic where rabies has not been diagnosed in recent times. The arctic fox is the main host, and the same arctic virus variant seems to infect the arctic fox throughout the range of this species. The epidemiology of rabies seems to have certain common characteristics in arctic regions, but key questions such as how the disease is maintained and spread remain largely unanswered. The virus has spread and initiated new epidemics in other species as well, such as the red fox and the raccoon dog. Large land areas and a cold climate complicate the control of the disease, but experimental oral vaccination of arctic foxes has been successful. This article summarises the current knowledge and the typical characteristics of arctic rabies, including its distribution and epidemiology.

  2. Nutrient Runoff Losses from Liquid Dairy Manure Applied with Low-Disturbance Methods.

    Science.gov (United States)

    Jokela, William; Sherman, Jessica; Cavadini, Jason

    2016-09-01

    Manure applied to cropland is a source of phosphorus (P) and nitrogen (N) in surface runoff and can contribute to impairment of surface waters. Tillage immediately after application incorporates manure into the soil, which may reduce nutrient loss in runoff as well as N loss via NH₃ volatilization. However, tillage also incorporates crop residue, which reduces surface cover and may increase erosion potential. We applied liquid dairy manure in a silage corn (Zea mays L.)-cereal rye (Secale cereale L.) cover crop system in late October using methods designed to incorporate manure with minimal soil and residue disturbance. These include strip-till injection and tine aerator-band manure application, which were compared with standard broadcast application, either incorporated with a disk or left on the surface. Runoff was generated with a portable rainfall simulator (42 mm h⁻¹ for 30 min) three separate times: (i) 2 to 5 d after the October manure application, (ii) in early spring, and (iii) after tillage and planting. In the post-application runoff, the highest losses of total P and dissolved reactive P were from surface-applied manure. Dissolved P loss was reduced 98% by strip-till injection; this result was not statistically different from the no-manure control. Reductions from the aerator-band method and disk incorporation were 53 and 80%, respectively. Total P losses followed a similar pattern, with an 87% reduction from injected manure. Runoff losses of N generally followed patterns similar to those of P. Losses of P and N were, in most cases, lower in the spring rain simulations, with fewer significant treatment effects. Overall, results show that low-disturbance manure application methods can significantly reduce nutrient runoff losses compared with surface application while maintaining residue cover better than incorporation by tillage. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  3. Collaborative Research: Improving Decadal Prediction of Arctic Climate Variability and Change Using a Regional Arctic

    Energy Technology Data Exchange (ETDEWEB)

    Gutowski, William J. [Iowa State Univ., Ames, IA (United States)

    2017-12-28

    This project developed and applied a regional Arctic System model for enhanced decadal predictions. It built on successful research by four of the current PIs with support from the DOE Climate Change Prediction Program, which has resulted in the development of a fully coupled Regional Arctic Climate Model (RACM) consisting of atmosphere, land-hydrology, ocean and sea ice components. An expanded RACM, a Regional Arctic System Model (RASM), has been set up to include ice sheets, ice caps, mountain glaciers, and dynamic vegetation to allow investigation of coupled physical processes responsible for decadal-scale climate change and variability in the Arctic. RASM can have high spatial resolution (~4-20 times higher than currently practical in global models) to advance modeling of critical processes and determine the need for their explicit representation in Global Earth System Models (GESMs). The pan-Arctic region is a key indicator of the state of global climate through polar amplification. However, a system-level understanding of critical arctic processes and feedbacks needs further development. Rapid climate change has occurred in a number of Arctic System components during the past few decades, including retreat of the perennial sea ice cover, increased surface melting of the Greenland ice sheet, acceleration and thinning of outlet glaciers, reduced snow cover, thawing permafrost, and shifts in vegetation. Such changes could have significant ramifications for global sea level, the ocean thermohaline circulation and heat budget, ecosystems, native communities, natural resource exploration, and commercial transportation. The overarching goal of the RASM project has been to advance understanding of past and present states of arctic climate and to improve seasonal to decadal predictions. To do this, the project has focused on variability and long-term change of energy and freshwater flows through the arctic climate system. The three foci of this research are: - Changes

  4. Electrochemical noise measurement techniques and the reversing dc potential drop method applied to stress corrosion tests

    International Nuclear Information System (INIS)

    Aly, Omar Fernandes; Andrade, Arnaldo Paes de; Mattar Neto, Miguel; Aoki, Idalina Vieira

    2002-01-01

    This paper aims to collect information on and discuss electrochemical noise measurements and the reversing dc potential drop method, applied to stress corrosion tests that can be used to evaluate the nucleation and growth of stress corrosion cracking in Alloy 600 and/or Alloy 182 specimens from the Angra I Nuclear Power Plant. We thereby intend to establish a standard procedure for tests to be carried out on the new autoclave equipment at the Laboratorio de Eletroquimica e Corrosao do Departamento de Engenharia Quimica da Escola Politecnica da Universidade de Sao Paulo (Electrochemical and Corrosion Laboratory of the Chemical Engineering Department of the Polytechnic School of the University of Sao Paulo), Brazil. (author)

  5. Making Design Decisions Visible: Applying the Case-Based Method in Designing Online Instruction

    Directory of Open Access Journals (Sweden)

    Heng Luo

    2011-01-01

    Full Text Available The instructional intervention in this design case is a self-directed online tutorial that applies the case-based method to teach educators how to design and conduct entrepreneurship programs for elementary school students. In this article, the authors describe the major decisions made in each phase of the design and development process, explicate the rationales behind them, and demonstrate their effect on the production of the tutorial. Based on such analysis, the guidelines for designing case-based online instruction are summarized for the design case.

  6. A semiempirical method of applying the dechanneling correction in the extraction of disorder distribution

    International Nuclear Information System (INIS)

    Walker, R.S.; Thompson, D.A.; Poehlman, S.W.

    1977-01-01

    The application of single, plural or multiple scattering theories to the determination of defect dechanneling in channeling-backscattering disorder measurements is re-examined. A semiempirical modification to the method is described that results in making the extracted disorder and disorder distribution relatively insensitive to the scattering model employed. The various models and modifications have been applied to the 1 to 2 MeV He⁺ channeling-backscatter data obtained from 20 to 80 keV H⁺ to Ne⁺ bombarded Si, GaP and GaAs at 50 K and 300 K. (author)

  7. Zoltàn Dörnyei, Research Methods in Applied Linguistics

    OpenAIRE

    Marie-Françoise Narcy-Combes

    2012-01-01

    Research Methods in Applied Linguistics is a practical and accessible book aimed primarily at beginning researchers and doctoral students in applied linguistics and language didactics, for whom it is a very useful companion. Its clear style and straightforward organization make it easy and pleasant to read and render the various concepts readily understandable to all. It presents an overview of research methodology in applied linguistics,...

  8. Cork-resin ablative insulation for complex surfaces and method for applying the same

    Science.gov (United States)

    Walker, H. M.; Sharpe, M. H.; Simpson, W. G. (Inventor)

    1980-01-01

    A method of applying cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.

  9. Perturbative methods applied for sensitive coefficients calculations in thermal-hydraulic systems

    International Nuclear Information System (INIS)

    Andrade Lima, F.R. de

    1993-01-01

    The differential formalism and the Generalized Perturbation Theory (GPT) are applied to sensitivity analysis of thermal-hydraulics problems related to pressurized water reactor cores. The equations describing the thermal-hydraulic behavior of these reactors cores, used in COBRA-IV-I code, are conveniently written. The importance function related to the response of interest and the sensitivity coefficient of this response with respect to various selected parameters are obtained by using Differential and Generalized Perturbation Theory. The comparison among the results obtained with the application of these perturbative methods and those obtained directly with the model developed in COBRA-IV-I code shows a very good agreement. (author)

  10. Geometric methods for estimating representative sidewalk widths applied to Vienna's streetscape surfaces database

    Science.gov (United States)

    Brezina, Tadej; Graser, Anita; Leth, Ulrich

    2017-04-01

    Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between methods as well as to previous research, existing databases and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.
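The first two width estimators above can be approximated with elementary geometry. The sketch below uses a hypothetical 2 m by 30 m sidewalk strip and a grid-search inscribed circle (an approximation, not an exact algorithm); it contrasts the maximum inscribed diameter, which recovers the usable width, with the circumscribing diameter, which is dominated by segment length:

```python
import math

# Hypothetical sidewalk polygon (a 2 m x 30 m strip); coordinates in metres.
POLY = [(0, 0), (30, 0), (30, 2), (0, 2)]

def seg_dist(p, a, b):
    """Distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def inside(p, poly):
    """Ray-casting point-in-polygon test."""
    x, y, n, ok = p[0], p[1], len(poly), False
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            ok = not ok
    return ok

def max_inscribed_radius(poly, step=0.05):
    """Grid-search approximation of the maximum inscribed circle radius."""
    xs, ys = [p[0] for p in poly], [p[1] for p in poly]
    best, x = 0.0, min(xs)
    while x <= max(xs):
        y = min(ys)
        while y <= max(ys):
            if inside((x, y), poly):
                r = min(seg_dist((x, y), poly[i], poly[(i + 1) % len(poly)])
                        for i in range(len(poly)))
                best = max(best, r)
            y += step
        x += step
    return best

def circumscribed_diameter(poly):
    """Polygon diameter: a lower bound on the circumscribing circle diameter
    (exact for this rectangle, whose farthest pair are opposite corners)."""
    return max(math.dist(p, q) for p in poly for q in poly)

print(2 * max_inscribed_radius(POLY))   # representative width, about 2 m
print(circumscribed_diameter(POLY))     # dominated by the 30 m strip length
```

The contrast between the two numbers illustrates why the paper derives a representative width from inscribed geometry rather than from the circumscribing measure alone.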

  11. MAIA - Method for Architecture of Information Applied: methodological construct of information processing in complex contexts

    Directory of Open Access Journals (Sweden)

    Ismael de Moura Costa

    2017-04-01

    Full Text Available Introduction: This paper presents the evolution of MAIA, the Method for Architecture of Information Applied, its structure, the results obtained, and three practical applications. Objective: To propose a methodological construct for the treatment of complex information, distinguishing information spaces and revealing the configurations inherent in those spaces. Methodology: The argument is elaborated from theoretical research of an analytical character, using distinction as a way to express concepts. Phenomenology is adopted as the philosophical position, considering the correlation between Subject↔Object. The research also considers the notion of interpretation as an integrating element for the definition of concepts. With these postulates, the steps to transform the information spaces are formulated. Results: The article shows how the method is structured to process information in its contexts, starting from a succession of evolutionary cycles, divided into moments, which in turn evolve into transformation acts. Conclusions: Besides its possible applications as a scientific method, the article presents the method as a configuration tool for information spaces and as a generator of ontologies. Finally, it presents a brief summary of the analysis made by researchers who have already evaluated the method with respect to these three aspects.

  12. A Review of Auditing Methods Applied to the Content of Controlled Biomedical Terminologies

    Science.gov (United States)

    Zhu, Xinxin; Fan, Jung-Wei; Baorto, David M.; Weng, Chunhua; Cimino, James J.

    2012-01-01

    Although controlled biomedical terminologies have been with us for centuries, it is only in the last couple of decades that close attention has been paid to the quality of these terminologies. The result of this attention has been the development of auditing methods that apply formal methods to assessing whether terminologies are complete and accurate. We have performed an extensive literature review to identify published descriptions of these methods and have created a framework for characterizing them. The framework considers manual, systematic and heuristic methods that use knowledge (within or external to the terminology) to measure quality factors of different aspects of the terminology content (terms, semantic classification, and semantic relationships). The quality factors examined included concept orientation, consistency, non-redundancy, soundness and comprehensive coverage. We reviewed 130 studies that were retrieved based on keyword search on publications in PubMed, and present our assessment of how they fit into our framework. We also identify which terminologies have been audited with the methods and provide examples to illustrate each part of the framework. PMID:19285571

  13. Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme

    Directory of Open Access Journals (Sweden)

    Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin

    2012-08-01

    Full Text Available The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct (motor) torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation of the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory tracking error of the AFC scheme. The knowledge is developed from the trajectory tracking error characteristic based on previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is forced. An experimental study was performed using simulation work on the AFC scheme with KBTEPM applied to a two-planar manipulator, in which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory tracking error significantly, even in the presence of the introduced disturbances. Key Words: active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.

  14. Complex Method Mixed with PSO Applying to Optimization Design of Bridge Crane Girder

    Directory of Open Access Journals (Sweden)

    He Yan

    2017-01-01

    Full Text Available In engineering design, the basic complex method lacks sufficient global search ability for nonlinear optimization problems, so this paper presents a complex method mixed with particle swarm optimization (PSO): the optimal particle, evaluated from the fitness function of the particle swarm, displaces the worst complex vertex so as to realize the optimization principle of the largest distance from the complex centre. This method is applied to the constrained optimization design of the box girder of a bridge crane. First, a mathematical model of the girder optimization is set up, in which the cross-sectional area of the box girder is taken as the objective function, its four size parameters as design variables, and requirements on girder mechanical performance, manufacturing process, boundary sizes and so on as constraint conditions. Then the complex method mixed with PSO is used to solve the optimization design problem of the crane box girder as a constrained optimization problem, and its optimal results achieve the goal of lightweight design and reduce the crane manufacturing cost. Practical engineering calculations and comparative analysis with the basic complex method show that the method is reliable, practical and efficient.
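A minimal PSO kernel of the kind hybridized here can be sketched as follows. The girder model is replaced by a toy penalized constraint problem (minimize x² + y² subject to x + y ≥ 1), since the paper's cross-section model is not reproduced here:

```python
import random

# Toy stand-in for the girder problem: minimize x^2 + y^2 subject to
# x + y >= 1, handled with a quadratic penalty. Illustrative only.
def f(p):
    x, y = p
    g = max(0.0, 1.0 - (x + y))          # constraint violation
    return x * x + y * y + 1e3 * g * g   # penalized objective

def pso(n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO with inertia weight w and
    cognitive/social coefficients c1, c2."""
    rng = random.Random(0)
    pos = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]          # personal bests
    gbest = min(pbest, key=f)[:]         # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
print(best)  # near (0.5, 0.5), the constrained optimum
```

In the hybrid scheme described in the abstract, an update of this kind would replace the worst vertex of the complex rather than run as a free-standing optimizer.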

  15. Efficient alpha particle detection by CR-39 applying 50 Hz-HV electrochemical etching method

    International Nuclear Information System (INIS)

    Sohrabi, M.; Soltani, Z.

    2016-01-01

    Alpha particles can be detected by CR-39 by applying either chemical etching (CE), electrochemical etching (ECE), or combined pre-etching and ECE, usually through a multi-step HF-HV ECE process at temperatures much higher than room temperature. With pre-etching, the characteristic responses of fast-neutron-induced recoil tracks in CR-39 under HF-HV ECE versus KOH normality (N) have shown two high-sensitivity peaks around 5–6 and 15–16 N and a large-diameter peak with a minimum sensitivity around 10–11 N at 25°C. On the other hand, the 50 Hz-HV ECE method recently advanced in our laboratory detects alpha particles with high efficiency and a broad registration energy range with small ECE tracks in polycarbonate (PC) detectors. By taking advantage of the CR-39 sensitivity to alpha particles, the efficacy of the 50 Hz-HV ECE method and the exotic responses of CR-39 under different KOH normalities, the detection characteristics of 0.8 MeV alpha particle tracks were studied in 500 μm CR-39 for different fluences, ECE durations and KOH normalities. Alpha registration efficiency increased with ECE duration, reaching 90 ± 2% after 6–8 h, beyond which plateaus are reached. Alpha track density versus fluence is linear up to 10⁶ tracks cm⁻². The efficiency and mean track diameter versus alpha fluence up to 10⁶ alphas cm⁻² decrease as the fluence increases. Background track density and minimum detection limit are linear functions of ECE duration and increase as normality increases. The CR-39 processed for the first time in this study by the 50 Hz-HV ECE method proved to provide a simple, efficient and practical alpha detection method at room temperature. - Highlights: • Alpha particles of 0.8 MeV were detected in CR-39 by the 50 Hz-HV ECE method. • Efficiency/track diameter was studied vs fluence and time for 3 KOH normalities. • Background track density and minimum detection limit vs duration were studied. • A new simple, efficient and low-cost alpha detection method

  16. Can we constrain postglacial sedimentation in the western Arctic Ocean by ramped pyrolysis 14C? A case study from the Chukchi-Alaskan margin.

    Science.gov (United States)

    Suzuki, K.; Yamamoto, M.; Rosenheim, B. E.; Omori, T.; Polyak, L.; Nam, S. I.

    2017-12-01

    The Arctic Ocean underwent dramatic climate changes in the past. Variations in sea-ice extent and the ocean current system in the Arctic cause changes in surface albedo and deep water formation, which have global climatic implications. However, Arctic paleoceanographic studies lag behind those of the other oceans, due largely to chronostratigraphic difficulties. One of the reasons for this is the scarcity of material suitable for 14C dating in large areas of the Arctic seafloor. To enable improved age constraints for sediments impoverished in datable material, we apply the ramped pyrolysis 14C method (Ramped PyrOx 14C; Rosenheim et al., 2008) to sedimentary records from the Chukchi-Alaska margin recovering Holocene to late-glacial deposits. Samples were divided into five fraction products by gradually heating sedimentary organic carbon from ambient laboratory temperature to 1000°C. The thermographs show a trimodal pattern of organic matter decomposition over temperature, and we consider that the CO2 generated at the lowest temperature range was derived from autochthonous organic carbon contemporaneous with sediment deposition, similar to studies in the Antarctic margin and elsewhere. For verification of results, some of the samples treated for ramped pyrolysis 14C were taken from intervals dated earlier by AMS 14C using bivalve mollusks. Ultimately, our results allow a new appraisal of deglacial to Holocene deposition at the Chukchi-Alaska margin, with potential to be applied to other regions of the Arctic Ocean.

  17. A METHOD FOR PREPARING A SUBSTRATE BY APPLYING A SAMPLE TO BE ANALYSED

    DEFF Research Database (Denmark)

    2017-01-01

    The invention relates to a method for preparing a substrate (105a) comprising a sample reception area (110) and a sensing area (111). The method comprises the steps of: 1) applying a sample on the sample reception area; 2) rotating the substrate around a predetermined axis; 3) during rotation, at least part of the liquid travels from the sample reception area to the sensing area due to capillary forces acting between the liquid and the substrate; and 4) removing the wave of particles and liquid formed at one end of the substrate. The sensing area is closer to the predetermined axis than the sample reception area. The sample comprises a liquid part and particles suspended therein.

  18. Simplified inelastic analysis methods applied to fast breeder reactor core design

    International Nuclear Information System (INIS)

    Abo-El-Ata, M.M.

    1978-01-01

    The paper starts with a review of some currently available simplified inelastic analysis methods used in elevated temperature design for evaluating plastic and thermal creep strains. The primary purpose of the paper is to investigate how these simplified methods may be applied to fast breeder reactor core design where neutron irradiation effects are significant. One of the problems discussed is irradiation-induced creep and its effect on shakedown, ratcheting, and plastic cycling. Another problem is the development of swelling-induced stress which is an additional loading mechanism and must be taken into account. In this respect an expression for swelling-induced stress in the presence of irradiation creep is derived and a model for simplifying the stress analysis under these conditions is proposed. As an example, the effects of irradiation creep and swelling induced stress on the analysis of a thin walled tube under constant internal pressure and intermittent heat fluxes, simulating a fuel pin, is presented

  19. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Science.gov (United States)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation in the mean of neutrality of Tunisian Berber populations.

  20. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Directory of Open Access Journals (Sweden)

    Samir Saoudi

    2008-07-01

    Full Text Available The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation in the mean of neutrality of Tunisian Berber populations.
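When J(f) is approximated analytically by its value for a Gaussian reference density, the plug-in iteration collapses to a closed-form bandwidth, h = σ(4/3n)^(1/5). The sketch below uses that simple Gaussian-reference rule (the zeroth step of a plug-in scheme, not the authors' full iterative procedure):

```python
import math
import random

def plug_in_bandwidth(xs):
    """Gaussian-reference plug-in bandwidth: J(f) is replaced by its
    analytical value for a normal density, giving h = sigma*(4/(3n))**0.2."""
    n = len(xs)
    m = sum(xs) / n
    sigma = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return sigma * (4.0 / (3.0 * n)) ** 0.2

def kde(xs, h, x):
    """Gaussian-kernel density estimate at point x."""
    c = 1.0 / (len(xs) * h * math.sqrt(2 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs)

rng = random.Random(1)
sample = [rng.gauss(0, 1) for _ in range(500)]
h = plug_in_bandwidth(sample)
print(h, kde(sample, h, 0.0))
```

For a standard normal sample the estimate at x = 0 should land near the true density value 1/sqrt(2*pi), about 0.4; heavier plug-in stages refine h by re-estimating J(f) from the data.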

  1. Infrared thermography inspection methods applied to the target elements of W7-X divertor

    Energy Technology Data Exchange (ETDEWEB)

    Missirlian, M. [Association Euratom-CEA, CEA/DSM/DRFC, CEA/Cadarache, F-13108 Saint Paul Lez Durance (France)], E-mail: marc.missirlian@cea.fr; Traxler, H. [PLANSEE SE, Technology Center, A-6600 Reutte (Austria); Boscary, J. [Max-Planck-Institut fuer Plasmaphysik, Euratom Association, Boltzmannstr. 2, D-85748 Garching (Germany); Durocher, A.; Escourbiac, F.; Schlosser, J. [Association Euratom-CEA, CEA/DSM/DRFC, CEA/Cadarache, F-13108 Saint Paul Lez Durance (France); Schedler, B.; Schuler, P. [PLANSEE SE, Technology Center, A-6600 Reutte (Austria)

    2007-10-15

    The non-destructive examination (NDE) method is one of the key issues in developing highly loaded plasma-facing components (PFCs) for next-generation fusion devices such as W7-X and ITER. The most critical step is certainly the fabrication and the examination of the bond between the armour and the heat sink. Two inspection systems based on infrared thermography methods, namely transient thermography (SATIR-CEA) and pulsed thermography (ARGUS-PLANSEE), are being developed and have been applied to the pre-series target elements of the W7-X divertor. Results obtained from qualification experiments performed on target elements with artificial calibrated defects demonstrated the capability of the two techniques and raised the efficiency of inspection to a level appropriate for industrial application.

  2. Infrared thermography inspection methods applied to the target elements of W7-X divertor

    International Nuclear Information System (INIS)

    Missirlian, M.; Traxler, H.; Boscary, J.; Durocher, A.; Escourbiac, F.; Schlosser, J.; Schedler, B.; Schuler, P.

    2007-01-01

    The non-destructive examination (NDE) method is one of the key issues in developing highly loaded plasma-facing components (PFCs) for next-generation fusion devices such as W7-X and ITER. The most critical step is certainly the fabrication and the examination of the bond between the armour and the heat sink. Two inspection systems based on infrared thermography methods, namely transient thermography (SATIR-CEA) and pulsed thermography (ARGUS-PLANSEE), are being developed and have been applied to the pre-series target elements of the W7-X divertor. Results obtained from qualification experiments performed on target elements with artificial calibrated defects demonstrated the capability of the two techniques and raised the efficiency of inspection to a level appropriate for industrial application.

  3. The fundamental parameter method applied to X-ray fluorescence analysis with synchrotron radiation

    Science.gov (United States)

    Pantenburg, F. J.; Beier, T.; Hennrich, F.; Mommsen, H.

    1992-05-01

    Quantitative X-ray fluorescence analysis applying the fundamental parameter method is usually restricted to monochromatic excitation sources. It is shown here that such analyses can also be performed with a white synchrotron radiation spectrum. To determine absolute elemental concentration values it is necessary to know the spectral distribution of this spectrum. A newly designed and tested experimental setup, which uses the synchrotron radiation emitted from electrons in a bending magnet of ELSA (the electron stretcher accelerator of the University of Bonn), is presented. The determination of the exciting spectrum, described by the given electron beam parameters, is limited by uncertainties in the vertical electron beam size and divergence. We describe a method which allows us to determine the relative and absolute spectral distributions needed for accurate analysis. First test measurements of different alloys and standards of known composition demonstrate that it is possible to determine accurate concentration values in bulk and trace element analysis.
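
    The quantification step can be sketched in miniature: with a known (white) spectral distribution, the fluorescence intensity of an element is roughly proportional to its concentration times the excitation spectrum integrated against a photoionization cross-section above the element's absorption edge, so dividing a measured intensity by the unit-concentration sensitivity recovers the concentration. The spectrum values, the E**-3 cross-section model, and the edge energy below are illustrative assumptions, not data from the study.

```python
# Hypothetical white excitation spectrum: (energy in keV, relative photon flux).
SPECTRUM = [(6.0, 1.00), (7.0, 0.90), (8.0, 0.75), (9.0, 0.60), (10.0, 0.45)]

def primary_intensity(concentration, spectrum, edge_kev):
    """Fluorescence intensity model: concentration times the excitation
    spectrum summed against a simple E**-3 cross-section above the edge."""
    return concentration * sum(flux * e ** -3 for e, flux in spectrum if e > edge_kev)

# Forward-simulate a "measurement" for a 30% concentration, then invert it
# against the unit-concentration sensitivity, as fundamental-parameter
# quantification does once the spectral distribution is known.
measured = primary_intensity(0.30, SPECTRUM, edge_kev=7.1)
sensitivity = primary_intensity(1.0, SPECTRUM, edge_kev=7.1)
recovered = measured / sensitivity
print(round(recovered, 3))  # → 0.3
```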

  4. Super-convergence of Discontinuous Galerkin Method Applied to the Navier-Stokes Equations

    Science.gov (United States)

    Atkins, Harold L.

    2009-01-01

    The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin method are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin method are applied to the simulation of unsteady viscous flow over a two-dimensional cylinder. Convergence of the period of the naturally occurring oscillation is examined and shown to converge at a rate of 2p+1, where p is the polynomial degree of the discontinuous Galerkin basis. Comparisons are made between the different discretizations and with theoretical analysis.
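
    A rate such as 2p+1 is typically checked numerically: given the error of a computed quantity (here, the oscillation period) at two mesh sizes, the observed order of accuracy is log(e1/e2)/log(h1/h2). The error values below are hypothetical, chosen only to illustrate the arithmetic for p = 2.

```python
import math

def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    """Observed convergence order from errors at two mesh resolutions."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Hypothetical period errors for a p = 2 basis: halving h should shrink the
# error by about 2**(2p+1) = 32 if the super-convergent rate 2p+1 holds.
p = 2
e_coarse, e_fine = 1.0e-3, 1.0e-3 / 2 ** (2 * p + 1)
rate = observed_order(e_coarse, e_fine, h_coarse=0.1, h_fine=0.05)
print(round(rate, 2))  # → 5.0
```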

  5. Data Analytics of Mobile Serious Games: Applying Bayesian Data Analysis Methods

    Directory of Open Access Journals (Sweden)

    Heide Lukosch

    2018-03-01

    Full Text Available Traditional teaching methods in the field of resuscitation training show some limitations, while teaching the right actions in critical situations could increase the number of people saved after a cardiac arrest. For our study, we developed a mobile game to support the transfer of theoretical knowledge on resuscitation. The game was tested at three schools of further education, and data were collected from 171 players. To analyze this large data set, drawn from different sources and of varying quality, different types of data modeling and analyses had to be applied. This approach showed its usefulness in analyzing the large set of data from different sources. It revealed some interesting findings, such as that female players outperformed male ones, and that the game, which fosters informal, self-directed learning, is as effective as the traditional formal learning method.

  6. An input feature selection method applied to fuzzy neural networks for signal estimation

    International Nuclear Information System (INIS)

    Na, Man Gyun; Sim, Young Rok

    2001-01-01

    It is well known that the performance of a fuzzy neural network strongly depends on the input features selected for its training. In applications to sensor signal estimation, there are a large number of input variables related to an output. As the number of input variables increases, the training time required by a fuzzy neural network increases exponentially. Thus, it is essential to reduce the number of inputs to a fuzzy neural network and to select the optimum number of mutually independent inputs that are able to clearly define the input-output mapping. In this work, principal component analysis (PCA), genetic algorithms (GA) and probability theory are combined to select new important input features. The proposed feature selection method is applied to the signal estimation of the steam generator water level, the hot-leg flowrate, the pressurizer water level and the pressurizer pressure sensors in pressurized water reactors and compared with other input feature selection methods.
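
    The selection goal described above, inputs that are informative about the output yet mutually independent, can be illustrated with a much simpler greedy correlation filter than the paper's PCA/GA/probability combination. The sensor readings and the redundancy threshold below are hypothetical.

```python
import math

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def select_inputs(candidates, target, max_mutual=0.9):
    """Greedy filter: rank candidate inputs by |correlation with the target|,
    then drop any input nearly redundant with one already chosen."""
    ranked = sorted(candidates, key=lambda name: -abs(pearson(candidates[name], target)))
    chosen = []
    for name in ranked:
        if all(abs(pearson(candidates[name], candidates[c])) < max_mutual for c in chosen):
            chosen.append(name)
    return chosen

# Hypothetical sensor readings: x2 is almost a copy of x1 and should be dropped.
candidates = {
    "x1": [1, 2, 3, 4, 5, 6],
    "x2": [2, 4, 6, 8, 10, 13],
    "x3": [1, -1, 2, -2, 3, -3],
}
target = [1.1, 2.0, 2.9, 4.2, 5.0, 6.1]
print(select_inputs(candidates, target))  # → ['x1', 'x3']
```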

  7. Performance comparison of two efficient genomic selection methods (gsbay & MixP) applied in aquacultural organisms

    Science.gov (United States)

    Su, Hailin; Li, Hengde; Wang, Shi; Wang, Yangfan; Bao, Zhenmin

    2017-02-01

    Genomic selection is more and more popular in animal and plant breeding industries all around the world, as it can be applied early in life without impacting selection candidates. The objective of this study was to bring the advantages of genomic selection to scallop breeding. Two genomic selection tools, MixP and gsbay, were applied to genomic evaluation of simulated data and Zhikong scallop (Chlamys farreri) field data. The results were compared with the genomic best linear unbiased prediction (GBLUP) method, which has been applied widely. Our results showed that both MixP and gsbay could accurately estimate single-nucleotide polymorphism (SNP) marker effects, and thereby could be applied for the analysis of genomic estimated breeding values (GEBV). In simulated data from different scenarios, the accuracy of the GEBV obtained ranged from 0.20 to 0.78 with MixP, from 0.21 to 0.67 with gsbay, and from 0.21 to 0.61 with GBLUP. Estimations made by MixP and gsbay were expected to be more reliable than those made by GBLUP. Predictions made by gsbay were more robust, while with MixP the computation was much faster, especially when dealing with large-scale data. These results suggest that both algorithms implemented by MixP and gsbay are feasible for carrying out genomic selection in scallop breeding, and more genotype data will be necessary to produce genomic estimated breeding values with a higher accuracy for the industry.
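
    Estimating SNP marker effects and turning them into breeding values can be sketched with a tiny ridge regression (SNP-BLUP), a close relative of the GBLUP baseline used in the study; this is not the MixP or gsbay algorithm, and the genotypes, phenotypes, and shrinkage value below are hypothetical.

```python
def snp_blup_2(X, y, lam):
    """Ridge estimate of two SNP effects: solve (X'X + lam*I) b = X'y
    for a 2-marker design matrix X (genotypes coded 0/1/2)."""
    a11 = sum(r[0] * r[0] for r in X) + lam
    a12 = sum(r[0] * r[1] for r in X)
    a22 = sum(r[1] * r[1] for r in X) + lam
    c1 = sum(r[0] * yi for r, yi in zip(X, y))
    c2 = sum(r[1] * yi for r, yi in zip(X, y))
    det = a11 * a22 - a12 * a12
    return (a22 * c1 - a12 * c2) / det, (a11 * c2 - a12 * c1) / det

# Phenotypes generated (hypothetically) as 2*g1 - 1*g2 with no noise, so the
# lightly shrunk estimates should land near the true effects (2, -1).
X = [[0, 1], [1, 0], [2, 1], [1, 2]]
y = [-1.0, 2.0, 3.0, 0.0]
b1, b2 = snp_blup_2(X, y, lam=0.001)
gebv = 2 * b1 + 1 * b2  # genomic estimated breeding value for genotype (2, 1)
print(round(b1, 2), round(b2, 2))  # → 2.0 -1.0
```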

  8. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation.

    Science.gov (United States)

    Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies.
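
    A minimal SCM sketch, assuming only two donor municipalities and a single convex weight: the weight is chosen to reproduce the treated unit's pre-intervention outcomes, and the post-intervention gap between the treated unit and its synthetic counterfactual is the estimated effect. All series below are made up.

```python
def synthetic_control(treated_pre, donor_a, donor_b, steps=1000):
    """Grid-search a convex weight w so that w*A + (1-w)*B best matches
    the treated unit's outcomes before the intervention."""
    return min(
        (i / steps for i in range(steps + 1)),
        key=lambda w: sum((t - (w * a + (1 - w) * b)) ** 2
                          for t, a, b in zip(treated_pre, donor_a, donor_b)),
    )

# Hypothetical annual deforestation rates (% of remaining forest), pre-2008.
treated_pre = [10.0, 9.0, 8.0]
donor_a_pre, donor_b_pre = [12.0, 10.0, 8.0], [8.0, 8.0, 8.0]
w = synthetic_control(treated_pre, donor_a_pre, donor_b_pre)

# Post-intervention gap: treated outcome minus the synthetic counterfactual.
treated_post, donor_a_post, donor_b_post = 5.0, 7.0, 7.0
gap = treated_post - (w * donor_a_post + (1 - w) * donor_b_post)
print(w, gap)  # → 0.5 -2.0
```

    A negative gap corresponds to deforestation below the counterfactual, the direction of effect reported for Paragominas; a real application would use many donors and a proper constrained optimizer.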

  9. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation

    Science.gov (United States)

    Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies.

  10. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation.

    Directory of Open Access Journals (Sweden)

    Erin O Sills

    Full Text Available Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies.

  11. Arctic security and Norway

    Energy Technology Data Exchange (ETDEWEB)

    Tamnes, Rolf

    2013-03-01

    Global warming is one of the most serious threats facing mankind. Many regions and countries will be affected, and there will be many losers. The earliest and most intense climatic changes are being experienced in the Arctic region. Arctic average temperature has risen at twice the rate of the global average in the past half century. These changes provide an early indication for the world of the environmental and societal significance of global warming. For that reason, the Arctic presents itself as an important scientific laboratory for improving our understanding of the causes and patterns of climate changes. The rapidly rising temperature threatens the Arctic ecosystem, but the human consequences seem to be far less dramatic there than in many other places in the world. According to the U.S. National Intelligence Council, Russia has the potential to gain the most from increasingly temperate weather, because its petroleum reserves become more accessible and because the opening of an Arctic waterway could provide economic and commercial advantages. Norway might also be fortunate. Some years ago, the Financial Times asked: “What should Norway do about the fact that global warming will make their climate more hospitable and enhance their financial situation, even as it inflicts damage on other parts of the world?” (Author)

  12. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Directory of Open Access Journals (Sweden)

    Nadia Said

    Full Text Available Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
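
    The optimization step can be illustrated with the simplest derivative-free method, an exhaustive grid search over two model parameters. The score function below is a hypothetical stand-in for running the ACT-R instance-based learning model on the Sugar Factory task; only the search machinery is the point.

```python
def simulated_score(noise, decay):
    """Hypothetical stand-in for running the cognitive model and scoring its
    performance on the task (higher is better); peaks at (0.25, 0.5)."""
    return 1.0 - (noise - 0.25) ** 2 - (decay - 0.5) ** 2

def grid_search(score, values):
    """Exhaustive search over a 2-D parameter grid: evaluate every pair and
    keep the best-scoring one."""
    return max(((n, d) for n in values for d in values), key=lambda p: score(*p))

grid = [i / 20 for i in range(21)]  # 0.00, 0.05, ..., 1.00
best = grid_search(simulated_score, grid)
print(best)  # → (0.25, 0.5)
```

    The derivative-based methods mentioned at the end of the abstract would replace this brute-force loop with gradient steps on a smooth reformulation.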

  13. Labile soil phosphorus as influenced by methods of applying radioactive phosphorus

    International Nuclear Information System (INIS)

    Selvaratnam, V.V.; Andersen, A.J.; Thomsen, J.D.; Gissel-Nielsen, G.

    1980-03-01

    The influence of different methods of applying radioactive phosphorus on the E- and L-values was studied in four soil types using barley, buckwheat, and rye grass for the L-value determination. The four soils differed greatly in their E- and L-values. The experiment was carried out both with and without carrier-P. The presence of carrier-P had no influence on the E-values, while carrier-P in some cases gave a lower L-value. Both E- and L-values depended on the method of application. When the 32P was applied to a small soil or sand sample and dried before mixing with the total amount of soil, the E-values were higher than with direct application, most likely because of a stronger fixation to the soil/sand particles. This was not the case for the L-values, which are based on a much longer equilibration time. On the contrary, direct application of the 32P solution to the whole amount of soil gave higher L-values because of a non-homogeneous distribution of the 32P in the soil. (author)

  14. Analysis of coupled neutron-gamma radiations, applied to shieldings in multigroup albedo method

    International Nuclear Information System (INIS)

    Dunley, Leonardo Souza

    2002-01-01

    The principal mathematical tools frequently available for calculations in Nuclear Engineering, including coupled neutron-gamma shielding problems, involve the full Transport Theory or Monte Carlo techniques. The Multigroup Albedo Method applied to shieldings is characterized by following the radiations through distinct layers of materials, allowing the determination of the neutron and gamma fractions reflected from, transmitted through and absorbed in the irradiated media when a neutronic stream hits the first layer of material, independently of flux calculations. The method is thus a complementary tool of great didactic value due to its clarity and simplicity in solving neutron and/or gamma shielding problems. The outstanding results achieved in previous works motivated the elaboration and development of this study, presented in this dissertation. The radiation balance resulting from the incidence of a neutronic stream on a shielding composed of 'm' non-multiplying slab layers was determined by the Albedo method, considering 'n' energy groups for neutrons and 'g' energy groups for gammas. It was assumed that there is no upscattering of neutrons or gammas; however, neutrons from any energy group are able to produce gammas of all energy groups. The ANISN code, for an angular quadrature order S2, was used as the standard for comparison of the results obtained by the Albedo method, so it was necessary to choose an identical system configuration for both the ANISN and Albedo methods. This configuration was six neutron energy groups and eight gamma energy groups, using three slab layers (iron - aluminum - manganese). The excellent results expressed in comparative tables show great agreement between the values determined by the deterministic code adopted as standard and the values determined by the computational program created using the Albedo method and the algorithm developed for coupled neutron-gamma radiations.
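
    The layer-by-layer bookkeeping at the heart of the Albedo method can be sketched for a single energy group: two layers with known reflection and transmission fractions are composed by summing the geometric series of inter-layer reflections. The per-layer numbers are hypothetical and the layers are assumed symmetric; the dissertation's multigroup, coupled neutron-gamma treatment replaces these scalars with matrices over energy groups.

```python
def combine_layers(r1, t1, r2, t2):
    """Compose two slab layers' albedos (one group, symmetric layers): the
    series 1 + r1*r2 + (r1*r2)**2 + ... sums the multiple reflections
    bouncing between the layers."""
    denom = 1.0 - r1 * r2
    return r1 + t1 * t1 * r2 / denom, t1 * t2 / denom

# Hypothetical per-layer (reflected, transmitted) fractions; the remainder of
# each layer's budget is absorbed in that layer.
layers = [(0.30, 0.50), (0.20, 0.60), (0.10, 0.70)]
r, t = layers[0]
for rn, tn in layers[1:]:
    r, t = combine_layers(r, t, rn, tn)

absorbed = 1.0 - r - t  # fraction absorbed somewhere in the 3-layer shield
print(round(r, 3), round(t, 3))  # → 0.364 0.232
```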

  15. In silico toxicology: comprehensive benchmarking of multi-label classification methods applied to chemical toxicity data

    KAUST Repository

    Raies, Arwa B.

    2017-12-05

    One goal of toxicity testing, among others, is identifying harmful effects of chemicals. Given the high demand for toxicity tests, it is necessary to conduct these tests for multiple toxicity endpoints for the same compound. Current computational toxicology methods aim at developing models mainly to predict a single toxicity endpoint. When chemicals cause several toxicity effects, one model is generated to predict toxicity for each endpoint, which can be labor and computationally intensive when the number of toxicity endpoints is large. Additionally, this approach does not take into consideration possible correlation between the endpoints. Therefore, there has been a recent shift in computational toxicity studies toward generating predictive models able to predict several toxicity endpoints by utilizing correlations between these endpoints. Applying such correlations jointly with compounds' features may improve model's performance and reduce the number of required models. This can be achieved through multi-label classification methods. These methods have not undergone comprehensive benchmarking in the domain of predictive toxicology. Therefore, we performed extensive benchmarking and analysis of over 19,000 multi-label classification models generated using combinations of the state-of-the-art methods. The methods have been evaluated from different perspectives using various metrics to assess their effectiveness. We were able to illustrate variability in the performance of the methods under several conditions. This review will help researchers to select the most suitable method for the problem at hand and provide a baseline for evaluating new approaches. Based on this analysis, we provided recommendations for potential future directions in this area.
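
    Among the simplest multi-label strategies such benchmarks cover is binary relevance: one independent classifier per toxicity endpoint, which ignores exactly the inter-endpoint correlations the review highlights. The nearest-centroid base learner, descriptor values, and endpoint labels below are hypothetical.

```python
def centroid_fit(X, labels):
    """Nearest-centroid binary classifier: store the mean feature vector of
    the positive and the negative examples."""
    pos = [x for x, l in zip(X, labels) if l]
    neg = [x for x, l in zip(X, labels) if not l]
    mean = lambda rows: [sum(col) / len(rows) for col in zip(*rows)]
    return mean(pos), mean(neg)

def centroid_predict(model, x):
    pos, neg = model
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return 1 if dist(pos) < dist(neg) else 0

def binary_relevance_fit(X, Y):
    """Binary relevance: fit one independent classifier per label column."""
    return [centroid_fit(X, [row[j] for row in Y]) for j in range(len(Y[0]))]

def binary_relevance_predict(models, x):
    return [centroid_predict(m, x) for m in models]

# Hypothetical 1-D chemical descriptor with two toxicity endpoints.
X = [[0.0], [0.2], [0.8], [1.0]]
Y = [[1, 0], [1, 0], [0, 1], [0, 1]]
models = binary_relevance_fit(X, Y)
print(binary_relevance_predict(models, [0.1]))  # → [1, 0]
```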

  16. In silico toxicology: comprehensive benchmarking of multi-label classification methods applied to chemical toxicity data

    KAUST Repository

    Raies, Arwa B.; Bajic, Vladimir B.

    2017-01-01

    One goal of toxicity testing, among others, is identifying harmful effects of chemicals. Given the high demand for toxicity tests, it is necessary to conduct these tests for multiple toxicity endpoints for the same compound. Current computational toxicology methods aim at developing models mainly to predict a single toxicity endpoint. When chemicals cause several toxicity effects, one model is generated to predict toxicity for each endpoint, which can be labor and computationally intensive when the number of toxicity endpoints is large. Additionally, this approach does not take into consideration possible correlation between the endpoints. Therefore, there has been a recent shift in computational toxicity studies toward generating predictive models able to predict several toxicity endpoints by utilizing correlations between these endpoints. Applying such correlations jointly with compounds' features may improve model's performance and reduce the number of required models. This can be achieved through multi-label classification methods. These methods have not undergone comprehensive benchmarking in the domain of predictive toxicology. Therefore, we performed extensive benchmarking and analysis of over 19,000 multi-label classification models generated using combinations of the state-of-the-art methods. The methods have been evaluated from different perspectives using various metrics to assess their effectiveness. We were able to illustrate variability in the performance of the methods under several conditions. This review will help researchers to select the most suitable method for the problem at hand and provide a baseline for evaluating new approaches. Based on this analysis, we provided recommendations for potential future directions in this area.

  17. Beyond Thin Ice: Co-Communicating the Many Arctics

    Science.gov (United States)

    Druckenmiller, M. L.; Francis, J. A.; Huntington, H.

    2015-12-01

    Science communication, typically defined as informing non-expert communities of societally relevant science, is driven by the magnitude and pace of scientific discoveries, as well as the urgency of societal issues wherein science may inform decisions. Perhaps nowhere is the connection between these facets stronger than in the marine and coastal Arctic, where environmental change is driving advancements in our understanding of natural and socio-ecological systems while paving the way for a new assortment of arctic stakeholders, who generally lack adequate operational knowledge. As such, the Arctic provides an opportunity to advance the role of science communication into a collaborative process of engagement and co-communication. To date, the communication of arctic change falls within four primary genres, each with particular audiences in mind. The New Arctic communicates an arctic of new stakeholders scampering to take advantage of unprecedented access. The Global Arctic conveys the Arctic's importance to the rest of the world, primarily as a regulator of lower-latitude climate and weather. The Intra-connected Arctic emphasizes the increasing awareness of the interplay between system components, such as between sea ice loss and marine food webs. The Transforming Arctic communicates the region's trajectory relative to the historical Arctic, acknowledging the impacts on indigenous peoples. The broad societal consensus on climate change in the Arctic as compared to other regions in the world underscores the opportunity for co-communication. Seizing this opportunity requires the science community's engagement with stakeholders and indigenous peoples to construct environmental change narratives that are meaningful to climate responses relative to non-ecological priorities (e.g., infrastructure, food availability, employment, or language). Co-communication fosters opportunities for new methods of, and audiences for, communication, the co-production of new interdisciplinary

  18. Conflict Resolution Practices of Arctic Aboriginal Peoples

    NARCIS (Netherlands)

    Gendron, R.; Hille, C.

    2013-01-01

    This article presents an overview of the conflict resolution practices of indigenous populations in the Arctic. Among the aboriginal groups discussed are the Inuit, the Aleut, and the Saami. Having presented the conflict resolution methods, the authors discuss the types of conflicts that are

  19. Study on Feasibility of Applying Function Approximation Moment Method to Achieve Reliability-Based Design Optimization

    International Nuclear Information System (INIS)

    Huh, Jae Sung; Kwak, Byung Man

    2011-01-01

    Robust optimization and reliability-based design optimization are methodologies employed to take into account the uncertainties of a system at the design stage. For applying such methodologies to solve industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required; further, the results of the sensitivity analysis, which is needed for the search direction during the optimization process, should also be accurate. The aim of this study is to employ the function approximation moment method in the sensitivity analysis formulation, which is expressed in integral form, to verify the accuracy of the sensitivity results, and to solve a typical reliability-based design optimization problem. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The sensitivity analysis formula in integral form is an efficient formulation for evaluating sensitivity because no additional function calculations are needed once the failure probability or statistical moments have been calculated.
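
    Moment methods of the kind discussed above approximate the statistical moments of a response g(X) from a few well-chosen evaluation points. A classic instance, shown here, is the three-point Gauss-Hermite rule for a normally distributed input, exact for polynomial responses up to degree five; the response function below is illustrative, not the paper's.

```python
import math

def moments_3pt(g, mu, sigma):
    """Three-point Gauss-Hermite estimate of E[g(X)] and Var[g(X)] for
    X ~ Normal(mu, sigma): nodes mu and mu +/- sqrt(3)*sigma with
    weights 2/3, 1/6, 1/6."""
    s3 = math.sqrt(3.0) * sigma
    nodes = [(mu, 2.0 / 3.0), (mu + s3, 1.0 / 6.0), (mu - s3, 1.0 / 6.0)]
    m1 = sum(w * g(x) for x, w in nodes)
    m2 = sum(w * g(x) ** 2 for x, w in nodes)
    return m1, m2 - m1 ** 2

# For g(x) = x**2 with X ~ N(0, 1), the exact mean is 1 and variance is 2;
# three function evaluations recover both.
mean, var = moments_3pt(lambda x: x * x, 0.0, 1.0)
print(round(mean, 6), round(var, 6))  # → 1.0 2.0
```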

  20. LOGICAL CONDITIONS ANALYSIS METHOD FOR DIAGNOSTIC TEST RESULTS DECODING APPLIED TO COMPETENCE ELEMENTS PROFICIENCY

    Directory of Open Access Journals (Sweden)

    V. I. Freyman

    2015-11-01

    Full Text Available Subject of Research. Representation features of education results for competence-based educational programs are analyzed. The importance of decoding and proficiency estimation for elements and components of discipline parts of competences is shown. The purpose and objectives of the research are formulated. Methods. The paper deals with methods of mathematical logic, Boolean algebra, and parametric analysis of complex diagnostic test results that control the proficiency of certain discipline competence elements. Results. A method of logical conditions analysis is created. It makes it possible to formulate logical conditions for the proficiency determination of each discipline competence element controlled by a complex diagnostic test. The normalized test result is divided into non-crossing zones, and a logical condition about the proficiency of the controlled elements is formulated for each of them. Summary characteristics for the test result zones are given. An example of forming logical conditions for a diagnostic test with preset features is provided. Practical Relevance. The proposed method of logical conditions analysis is applied in the decoding algorithm of proficiency test diagnosis for discipline competence elements. It makes it possible to automate the search procedure for elements with insufficient proficiency, and is also usable for the estimation of education results of a discipline or a component of a competence-based educational program.
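
    The zone-decoding idea can be sketched directly: the normalized test result is partitioned into non-crossing zones, each carrying a logical condition about which competence elements are mastered. The boundaries and statements below are illustrative; the method itself derives them from the test's logical structure.

```python
# Illustrative zones of the normalized test result in [0, 1].
ZONES = [
    ((0.00, 0.49), "neither element A nor element B is mastered"),
    ((0.50, 0.74), "element A is mastered, element B is not"),
    ((0.75, 1.00), "both elements A and B are mastered"),
]

def decode(score, zones=ZONES):
    """Map a normalized diagnostic test result to the logical condition
    (proficiency statement) of the zone containing it."""
    for (lo, hi), statement in zones:
        if lo <= score <= hi:
            return statement
    return "score outside calibrated zones"

print(decode(0.6))  # → element A is mastered, element B is not
```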

  1. An IMU-to-Body Alignment Method Applied to Human Gait Analysis

    Directory of Open Access Journals (Sweden)

    Laura Susana Vargas-Valencia

    2016-12-01

    Full Text Available This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.

  2. The Cn method applied to problems with an anisotropic diffusion law

    International Nuclear Information System (INIS)

    Grandjean, P.M.

    A two-dimensional Cn calculation has been applied to homogeneous media subjected to the Rayleigh impact law. Results obtained with collision probability and Chandrasekhar calculations are compared to those from the Cn method. Introducing into the expression of the transport equation an expansion truncated on a polynomial basis for the outgoing angular flux (or possibly the entrance flux) gives two Cn systems of algebraic linear equations for the expansion coefficients. The matrix elements of these equations are the moments of the Green function in an infinite medium. The Green function is obtained through the Fourier transformation of the integro-differential equation, and its moments are derived from their Fourier transforms through a numerical integration in the complex plane. The method has been used for calculating the albedo of semi-infinite media, the extrapolation length of the Milne problem, and the albedo and transmission factor of a slab (a concise study of convergence is presented). For the collision probability method, a system of integro-differential equations bearing on the moments of the angular flux inside the medium has been derived. It is solved numerically by approximating the bulk flux with step functions. The albedo of a semi-infinite medium has also been computed through the semi-analytical Chandrasekhar method. In the latter, the outgoing flux is expressed as a function of the entrance flux by means of an integral whose kernel is derived numerically.

  3. A statistical method for testing epidemiological results, as applied to the Hanford worker population

    International Nuclear Information System (INIS)

    Brodsky, A.

    1979-01-01

    Some recent reports by Mancuso, Stewart and Kneale claim findings of radiation-produced cancer in the Hanford worker population. These claims are based on statistical computations that use small differences in accumulated exposures between groups dying of cancer and groups dying of other causes; actual mortality and longevity were not reported. This paper presents a statistical method for evaluating actual mortality and longevity longitudinally over time, as applied in a primary analysis of the mortality experience of the Hanford worker population. Although available, this method was not utilized in the Mancuso-Stewart-Kneale paper. The author's preliminary longitudinal analysis shows that the gross mortality experience of persons employed at Hanford during the 1943-70 interval did not differ significantly from that of certain controls, when both employees and controls were selected from families with two or more offspring and comparisons were matched by age, sex, race and year of entry into employment. This result is consistent with findings reported by Sanders (Health Phys. vol. 35, 521-538, 1978). The method utilizes an approximate chi-square (1 D.F.) statistic for testing population subgroup comparisons, as well as the cumulation of chi-squares (1 D.F.) for testing the overall result of a particular type of comparison. The method is available for computer testing of the Hanford mortality data, and could also be adapted to morbidity or other population studies. (author)
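    The approximate chi-square (1 D.F.) statistic and its cumulation across comparisons can be sketched as follows. The record does not give the exact expected-value model, so this minimal illustration assumes a 50/50 null split of deaths between two matched subgroups; the function names and the pooling helper are illustrative, not the author's notation. For 1 degree of freedom the p-value can be computed with the standard library alone, since P(X > x) = erfc(sqrt(x/2)).

    ```python
    import math

    def chi_square_1df(deaths_exposed, deaths_control):
        """Approximate chi-square (1 d.f.) for a matched subgroup comparison,
        assuming under the null that each death is equally likely to fall in
        either group (expected split 50/50). Returns (statistic, p_value)."""
        total = deaths_exposed + deaths_control
        expected = total / 2.0
        stat = ((deaths_exposed - expected) ** 2 / expected
                + (deaths_control - expected) ** 2 / expected)
        # For 1 d.f.: P(X > x) = erfc(sqrt(x / 2))
        p_value = math.erfc(math.sqrt(stat / 2.0)) if stat > 0 else 1.0
        return stat, p_value

    def pooled_chi_square(pairs):
        """Cumulate independent 1-d.f. chi-squares over several subgroup
        comparisons; the sum is chi-square with len(pairs) d.f."""
        return sum(chi_square_1df(a, b)[0] for a, b in pairs)

    # Illustrative counts: 30 deaths among workers vs 20 among matched controls
    stat, p = chi_square_1df(30, 20)
    ```

    With these counts the statistic is 2.0 and the p-value about 0.157, i.e. no significant difference, consistent with the kind of null result the record reports.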

  4. The Arctic as a test case for an assessment of climate impacts on national security.

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Mark A.; Zak, Bernard Daniel; Backus, George A.; Ivey, Mark D.; Boslough, Mark Bruce Elrick

    2008-11-01

    The Arctic region is rapidly changing in a way that will affect the rest of the world. Parts of Alaska, western Canada, and Siberia are currently warming at twice the global rate. This warming trend is accelerating permafrost deterioration, coastal erosion, snow and ice loss, and other changes that are a direct consequence of climate change. Climatologists have long understood that changes in the Arctic would be faster and more intense than elsewhere on the planet, but the degree and speed of the changes were underestimated compared to recent observations. Policy makers have not yet had time to examine the latest evidence or appreciate the nature of the consequences. Thus, the abruptness and severity of an unfolding Arctic climate crisis have not been incorporated into long-range planning. The purpose of this report is to briefly review the physical basis for global climate change and Arctic amplification, summarize the ongoing observations, discuss the potential consequences, explain the need for an objective risk assessment, develop scenarios for future change, review existing modeling capabilities and the need for better regional models, and finally to make recommendations for Sandia's future role in preparing our leaders to deal with impacts of Arctic climate change on national security. Accurate and credible regional-scale climate models are still several years in the future, and those models are essential for estimating climate impacts around the globe. This study demonstrates how a scenario-based method may be used to give insights into climate impacts on a regional scale and possible mitigation. Because of our experience in the Arctic and widespread recognition of the Arctic's importance in the Earth climate system, we chose the Arctic as a test case for an assessment of climate impacts on national security. Sandia can make a swift and significant contribution by applying modeling and simulation tools with internal collaborations as well as with

  5. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Koivistoinen Teemu

    2007-01-01

    As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an m-by-1 or 1-by-m array whose elements represent samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of the time series as well as the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results indicate that the performance of a combined system including this transform and classifiers is comparable with that of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.

  6. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Alpo Värri

    2007-01-01

    As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an m-by-1 or 1-by-m array whose elements represent samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ‘‘time-frequency moments singular value decomposition (TFM-SVD).’’ In this new method, we use statistical features of the time series as well as the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results indicate that the performance of a combined system including this transform and classifiers is comparable with that of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.

  7. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Science.gov (United States)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an m-by-1 or 1-by-m array whose elements represent samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of the time series as well as the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results indicate that the performance of a combined system including this transform and classifiers is comparable with that of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
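    The three TFM-SVD records above describe the method only in outline, so the following is a hedged sketch under stated assumptions: it stacks four time-domain moments and four frequency-domain moments (computed on the FFT magnitude spectrum) into a fixed 2-by-4 matrix and returns its singular values. The published method's exact moment set and matrix layout may differ; all names here are illustrative.

    ```python
    import numpy as np

    def moments(x):
        """First four statistical moments of a 1-D array:
        mean, standard deviation, skewness, kurtosis."""
        x = np.asarray(x, dtype=float)
        mu, sigma = x.mean(), x.std()
        z = (x - mu) / sigma if sigma > 0 else np.zeros_like(x)
        return np.array([mu, sigma, (z ** 3).mean(), (z ** 4).mean()])

    def tfm_svd_features(signal):
        """Sketch of a TFM-SVD-style feature extractor: build a
        fixed-structure 2x4 matrix from time- and frequency-domain
        moments, then return its singular values as the feature vector."""
        time_row = moments(signal)                    # time-series moments
        spectrum = np.abs(np.fft.rfft(signal))        # magnitude spectrum
        freq_row = moments(spectrum)                  # frequency-series moments
        m = np.vstack([time_row, freq_row])           # fixed 2x4 matrix
        return np.linalg.svd(m, compute_uv=False)     # 2 singular values

    # Example: features of a noisy sinusoid (a stand-in for one BCG epoch)
    t = np.linspace(0, 1, 256, endpoint=False)
    rng = np.random.default_rng(0)
    sv = tfm_svd_features(np.sin(2 * np.pi * 5 * t) + 0.1 * rng.normal(size=t.size))
    ```

    The resulting short, fixed-length feature vector is what would be fed to a classifier such as the ANNs mentioned in the records.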

  8. Applying of whole-tree harvesting method; Kokopuujuontomenetelmaen soveltaminen aines- ja energiapuun hankintaan

    Energy Technology Data Exchange (ETDEWEB)

    Vesisenaho, T [VTT Energy, Jyvaeskylae (Finland); Liukkonen, S [VTT Manufacturing Technology, Espoo (Finland)

    1997-12-01

    The objective of this project is to apply the whole-tree harvesting method to Finnish timber harvesting conditions in order to lower the harvesting costs of energy wood and timber in spruce-dominant final cuttings. In Finnish conditions timber harvesting is normally based on the log-length method. Because of small landings and the high share of thinning cuttings, whole-tree skidding methods cannot be utilised extensively. The share of stands which could be harvested with the whole-tree skidding method turned out to be about 10% of the total harvesting volume of 50 mill. m{sup 3}. The corresponding harvesting potential of energy wood is 0.25 Mtoe. The aim of the structural measurements made in this project was to obtain information about the effect of different hauling methods on the structural response of the tractor, and thus reveal the possible special requirements that the new whole-tree skidding places on forest tractor design. Altogether seven strain-gauge-based sensors were mounted on the rear frame structures and drive shafts of the forest tractor. Five strain gauges measured local strains in critical details and two sensors measured the torque moments of the front and rear bogie drive shafts. The revolution speed of the rear drive shaft was also recorded. Signal time histories, maximum peaks, time-at-level distributions and rainflow distributions were gathered in different hauling modes. From these, maximum values, average stress levels and fatigue life estimates were calculated for each mode, and the different methods were compared from the structural point of view.

  9. Brucellosis Prevention Program: Applying “Child to Family Health Education” Method

    Directory of Open Access Journals (Sweden)

    H. Allahverdipour

    2010-04-01

    Introduction & Objective: Pupils have an efficient potential to increase community awareness and promote community health by participating in health education programs. The child-to-family health education program is one of the communicative strategies applied in this field trial study. Because of the high prevalence of Brucellosis in Hamadan province, Iran, the aim of this study was to promote families' knowledge and preventive behaviors regarding Brucellosis in rural areas by using the child-to-family health education method. Materials & Methods: In this nonequivalent control group design study, three rural schools were chosen (one as intervention and two others as control). At first, the knowledge and behavior of families about Brucellosis were determined using a designed questionnaire. Then the families were educated through the child-to-family procedure: the students first gained the information and were then instructed to teach their parents what they had learned. Three months after the last session of education, the knowledge and behavior changes of the families about Brucellosis were determined and analyzed by paired t-test. Results: The results showed significant improvement in the knowledge of the mothers. The mothers' knowledge about the signs of Brucellosis in humans increased from 1.81 to 3.79 (t = -21.64, p < 0.001), and their knowledge of the signs of Brucellosis in animals increased from 1.48 to 2.82 (t = -10.60, p < 0.001). Conclusion: The child-to-family health education program is an effective and readily available method that would be useful in most communities; the students' potential can also be applied in other health promotion programs.

  10. Evaluation of cleaning methods applied in home environments after renovation and remodeling activities

    International Nuclear Information System (INIS)

    Yiin, L.-M.; Lu, S.-E.; Sannoh, Sulaiman; Lim, B.S.; Rhoads, G.G.

    2004-01-01

    We conducted a cleaning trial in 40 northern New Jersey homes where home renovation and remodeling (R and R) activities were undertaken. Two cleaning protocols were used in the study: a specific method recommended by the US Department of Housing and Urban Development (HUD) in the 1995 'Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing', using a high-efficiency particulate air (HEPA)-filtered vacuum cleaner and a tri-sodium phosphate (TSP) solution; and an alternative method using a household vacuum cleaner and a household detergent. Eligible homes were built before the 1970s with potential lead-based paint and had recent R and R activities without thorough cleaning. The two cleaning protocols were randomly assigned to the participants' homes and followed the HUD-recommended three-step procedure: vacuuming, wet washing, and repeat vacuuming. Wipe sampling was conducted on floor surfaces or windowsills before and after cleaning to evaluate efficacy. All floor and windowsill data indicated that both methods (TSP/HEPA and non-TSP/non-HEPA) were effective in reducing lead loading on the surfaces (P<0.001). When cleaning was applied to surfaces with initial lead loading above the clearance standards, the reductions were even greater, above 95% for either cleaning method. The mixed-effect model analysis showed no significant difference between the two methods. Baseline lead loading was found to be associated with lead loading reduction significantly on floors (P<0.001) and marginally on windowsills (P=0.077). These relations differed between the two cleaning methods significantly on floors (P<0.001) and marginally on windowsills (P=0.066), with the TSP/HEPA method being favored at higher baseline levels and the non-TSP/non-HEPA method at lower baseline levels. For the 10 homes with lead abatement, almost all post-cleaning lead loadings were below the standards using either cleaning method. Based on our results, we recommend that

  11. Method for pulse to pulse dose reproducibility applied to electron linear accelerators

    International Nuclear Information System (INIS)

    Ighigeanu, D.; Martin, D.; Oproiu, C.; Cirstea, E.; Craciun, G.

    2002-01-01

    An original method is presented for obtaining programmed beam single shots and pulse trains with programmed pulse number, pulse repetition frequency, pulse duration and pulse dose. It is particularly useful for automatic control of the absorbed dose rate level and of the irradiation process, as well as in pulse radiolysis studies, single-pulse dose measurement, and research experiments where pulse-to-pulse dose reproducibility is required. This method is applied to the electron linear accelerators ALIN-10 (6.23 MeV, 82 W) and ALID-7 (5.5 MeV, 670 W), built at NILPRP. To implement this method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A, 4 ms) and the magnetron (45 kV, 100 A, 4 ms). The accelerated electron beam exists only when the electron gun and magnetron pulses overlap, and the method consists of controlling this overlap so as to deliver the beam in the desired sequence. The control is implemented by a discrete pulse-position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator's 'beam start' command, the ATS brings the electron gun and magnetron pulses into coincidence and the linac beam is generated; the pulse-to-pulse absorbed dose variation is thus considerably reduced. A programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in electron linear accelerator development for obtaining pulse-to-pulse dose reproducibility: the method

  12. Arctic species resilience

    DEFF Research Database (Denmark)

    Mortensen, Lars O.; Forchhammer, Mads C.; Jeppesen, Erik

    The peak of biological activity in Arctic ecosystems is characterized by a relatively short and intense period from the start of snowmelt until the onset of frost. Recent climate changes have induced larger seasonal variation in both the timing of snowmelt and mean temperatures. To follow these changes, an extensive monitoring program has been conducted in the North Eastern Greenland National Park, the Zackenberg Basic. The objective of the program is to provide long time series of data on the natural innate oscillations and plasticity of a High Arctic ecosystem. With offset in the data provided through...

  13. Postgraduate Education in Quality Improvement Methods: Initial Results of the Fellows' Applied Quality Training (FAQT) Curriculum.

    Science.gov (United States)

    Winchester, David E; Burkart, Thomas A; Choi, Calvin Y; McKillop, Matthew S; Beyth, Rebecca J; Dahm, Phillipp

    2016-06-01

    Training in quality improvement (QI) is a pillar of the Next Accreditation System of the Accreditation Council for Graduate Medical Education and a growing expectation of physicians for maintenance of certification. Despite this, many postgraduate medical trainees are not receiving training in QI methods. We created the Fellows' Applied Quality Training (FAQT) curriculum for cardiology fellows, using both didactic and applied components, with the goal of increasing confidence to participate in future QI projects. Fellows completed didactic training from the Institute for Healthcare Improvement's Open School and then designed and completed a project to improve quality of care or patient safety. Self-assessments were completed by the fellows before, during, and after the first year of the curriculum. The primary outcome for our curriculum was the median score reported by the fellows regarding their self-confidence to complete QI activities. Self-assessments were completed by 23 fellows. The majority of fellows (15 of 23, 65.2%) reported no prior formal QI training. The median score on the baseline self-assessment was 3.0 (range, 1.85-4), which increased significantly to 3.27 (range, 2.23-4; P = 0.004) on the final assessment. The distribution of scores reported by the fellows indicates that 30% were only slightly confident at conducting QI activities on their own, which was reduced to 5% after completing the FAQT curriculum. An interim assessment conducted after the fellows completed only the didactic training showed median scores no different from baseline (3.0; P = 0.51). After completion of the FAQT, cardiology fellows reported higher self-confidence to complete QI activities; the increase seemed to be limited to the applied component of the curriculum, with no significant change after the didactic component.

  14. Applying system engineering methods to site characterization research for nuclear waste repositories

    International Nuclear Information System (INIS)

    Woods, T.W.

    1985-01-01

    Nuclear research and engineering projects can benefit from the use of system engineering methods. This paper is a brief overview illustrating how system engineering methods could be applied in structuring a site characterization effort for a candidate nuclear waste repository. System engineering is simply an orderly process that has been widely used to transform a recognized need into a fully defined system. Such a system may be physical or abstract, natural or man-made, hardware or procedural, as is appropriate to the system's need or objective. It is a way of mentally visualizing all the constituent elements and their relationships necessary to fulfill a need, and doing so compliant with all constraining requirements attendant to that need. Such a system approach provides completeness, order, clarity, and direction. Admittedly, system engineering can be burdensome and inappropriate for those project objectives having simple and familiar solutions that are easily held and controlled mentally. However, some type of documented and structured approach is needed for those objectives that dictate extensive, unique, or complex programs, and/or the creation of state-of-the-art machines and facilities. System engineering methods have been used extensively and successfully in these cases. The scientific method has served well in ordering countless technical undertakings that address a specific question. Similarly, conventional construction and engineering job methods will continue to be quite adequate for organizing routine building projects. Nuclear waste repository site characterization projects, however, involve multiple complex research questions and regulatory requirements that interface with each other and with advanced engineering and subsurface construction techniques. There is little doubt that system engineering is an appropriate orchestrating process to structure such diverse elements into a cohesive, well-defined project.

  15. A Precise Method for Cloth Configuration Parsing Applied to Single-Arm Flattening

    Directory of Open Access Journals (Sweden)

    Li Sun

    2016-04-01

    In this paper, we investigate the contribution that visual perception affords to a robotic manipulation task in which a crumpled garment is flattened by eliminating visually detected wrinkles. In order to explore and validate visually guided clothing manipulation in a repeatable and controlled environment, we have developed a hand-eye interactive virtual robot manipulation system that incorporates a clothing simulator to close the effector-garment-visual sensing interaction loop. We present the technical details and compare the performance of two different methods for detecting, representing and interpreting wrinkles within clothing surfaces captured in high-resolution depth maps. The first method relies upon a clustering-based approach for localizing and parametrizing wrinkles, while the second adopts a more advanced geometry-based approach in which shape-topology analysis underpins the identification of the cloth configuration (i.e., it maps wrinkles). Having interpreted the state of the cloth configuration by means of either of these methods, a heuristic-based flattening strategy is then executed to infer the appropriate forces, their directions and the gripper contact locations that must be applied to the cloth in order to flatten the perceived wrinkles. A greedy approach, which attempts to flatten the largest detected wrinkle in each perception-iteration cycle, has been successfully adopted in this work. We present the results of our heuristic-based flattening methodology relying upon clustering-based and geometry-based features, respectively. Our experiments indicate that geometry-based features have the potential to provide a greater degree of clothing-configuration understanding and, as a consequence, improve flattening performance. The results of experiments using a real robot (as opposed to a simulated one) also confirm our proposition that a more effective visual perception system can advance the performance of cloth
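    The greedy perception-iteration cycle described in this record can be sketched as a simple loop: detect wrinkles, pick the largest, apply one flattening action, repeat until nothing exceeds a threshold. The wrinkle detector and gripper action below are stand-ins for the paper's depth-map analysis and robot controller; all names and constants are illustrative assumptions, not the authors' API.

    ```python
    # Each wrinkle is modeled as a dict with a position and a size score.

    def detect_wrinkles(cloth):
        """Stand-in for depth-map wrinkle detection: return (size, position) pairs."""
        return [(w["size"], w["pos"]) for w in cloth if w["size"] > 0.0]

    def flatten_once(cloth, target_pos, pull=1.0):
        """Stand-in for one gripper pull: reduce the targeted wrinkle's size."""
        for w in cloth:
            if w["pos"] == target_pos:
                w["size"] = max(0.0, w["size"] - pull)

    def greedy_flatten(cloth, threshold=0.1, max_cycles=50):
        """Greedy strategy: flatten the largest detected wrinkle per cycle
        until no wrinkle exceeds the threshold. Returns cycles used."""
        for cycle in range(max_cycles):
            wrinkles = [w for w in detect_wrinkles(cloth) if w[0] > threshold]
            if not wrinkles:
                return cycle
            size, pos = max(wrinkles)      # always attack the largest wrinkle
            flatten_once(cloth, pos)
        return max_cycles

    cloth = [{"pos": (1, 2), "size": 2.5}, {"pos": (4, 0), "size": 1.2}]
    cycles = greedy_flatten(cloth)
    ```

    The loop terminates once the perception step reports no wrinkle above the threshold, mirroring the perception-action cycle the record describes.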

  16. Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly

    Directory of Open Access Journals (Sweden)

    Liana Chaves Mendes-Santos

    The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways, and scoring procedures also vary. OBJECTIVE: The aims of this study were to analyze the performance of elderly subjects on the CDT and to evaluate the inter-rater reliability of the CDT scored using a specific algorithm method adapted from Sunderland et al. (1989). METHODS: We analyzed the CDT of 100 cognitively normal elderly aged 60 years or older. The CDT ("free-drawn") and the Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. RESULTS AND CONCLUSION: A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The specific CDT algorithm method used had high inter-rater reliability (p<0.01), and the mean score ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria that are sensitive to differences in levels of impairment of visuoconstructive and executive abilities during aging.

  17. An acceleration technique for the Gauss-Seidel method applied to symmetric linear systems

    Directory of Open Access Journals (Sweden)

    Jesús Cajigas

    2014-06-01

    A preconditioning technique is proposed to improve the convergence of the Gauss-Seidel method applied to symmetric linear systems while preserving symmetry. The preconditioner is of the form I + K and can be applied an arbitrary number of times. It is shown that under certain conditions, applying the preconditioner a finite number of times reduces the matrix to a diagonal. A series of numerical experiments using matrices from spatial discretizations of partial differential equations demonstrates that both versions of the preconditioner, the point version and the block version, exhibit lower iteration counts than their non-symmetric counterparts.
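    The abstract does not specify the matrix K, so the sketch below shows only the underlying (unpreconditioned) Gauss-Seidel sweep that the proposed I + K preconditioner is meant to accelerate: each sweep solves for one unknown at a time, immediately reusing the components already updated within the same pass. Pure-Python, stated as an illustration rather than the paper's algorithm.

    ```python
    def gauss_seidel(A, b, tol=1e-10, max_iter=500):
        """Plain Gauss-Seidel iteration for Ax = b, with A given as a list
        of rows. Returns (solution, iterations_used)."""
        n = len(b)
        x = [0.0] * n
        for iteration in range(1, max_iter + 1):
            max_delta = 0.0
            for i in range(n):
                # Off-diagonal sum uses updated x[j] for j < i (the Gauss-Seidel
                # distinction from Jacobi) and old x[j] for j > i.
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                new_xi = (b[i] - s) / A[i][i]
                max_delta = max(max_delta, abs(new_xi - x[i]))
                x[i] = new_xi
            if max_delta < tol:          # converged: last sweep barely moved x
                return x, iteration
        return x, max_iter

    # Small symmetric positive-definite example; exact solution (1/11, 7/11)
    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]
    x, iters = gauss_seidel(A, b)
    ```

    For symmetric positive-definite matrices like this one, the sweep converges unconditionally; the record's contribution is reducing how many such sweeps are needed.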

  18. A method of applying two-pump system in automatic transmissions for energy conservation

    Directory of Open Access Journals (Sweden)

    Peng Dong

    2015-06-01

    In order to improve hydraulic efficiency, modern automatic transmissions tend to apply an electric oil pump in their hydraulic system. The electric oil pump can support the mechanical oil pump for cooling, lubrication, and maintaining the line pressure at low engine speeds. In addition, the start-stop function can be realized by means of the electric oil pump, further reducing fuel consumption. This article proposes a method of applying a two-pump system (one electric oil pump and one mechanical oil pump) in automatic transmissions based on forward driving simulation. A mathematical model for calculating the transmission power loss is developed. The power loss transfers to heat, which requires oil flow for cooling and lubrication. A leakage model is developed to calculate the leakage of the hydraulic system. In order to satisfy the flow requirement, a flow-based control strategy for the electric oil pump is developed. Simulation results for different driving cycles show that there is a best combination of electric oil pump size and mechanical oil pump size with respect to optimal energy conservation. Besides, the two-pump system can also satisfy the requirement of the start-stop function. This research is extremely valuable for the forward design of a two-pump system in automatic transmissions with respect to energy conservation and the start-stop function.
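    The flow-based control idea above can be sketched in a few lines: total oil demand comes from cooling/lubrication (driven by power loss) plus leakage, the mechanical pump delivers flow proportional to engine speed, and the electric pump is commanded to cover any deficit. All constants and names below are illustrative assumptions, not values from the paper.

    ```python
    def required_flow(power_loss_kw, leakage_lpm, k_cool=2.0):
        """Total oil flow demand [L/min]: cooling/lubrication flow assumed
        proportional to transmission power loss, plus hydraulic leakage."""
        return k_cool * power_loss_kw + leakage_lpm

    def electric_pump_flow(engine_rpm, power_loss_kw, leakage_lpm,
                           mech_disp_l_per_rev=0.016, max_eop_lpm=10.0):
        """Flow-based control sketch: the mechanical pump delivers flow
        proportional to engine speed; the electric oil pump covers the
        deficit, clipped to its capacity."""
        mech_flow = mech_disp_l_per_rev * engine_rpm
        deficit = required_flow(power_loss_kw, leakage_lpm) - mech_flow
        return min(max_eop_lpm, max(0.0, deficit))

    # Engine stopped (start-stop): the electric pump supplies the whole demand
    idle_cmd = electric_pump_flow(0, power_loss_kw=2.0, leakage_lpm=3.0)
    # Cruising: the mechanical pump alone is sufficient, so the EOP is off
    cruise_cmd = electric_pump_flow(2500, power_loss_kw=4.0, leakage_lpm=3.0)
    ```

    This captures the sizing trade-off the record studies: a larger mechanical pump reduces the deficit the electric pump must cover, while a smaller one saves drag losses at high engine speeds.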

  19. IAEA-ASSET's root cause analysis method applied to sodium leakage incident at Monju

    International Nuclear Information System (INIS)

    Watanabe, Norio; Hirano, Masashi

    1997-08-01

    The present study applied the ASSET (Analysis and Screening of Safety Events Team) methodology, which identifies occurrences such as component failures and operator errors, their respective direct/root causes, and the corresponding corrective actions, to the analysis of the sodium leakage incident at Monju, based on the reports published mainly by the Science and Technology Agency. The aims were the systematic identification of direct/root causes and corrective actions, and a discussion of the effectiveness and problems of the ASSET methodology. The results revealed the following seven occurrences and showed the direct/root causes and contributing factors for each: failure of the thermometer well tube, delayed reactor manual trip, inadequate continuous monitoring of leakage, misjudgment of the leak rate, non-required operator action (turbine trip), retarded emergency sodium drainage, and retarded securing of the ventilation system. Most of the occurrences stemmed from deficiencies in the emergency operating procedures (EOPs), which were mainly caused by defects in the EOP preparation process and operator training programs. The corrective actions already proposed in the published reports were reviewed, identifying issues to be studied further, and possible corrective actions were discussed for these issues. The present study also demonstrated the effectiveness of the ASSET methodology and pointed out some problems, for example in delineating causal relations among occurrences, in applying it to the detailed and systematic analysis of event direct/root causes and the determination of concrete measures. (J.P.N.)

  20. The Application of Intensive Longitudinal Methods to Investigate Change: Stimulating the Field of Applied Family Research.

    Science.gov (United States)

    Bamberger, Katharine T

    2016-03-01

    The use of intensive longitudinal methods (ILM), rapid in situ assessment at micro timescales, can be overlaid on RCTs and other study designs in applied family research. In particular, when done as part of a multiple-timescale design, in bursts over macro timescales, ILM can advance the study of the mechanisms and effects of family interventions and of processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences, and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM provides opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members, beginning with the first intervention session. This paper discusses the need and rationale for applying ILM to family intervention evaluation, new research questions that can be addressed with ILM, and example research using ILM in the related fields of basic family research and the evaluation of individual-based interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for its application.

  1. IAEA-ASSET's root cause analysis method applied to sodium leakage incident at Monju

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Norio; Hirano, Masashi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-08-01

    The present study applied the ASSET (Analysis and Screening of Safety Events Team) methodology, which identifies occurrences such as component failures and operator errors, determines their respective direct/root causes, and defines corrective actions, to the analysis of the sodium leakage incident at Monju, based mainly on the reports published by the Science and Technology Agency. The aim was the systematic identification of direct/root causes and corrective actions, together with a discussion of the effectiveness and limitations of the ASSET methodology. The analysis revealed the following seven occurrences and identified the direct/root causes and contributing factors for each: failure of the thermometer well tube, delayed manual reactor trip, inadequate continuous monitoring of the leakage, misjudgment of the leak rate, a non-required operator action (turbine trip), retarded emergency sodium drainage, and retarded securing of the ventilation system. Most of the occurrences stemmed from deficiencies in the emergency operating procedures (EOPs), which in turn were mainly caused by defects in the EOP preparation process and in the operator training programs. The corrective actions already proposed in the published reports were reviewed, identifying issues to be studied further, and possible corrective actions for these issues were discussed. The study also demonstrated the effectiveness of the ASSET methodology for the detailed and systematic analysis of event direct/root causes and the determination of concrete measures, and pointed out some of its limitations, for example in delineating causal relations among occurrences. (J.P.N.)

  2. Arctic Terrestrial Biodiversity Monitoring Plan

    DEFF Research Database (Denmark)

    Christensen, Tom; Payne, J.; Doyle, M.

    The Conservation of Arctic Flora and Fauna (CAFF), the biodiversity working group of the Arctic Council, established the Circumpolar Biodiversity Monitoring Program (CBMP) to address the need for coordinated and standardized monitoring of Arctic environments. The CBMP includes an international...... on developing and implementing long-term plans for monitoring the integrity of Arctic biomes: terrestrial, marine, freshwater, and coastal (under development) environments. The CBMP Terrestrial Expert Monitoring Group (CBMP-TEMG) has developed the Arctic Terrestrial Biodiversity Monitoring Plan (CBMP......-Terrestrial Plan/the Plan) as the framework for coordinated, long-term Arctic terrestrial biodiversity monitoring. The goal of the CBMP-Terrestrial Plan is to improve the collective ability of Arctic traditional knowledge (TK) holders, northern communities, and scientists to detect, understand and report on long...

  3. Human-induced Arctic moistening.

    Science.gov (United States)

    Min, Seung-Ki; Zhang, Xuebin; Zwiers, Francis

    2008-04-25

    The Arctic and northern subpolar regions are critical for climate change. Ice-albedo feedback amplifies warming in the Arctic, and fluctuations of regional fresh water inflow to the Arctic Ocean modulate the deep ocean circulation and thus exert a strong global influence. By comparing observations to simulations from 22 coupled climate models, we find influence from anthropogenic greenhouse gases and sulfate aerosols in the space-time pattern of precipitation change over high-latitude land areas north of 55 degrees N during the second half of the 20th century. The human-induced Arctic moistening is consistent with observed increases in Arctic river discharge and freshening of Arctic water masses. This result provides new evidence that human activity has contributed to Arctic hydrological change.

  4. Application of Visible/near Infrared derivative spectroscopy to Arctic paleoceanography

    Science.gov (United States)

    Ortiz, Joseph D.

    2011-05-01

    The lack of well-preserved carbonate in much of the Arctic marine environment dictates the need for alternative methods of paleoceanographic reconstruction. The broad variety of physical properties measurements makes them well suited for use in many environments, but they provide unique opportunities when employed in the Arctic. Because Arctic sediment is introduced and reworked by a variety of mechanisms, the signatures of multiple processes become intermixed in the sediment. Many of these processes operate in other ocean basins, while some function only in polar regions. A strategy to address this mixing problem is to employ spectrally resolved physical properties measurements, or to use multiple methods in conjunction to generate multivariate data sets, which can differentiate concurrent processes. Data of this type are well suited to multivariate analysis techniques such as sample-based or variable-based varimax-rotated principal component analysis (VPCA), methods that decompose the data matrix to infer process from orthogonal functions. The method is applied to cores from the Chukchi Sea to document that visible derivative spectroscopy provides a powerful means of reconstructing sediment provenance. In the Chukchi Sea, diffuse spectral reflectance provides a proxy for monitoring variations in Holocene flow through the Bering Strait.

  5. Application of Visible/near Infrared derivative spectroscopy to Arctic paleoceanography

    International Nuclear Information System (INIS)

    Ortiz, Joseph D

    2011-01-01

    The lack of well-preserved carbonate in much of the Arctic marine environment dictates the need for alternative methods of paleoceanographic reconstruction. The broad variety of physical properties measurements makes them well suited for use in many environments, but they provide unique opportunities when employed in the Arctic. Because Arctic sediment is introduced and reworked by a variety of mechanisms, the signatures of multiple processes become intermixed in the sediment. Many of these processes operate in other ocean basins, while some function only in polar regions. A strategy to address this mixing problem is to employ spectrally resolved physical properties measurements, or to use multiple methods in conjunction to generate multivariate data sets, which can differentiate concurrent processes. Data of this type are well suited to multivariate analysis techniques such as sample-based or variable-based varimax-rotated principal component analysis (VPCA), methods that decompose the data matrix to infer process from orthogonal functions. The method is applied to cores from the Chukchi Sea to document that visible derivative spectroscopy provides a powerful means of reconstructing sediment provenance. In the Chukchi Sea, diffuse spectral reflectance provides a proxy for monitoring variations in Holocene flow through the Bering Strait.

  6. [Influence of Sex and Age on Contrast Sensitivity Subject to the Applied Method].

    Science.gov (United States)

    Darius, Sabine; Bergmann, Lisa; Blaschke, Saskia; Böckelmann, Irina

    2018-02-01

    The aim of the study was to detect gender and age differences in both photopic and mesopic contrast sensitivity with different methods in relation to the German driver's license regulations (Fahrerlaubnisverordnung; FeV). We examined 134 healthy volunteers (53 men, 81 women) aged between 18 and 76 years, who had been divided into two age groups (AG I and AG II). Mars charts under standardized illumination were applied for photopic contrast sensitivity. We could not find any gender differences. When evaluating age, there were no differences between the two groups either for the Mars charts or in the Rodatest; in all other tests, the younger volunteers achieved significantly better results. For contrast vision, age-adapted cut-off values exist. Concerning the driving safety of traffic participants, sufficient photopic and mesopic contrast vision should be ensured, independent of age. Therefore, there is a need to reconsider the age-adapted cut-off values. Georg Thieme Verlag KG Stuttgart · New York.

  7. Study of different ultrasonic focusing methods applied to non destructive testing

    International Nuclear Information System (INIS)

    El Amrani, M.

    1995-01-01

    The work presented in this thesis concerns the study of different ultrasonic focusing techniques applied to nondestructive testing (mechanical focusing and electronic focusing) and compares their capabilities. We have developed a model to predict the ultrasonic field radiated into a solid by water-coupled transducers. The model is based upon the Rayleigh integral formulation, modified to take into account the refraction at the liquid-solid interface. The model has been validated by numerous experiments in various configurations. Using this model and the associated software, we have developed new methods to optimize focused transducers and studied the characteristics of the beam generated by transducers using various focusing techniques. (author). 120 refs., 95 figs., 4 appends

  8. Numerical method of applying shadow theory to all regions of multilayered dielectric gratings in conical mounting.

    Science.gov (United States)

    Wakabayashi, Hideaki; Asai, Masamitsu; Matsumoto, Keiji; Yamakita, Jiro

    2016-11-01

    Nakayama's shadow theory first discussed the diffraction by a perfectly conducting grating in a planar mounting, proposing a new formulation based on a scattering factor. This paper focuses on the middle regions of a multilayered dielectric grating placed in conical mounting. Applying the shadow theory to the matrix eigenvalue method, we compose new transformation and improved propagation matrices of the shadow theory for conical mounting. Using these matrices and the scattering factors, the basic quantities of the diffraction amplitudes, we formulate a new description of three-dimensional scattering fields that remains valid even when the eigenvalues are degenerate in any region. Some numerical examples are given for cases where the eigenvalues are degenerate in the middle regions.

  9. Applying RP-FDM Technology to Produce Prototype Castings Using the Investment Casting Method

    Directory of Open Access Journals (Sweden)

    M. Macků

    2012-09-01

    Full Text Available The research focused on the production of prototype castings, which is mapped out starting from the drawing documentation up to the production of the casting itself. The FDM method was applied for the production of the 3D pattern. Its main objective was to find out what dimensional changes happened during individual production stages, starting from the 3D pattern printing through a silicon mould production, wax patterns casting, making shells, melting out wax from shells and drying, up to the production of the final casting itself. Five measurements of determined dimensions were made during the production, which were processed and evaluated mathematically. A determination of shrinkage and a proposal of measures to maintain the dimensional stability of the final casting, so as to meet requirements specified by a customer, were the results.

  10. Simulation by the method of inverse cumulative distribution function applied in optimising of foundry plant production

    Directory of Open Access Journals (Sweden)

    J. Szymszal

    2009-01-01

    Full Text Available The study discusses the application of computer simulation based on the method of the inverse cumulative distribution function. The simulation refers to an elementary static case, which can also be solved by physical experiment, consisting mainly in observations of foundry production in a selected foundry plant. For the simulation and forecasting of foundry production quality in a selected cast iron grade, the random number generator of an Excel calculation sheet was chosen. The very wide potential of this type of simulation when applied to the evaluation of foundry production quality was demonstrated, using a uniformly distributed number generator to generate a variable of an arbitrary distribution, especially of a preset empirical distribution, without any need to fit smooth theoretical distributions to this variable.
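    The core of the approach described above, inverse transform sampling from a preset empirical distribution using nothing but a uniform generator, can be sketched in a few lines. The hardness classes and frequencies below are hypothetical stand-ins for the foundry-quality data:

```python
import bisect
import random

def empirical_inverse_cdf_sampler(values, probabilities, seed=None):
    """Sample from a discrete empirical distribution by inverting its CDF."""
    rng = random.Random(seed)
    # Build the cumulative distribution function once.
    cdf = []
    total = 0.0
    for p in probabilities:
        total += p
        cdf.append(total)
    def sample():
        u = rng.random() * total  # uniform draw on [0, total)
        # Invert the CDF: first bin whose cumulative mass exceeds u.
        return values[bisect.bisect_right(cdf, u)]
    return sample

# Hypothetical casting-quality classes and their observed frequencies.
sample = empirical_inverse_cdf_sampler(["good", "rework", "scrap"], [0.80, 0.15, 0.05], seed=1)
draws = [sample() for _ in range(10000)]
print(draws.count("good") / len(draws))  # close to 0.80
```

    The same construction works for any preset empirical distribution: only the cumulative sums change, never the uniform generator.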

  11. Comparison of gradient methods for gain tuning of a PD controller applied on a quadrotor system

    Science.gov (United States)

    Kim, Jinho; Wilkerson, Stephen A.; Gadsden, S. Andrew

    2016-05-01

    Many mechanical and electrical systems utilize the proportional-integral-derivative (PID) control strategy. The concept of PID control is a classical approach, but it is easy to implement and yields very good tracking performance. Unmanned aerial vehicles (UAVs) are currently experiencing a significant growth in popularity. Due to the advantages of PID controllers, UAVs implement them for improved stability and performance. An important consideration for the system is the selection of PID gain values in order to achieve a safe flight and a successful mission. There are a number of different algorithms that can be used for real-time tuning of gains. This paper presents two gain tuning algorithms, based on the method of steepest descent and on Newton's minimization of an objective function, and compares the results of applying them in conjunction with a PD controller on a quadrotor system.
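    As a rough illustration of the first of the two approaches, the sketch below tunes PD gains by steepest descent on a numerically estimated gradient. The plant (a double integrator standing in for one quadrotor axis), the cost function, and the step-size/backtracking rule are all illustrative assumptions, not the paper's setup:

```python
def simulate_cost(kp, kd, target=1.0, dt=0.01, steps=500):
    """Summed squared tracking error of a PD controller driving a double
    integrator (a crude stand-in for one translational axis of a quadrotor)."""
    x, v, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = target - x
        u = kp * e - kd * v        # PD control law (derivative acts on velocity)
        v += u * dt                # double-integrator dynamics: x'' = u
        x += v * dt
        cost += e * e * dt
    return cost

def steepest_descent_tuning(kp, kd, rate=0.2, h=1e-4, iters=50):
    """Tune (kp, kd) by steepest descent on a central-difference gradient,
    with backtracking so the objective never increases."""
    cost = simulate_cost(kp, kd)
    for _ in range(iters):
        g_kp = (simulate_cost(kp + h, kd) - simulate_cost(kp - h, kd)) / (2 * h)
        g_kd = (simulate_cost(kp, kd + h) - simulate_cost(kp, kd - h)) / (2 * h)
        step = rate
        while step > 1e-6:         # halve the step until the cost improves
            cand = simulate_cost(kp - step * g_kp, kd - step * g_kd)
            if cand < cost:
                kp, kd, cost = kp - step * g_kp, kd - step * g_kd, cand
                break
            step /= 2
    return kp, kd, cost

kp0, kd0 = 2.0, 1.0
kp, kd, cost = steepest_descent_tuning(kp0, kd0)
print(cost <= simulate_cost(kp0, kd0))  # → True: tuned gains never do worse
```

    A Newton variant would replace the fixed-rate step with one scaled by an estimated Hessian, trading more function evaluations per iteration for faster convergence near the minimum.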

  12. Adding randomness controlling parameters in GRASP method applied in school timetabling problem

    Directory of Open Access Journals (Sweden)

    Renato Santos Pereira

    2017-09-01

    Full Text Available This paper studies the influence of randomness-controlling parameters (RCP) in the first stage of the GRASP method applied to a graph coloring problem, specifically the school timetabling problem of a public high school. The algorithm (with the inclusion of RCP) was based on critical variables identified through focus groups, whose weights can be adjusted by the user in order to meet institutional needs. The results of the computational experiment, with 11 years of data (66 observations) processed for the same high school, show that the inclusion of RCP leads to significantly smaller distances between initial solutions and local minima. The acceptance and use of the solutions found allow us to conclude that the modified GRASP, as constructed, can make a positive contribution to the timetabling problem of the school in question.
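    A minimal sketch of how a randomness-controlling parameter can enter the construction stage of GRASP for graph coloring follows. The restricted-candidate-list (RCL) rule, the saturation-degree greedy score, and the toy conflict graph are illustrative assumptions, not the authors' exact algorithm:

```python
import random

def grasp_coloring(adj, alpha, rng):
    """First-stage GRASP construction for graph coloring: repeatedly pick a
    vertex from a restricted candidate list (RCL) and give it the smallest
    feasible color. alpha in [0, 1] controls the randomness: alpha = 0 is
    purely greedy (highest saturation first), alpha = 1 fully random."""
    colors = {}
    uncolored = set(adj)
    while uncolored:
        # Greedy score: saturation degree (number of distinct neighbor colors).
        sat = {v: len({colors[u] for u in adj[v] if u in colors}) for v in uncolored}
        best, worst = max(sat.values()), min(sat.values())
        threshold = best - alpha * (best - worst)
        rcl = [v for v in uncolored if sat[v] >= threshold]
        v = rng.choice(rcl)
        used = {colors[u] for u in adj[v] if u in colors}
        colors[v] = next(c for c in range(len(adj)) if c not in used)
        uncolored.remove(v)
    return colors

# Toy timetable conflict graph (hypothetical lessons sharing a teacher or class).
adj = {
    "A": {"B", "C"}, "B": {"A", "C", "D"},
    "C": {"A", "B", "D"}, "D": {"B", "C", "E"}, "E": {"D"},
}
colors = grasp_coloring(adj, alpha=0.3, rng=random.Random(0))
print(all(colors[u] != colors[v] for u in adj for v in adj[u]))  # proper coloring
```

    In a timetabling setting, each color corresponds to a timeslot, and the user-adjustable weights mentioned in the abstract would enter through the greedy score.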

  13. Applied methods and techniques for mechatronic systems modelling, identification and control

    CERN Document Server

    Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya

    2014-01-01

    Applied Methods and Techniques for Mechatronic Systems brings together the relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. The clear structure of each paper (background, motivation, quantitative development with equations, and case studies/illustrations/tutorials with curves, tables, etc.) is also helpful. It is mainly aimed at graduate students, professors and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...

  14. Applied methods for mitigation of damage by stress corrosion in BWR type reactors

    International Nuclear Information System (INIS)

    Hernandez C, R.; Diaz S, A.; Gachuz M, M.; Arganis J, C.

    1998-01-01

    Boiling water reactors (BWRs) have presented stress corrosion problems, mainly in components and pipes of the primary system, with negative impacts on the performance of the generating plants as well as increased radiation exposure of the personnel involved. This problem has driven the development of research programs aimed at finding alternative solutions for controlling the phenomenon. Among the most relevant results, the control of reactor water chemistry stands out, particularly of impurity concentrations and of the oxidation of radiolysis products, together with care in materials selection and the reduction of stress levels. The present work presents the methods that can be applied to diminish stress corrosion problems in BWR reactors. (Author)

  15. An implicit LU scheme for the Euler equations applied to arbitrary cascades. [new method of factoring]

    Science.gov (United States)

    Buratynski, E. K.; Caughey, D. A.

    1984-01-01

    An implicit scheme for solving the Euler equations is derived and demonstrated. The alternating-direction implicit (ADI) technique is modified, using two implicit-operator factors corresponding to lower-block-diagonal (L) or upper-block-diagonal (U) algebraic systems which can be easily inverted. The resulting LU scheme is implemented in finite-volume mode and applied to 2D subsonic and transonic cascade flows with differing degrees of geometric complexity. The results are presented graphically and found to be in good agreement with those of other numerical and analytical approaches. The LU method is also 2.0-3.4 times faster than ADI, suggesting its value in calculating 3D problems.

  16. Structural characterization of complex systems by applying a combination of scattering and spectroscopic methods

    International Nuclear Information System (INIS)

    Klose, G.

    1999-01-01

    Lyotropic mesophases possess lattice dimensions of the order of magnitude of the length of their molecules. Consequently, the first Bragg reflections of such systems appear at small scattering angles (small-angle scattering). A combination of scattering and NMR methods was applied to study structural properties of POPC/C12En mixtures. Generally, the ranges of existence of the liquid-crystalline lamellar phase, the dimension of the unit cell of the lamellae and important structural parameters of the lipid and surfactant molecules in the mixed bilayers were determined. With that, the POPC/C12E4 bilayer represents one of the best structurally characterized mixed model membranes. It is a good starting system for studying the interrelation with other, e.g. dynamic or thermodynamic, properties. (K.A.)

  17. Applying RP-FDM Technology to Produce Prototype Castings Using the Investment Casting Method

    Directory of Open Access Journals (Sweden)

    Macků M.

    2012-09-01

    Full Text Available The research focused on the production of prototype castings, which is mapped out starting from the drawing documentation up to the production of the casting itself. The FDM method was applied for the production of the 3D pattern. Its main objective was to find out what dimensional changes happened during individual production stages, starting from the 3D pattern printing through a silicon mould production, wax patterns casting, making shells, melting out wax from shells and drying, up to the production of the final casting itself. Five measurements of determined dimensions were made during the production, which were processed and evaluated mathematically. A determination of shrinkage and a proposal of measures to maintain the dimensional stability of the final casting so as to meet requirements specified by a customer were the results.

  18. Continental Margins of the Arctic Ocean: Implications for Law of the Sea

    Science.gov (United States)

    Mosher, David

    2016-04-01

    A coastal State must define the outer edge of its continental margin in order to be entitled to extend the outer limits of its continental shelf beyond 200 M, according to article 76 of the UN Convention on the Law of the Sea. The article prescribes the methods with which to make this definition and includes such metrics as water depth, seafloor gradient and thickness of sediment. Note the distinction between the "outer edge of the continental margin", which is the extent of the margin after application of the formula of article 76, and the "outer limit of the continental shelf", which is the limit after constraint criteria of article 76 are applied. For a relatively small ocean basin, the Arctic Ocean reveals a plethora of continental margin types reflecting both its complex tectonic origins and its diverse sedimentation history. These factors play important roles in determining the extended continental shelves of Arctic coastal States. This study highlights the critical factors that might determine the outer edge of continental margins in the Arctic Ocean as prescribed by article 76. Norway is the only Arctic coastal State that has had recommendations rendered by the Commission on the Limits of the Continental Shelf (CLCS). Russia and Denmark (Greenland) have made submissions to the CLCS to support their extended continental shelves in the Arctic and are awaiting recommendations. Canada has yet to make its submission and the US has not yet ratified the Convention. The various criteria that each coastal State has utilized or potentially can utilize to determine the outer edge of the continental margin are considered. Important criteria in the Arctic include, 1) morphological continuity of undersea features, such as the various ridges and spurs, with the landmass, 2) the tectonic origins and geologic affinities with the adjacent land masses of the margins and various ridges, 3) sedimentary processes, particularly along continental slopes, and 4) thickness and

  19. Particle generation methods applied in large-scale experiments on aerosol behaviour and source term studies

    International Nuclear Information System (INIS)

    Swiderska-Kowalczyk, M.; Gomez, F.J.; Martin, M.

    1997-01-01

    In aerosol research, aerosols of known size, shape, and density are highly desirable because most aerosol properties depend strongly on particle size. However, constant and reproducible generation of aerosol particles whose size and concentration can be easily controlled can be achieved only in laboratory-scale tests. In large-scale experiments, different generation methods for various elements and compounds have been applied. This work presents, in brief form, a review of the applications of these methods in large-scale experiments on aerosol behaviour and source term. A description of the generation method and of the transport conditions of the generated aerosol is followed by the properties of the obtained aerosol, the aerosol instrumentation used, and the scheme of the aerosol generation system, wherever available. Information concerning the particular purpose of the aerosol generation and reference number(s) is given at the end of each case. The methods reviewed are: evaporation-condensation, using furnace heating or a plasma torch; atomization of liquid, using compressed-air nebulizers, ultrasonic nebulizers and atomization of liquid suspensions; and dispersion of powders. Among the projects included in this work are: ACE, LACE, GE Experiments, EPRI Experiments, LACE-Spain, UKAEA Experiments, BNWL Experiments, ORNL Experiments, MARVIKEN, SPARTA and DEMONA. The main chemical compounds studied are: Ba, Cs, CsOH, CsI, Ni, Cr, NaI, TeO2, UO2, Al2O3, Al2SiO5, B2O3, Cd, CdO, Fe2O3, MnO, SiO2, AgO, SnO2, Te, U3O8, BaO, CsCl, CsNO3, urania, RuO2, TiO2, Al(OH)3, BaSO4, Eu2O3 and Sn. (Author)

  20. Non-Invasive Seismic Methods for Earthquake Site Classification Applied to Ontario Bridge Sites

    Science.gov (United States)

    Bilson Darko, A.; Molnar, S.; Sadrekarimi, A.

    2017-12-01

    How a site responds to earthquake shaking, and the damage that results, is largely influenced by the underlying ground conditions through which the seismic waves propagate. The effects of site conditions on propagating seismic waves can be predicted from measurements of the shear-wave velocity (Vs) of the soil layer(s) and the impedance ratio between bedrock and soil. Currently, the seismic design of new buildings and bridges (2015 Canadian building and bridge codes) requires determination of the time-averaged shear-wave velocity of the upper 30 metres (Vs30) of a given site. In this study, two in situ Vs profiling methods, Multichannel Analysis of Surface Waves (MASW) and Ambient Vibration Array (AVA), are used to determine Vs30 at chosen bridge sites in Ontario, Canada. Both active-source (MASW) and passive-source (AVA) surface wave methods are used at each bridge site to obtain Rayleigh-wave phase velocities over a wide frequency bandwidth. The dispersion curve is jointly inverted with each site's amplification function (microtremor horizontal-to-vertical spectral ratio) to obtain shear-wave velocity profile(s). We apply our non-invasive testing at three major infrastructure projects, e.g., five bridge sites along the Rt. Hon. Herb Gray Parkway in Windsor, Ontario. Our non-invasive testing is co-located with previous invasive testing, including Standard Penetration Test (SPT), Cone Penetration Test and downhole Vs data. Correlations between SPT blowcount and Vs are developed for the different soil types sampled at our Ontario bridge sites. A robust earthquake site classification procedure (reliable Vs30 estimates) for bridge sites across Ontario is evaluated from available combinations of invasive and non-invasive site characterization methods.
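    The Vs30 metric used above is the time-averaged velocity of the top 30 m, Vs30 = 30 / Σ(h_i / Vs_i), i.e. 30 m divided by the vertical shear-wave travel time through the layers. A sketch with a hypothetical layered profile (the thicknesses and velocities are invented, not from the study):

```python
def vs30(layers):
    """Time-averaged shear-wave velocity of the upper 30 m:
    Vs30 = 30 / sum(h_i / Vs_i) over the layers spanning the top 30 m."""
    depth, travel_time = 0.0, 0.0
    for thickness, vs in layers:
        if depth >= 30.0:
            break
        h = min(thickness, 30.0 - depth)   # clip the deepest layer at 30 m
        travel_time += h / vs              # vertical shear-wave travel time
        depth += h
    if depth < 30.0:
        raise ValueError("layer model is shallower than 30 m")
    return 30.0 / travel_time

# Hypothetical profile: (thickness in m, Vs in m/s), e.g. from an MASW/AVA inversion.
profile = [(5.0, 150.0), (10.0, 250.0), (20.0, 400.0)]
print(round(vs30(profile), 1))  # → 270.7
```

    Note that Vs30 is a harmonic-style (travel-time) average, so the slow shallow layers dominate the result, which is exactly why near-surface profiling methods such as MASW and AVA are suited to estimating it.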

  1. Infrared thermography inspection methods applied to the target elements of W7-X Divertor

    International Nuclear Information System (INIS)

    Missirlian, M.; Durocher, A.; Schlosser, J.; Farjon, J.-L.; Vignal, N.; Traxler, H.; Schedler, B.; Boscary, J.

    2006-01-01

    As the heat exhaust capability and lifetime of a plasma-facing component (PFC) during in-situ operation are linked to manufacturing quality, a set of non-destructive tests must be performed during the R&D and manufacturing phases. Within this framework, advanced non-destructive examination (NDE) methods are one of the key issues for achieving a high level of quality and reliability of joining techniques in the production of high heat flux components, but also for developing and building successfully the PFCs of a next generation of fusion devices. In this frame, two NDE infrared thermographic approaches, which have recently been applied to the qualification of the CFC target elements of the W7-X divertor during the first series production, are discussed in this paper. The first one, developed by CEA (SATIR facility) and used successfully for the control of the mass-produced actively cooled PFCs on Tore Supra, is based on transient thermography: the testing protocol consists in inducing a thermal transient within the heat sink structure by an alternating hot/cold water flow. The second one, recently developed by PLANSEE (ARGUS facility), is based on pulsed thermography, where the component is heated externally by a single powerful flash of light. Results obtained in qualification experiments performed during the first series production of W7-X divertor components, representing about thirty mock-ups with artificial and manufacturing defects, demonstrated the capabilities of these two methods and raised the efficiency of inspection to a level appropriate for industrial application. This comparative study, together with a cross-checking analysis between the high heat flux performance tests and these infrared thermography inspection methods, showed good reproducibility and allowed a detectability limit specific to each method to be set. Finally, the detectability of relevant defects showed excellent coincidence with thermal images obtained from high heat flux

  2. Assessment of Atmospheric Correction Methods for Sentinel-2 MSI Images Applied to Amazon Floodplain Lakes

    Directory of Open Access Journals (Sweden)

    Vitor Souza Martins

    2017-03-01

    Full Text Available Satellite data provide the only viable means for extensive monitoring of remote and large freshwater systems, such as the Amazon floodplain lakes. However, an accurate atmospheric correction is required to retrieve water constituents based on surface water reflectance (R_W). In this paper, we assessed three atmospheric correction methods (Second Simulation of a Satellite Signal in the Solar Spectrum (6SV), ACOLITE and Sen2Cor) applied to an image acquired by the MultiSpectral Instrument (MSI) on board the European Space Agency's Sentinel-2A platform, using concurrent in-situ measurements over four Amazon floodplain lakes in Brazil. In addition, we evaluated the correction of forest adjacency effects based on the linear spectral unmixing model, and performed a temporal evaluation of atmospheric constituents from Multi-Angle Implementation of Atmospheric Correction (MAIAC) products. The validation of MAIAC aerosol optical depth (AOD) indicated satisfactory retrievals over the Amazon region, with a correlation coefficient (R) of ~0.7 and 0.85 for Terra and Aqua products, respectively. The seasonal distribution of cloud cover and AOD revealed a contrast between the first and second half of the year in the study area. Furthermore, simulation of top-of-atmosphere (TOA) reflectance showed a critical contribution of atmospheric effects (>50%) to all spectral bands, especially the deep blue (92%-96%) and blue (84%-92%) bands. The atmospheric correction results for the visible bands illustrate the limitation of the methods over dark lakes (R_W < 1%), and a better match of the R_W shape compared with in-situ measurements over turbid lakes, although the accuracy varied depending on the spectral bands and methods. Particularly above 705 nm, R_W was highly affected by Amazon forest adjacency, and the proposed adjacency effect correction minimized the spectral distortions in R_W (RMSE < 0.006). Finally, an extensive validation of the methods is required for

  3. Simulation methods to estimate design power: an overview for applied research.

    Science.gov (United States)

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
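    For the simple (individually randomized, two-arm) case, the simulation approach reduces to drawing both arms, testing the difference, and counting rejections. The effect size, sample size, and normal-approximation critical value below are illustrative assumptions, not the article's examples:

```python
import random
import statistics

def simulated_power(n_per_arm, effect, sd, sims=2000, z_crit=1.96, seed=7):
    """Monte Carlo power for a two-arm individually randomized design:
    simulate both arms, test the difference in means, count rejections."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(sims):
        control = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        treated = [rng.gauss(effect, sd) for _ in range(n_per_arm)]
        # Two-sample test with a normal approximation to the critical value.
        se = ((statistics.variance(control) + statistics.variance(treated)) / n_per_arm) ** 0.5
        if abs(statistics.mean(treated) - statistics.mean(control)) / se > z_crit:
            rejections += 1
    return rejections / sims

# Hypothetical nutrition-intervention scenario: 0.4 SD effect, 50 children per arm.
power = simulated_power(50, effect=0.4, sd=1.0)
print(power)  # roughly 0.5 (the analytic two-sample power here is about 0.51)
```

    The flexibility the abstract describes comes from replacing the data-generating lines with whatever design is actually under study, e.g. cluster-level random effects for a cluster-randomized trial, while the reject-and-count loop stays the same.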

  4. Building Resilience and Adaptation to Manage Arctic Change

    Energy Technology Data Exchange (ETDEWEB)

    Chapin, F. Stuart III [Univ. of Alaska, Fairbanks (United States). Inst. of Arctic Biology; Hoel, Michael [Oslo Univ. (Norway). Dept. of Economics; Carpenter, Steven R. [Wisconsin Univ., Madison, WI, (US). Center for Limnology] (and others)

    2006-06-15

    Unprecedented global changes caused by human actions challenge society's ability to sustain the desirable features of our planet. This requires proactive management of change to foster both resilience (sustaining those attributes that are important to society in the face of change) and adaptation (developing new socio-ecological configurations that function effectively under new conditions). The Arctic may be one of the last remaining opportunities to plan for change in a spatially extensive region where many of the ancestral ecological and social processes and feedbacks are still intact. If the feasibility of this strategy can be demonstrated in the Arctic, our improved understanding of the dynamics of change can be applied to regions with greater human modification. Conditions may now be ideal to implement policies to manage Arctic change because recent studies provide the essential scientific understanding, appropriate international institutions are in place, and Arctic nations have the wealth to institute necessary changes, if they choose to do so.

  5. Multivariate least-squares methods applied to the quantitative spectral analysis of multicomponent samples

    International Nuclear Information System (INIS)

    Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.

    1985-01-01

    In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure-component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure-component spectra to determine which vibrations exhibit nonlinear behavior. In the predictive phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentrations was <1%.
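
The calibration and prediction phases described above can be sketched with ordinary (unweighted) least squares: augmenting the known concentration matrix with a column of ones estimates the pure-component spectra together with an intercept row that absorbs the baseline. The synthetic Gaussian "bands" below are illustrative, not the authors' FT-IR data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-component system: 5 standard mixtures, 40 wavelengths,
# overlapping bands plus a constant nonzero baseline
wave = np.arange(40)
pure = np.vstack([np.exp(-((wave - 12) / 4.0) ** 2),
                  np.exp(-((wave - 22) / 5.0) ** 2)])
C = rng.uniform(0.1, 1.0, (5, 2))                 # known mixture concentrations
A = C @ pure + 0.02 + rng.normal(0, 1e-3, (5, 40))

# Calibration phase: estimate pure-component spectra and an intercept row
C_aug = np.hstack([C, np.ones((5, 1))])           # ones column -> intercept
K, *_ = np.linalg.lstsq(C_aug, A, rcond=None)     # rows: comp 1, comp 2, baseline

# Prediction phase: estimate concentrations of an "unknown" from all wavelengths
c_true = np.array([0.3, 0.6])
a_new = c_true @ pure + 0.02
c_est, *_ = np.linalg.lstsq(K[:2].T, a_new - K[2], rcond=None)
```

A weighted variant simply scales each wavelength's contribution to both sides before the `lstsq` calls; the estimated rows of `K` can also be compared with measured pure spectra to flag nonlinear bands, as the abstract describes.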

  6. Design and fabrication of facial prostheses for cancer patient applying computer aided method and manufacturing (CADCAM)

    Science.gov (United States)

    Din, Tengku Noor Daimah Tengku; Jamayet, Nafij; Rajion, Zainul Ahmad; Luddin, Norhayati; Abdullah, Johari Yap; Abdullah, Abdul Manaf; Yahya, Suzana

    2016-12-01

    Facial defects are either congenital or caused by trauma or cancer, and most of them affect the person's appearance. Emotional distress and low self-esteem are problems commonly experienced by patients with facial defects. To overcome these problems, a silicone prosthesis is designed to cover the defective part. This study describes the techniques for designing and fabricating a facial prosthesis using computer-aided design and manufacturing (CADCAM). The steps of fabricating the facial prosthesis were based on a patient case. The patient was diagnosed with Gorlin-Goltz syndrome and came to Hospital Universiti Sains Malaysia (HUSM) for a prosthesis. The 3D image of the patient was reconstructed from CT data using MIMICS software. Based on the 3D image, the intercanthal and zygomatic measurements of the patient were compared with available data in the database to find a suitable nose shape. A normal nose shape for the patient was retrieved from the nasal digital library. The mirror-imaging technique was used to mirror the facial part. The final design of the facial prosthesis, including eye, nose and cheek, was superimposed to review the result virtually. After the final design was confirmed, the mould was designed. The mould of the nasal prosthesis was printed using an Objet 3D printer. Silicone casting was done using the 3D-printed mould. The final prosthesis produced by the computer-aided method was acceptable for use in facial rehabilitation to provide a better quality of life.
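
The mirror-imaging step — reflecting the intact facial side across the mid-sagittal plane to obtain a template for the defect side — amounts, for point data, to negating one coordinate about the plane. A toy sketch with made-up coordinates (real workflows mirror full meshes inside software such as MIMICS):

```python
import numpy as np

def mirror_across_sagittal(vertices, plane_x=0.0):
    """Reflect 3D surface points across the mid-sagittal plane x = plane_x."""
    mirrored = vertices.copy()
    mirrored[:, 0] = 2.0 * plane_x - mirrored[:, 0]
    return mirrored

# Toy patch of points on the intact (x > 0) side of the face
intact = np.array([[1.0, 0.5, 2.0],
                   [2.0, 1.0, 1.5],
                   [1.5, -0.5, 2.5]])
template = mirror_across_sagittal(intact)   # lands on the defect (x < 0) side
```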

  7. Applying the Weighted Horizontal Magnetic Gradient Method to a Simulated Flaring Active Region

    Science.gov (United States)

    Korsós, M. B.; Chatterjee, P.; Erdélyi, R.

    2018-04-01

    Here, we test the weighted horizontal magnetic gradient (WG_M) as a flare precursor, introduced by Korsós et al., by applying it to a magnetohydrodynamic (MHD) simulation of solar-like flares. The preflare evolution of the WG_M and the behavior of the distance parameter between the area-weighted barycenters of opposite-polarity sunspots at various heights is investigated in the simulated δ-type sunspot. Four flares emanated from this sunspot. We found the optimum heights above the photosphere where the flare precursors of the WG_M method are identifiable prior to each flare. These optimum heights agree reasonably well with the heights of the occurrence of flares identified from the analysis of their thermal and ohmic heating signatures in the simulation. We also estimated the expected time of the flare onsets from the duration of the approaching-receding motion of the barycenters of opposite polarities before each single flare. The estimated onset time and the actual time of occurrence of each flare are in good agreement at the corresponding optimum heights. This numerical experiment further supports the use of flare precursors based on the WG_M method.
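
The distance parameter mentioned above can be illustrated on a synthetic magnetogram: compute the flux-weighted (area-weighted) barycenters of the positive and negative polarities and their separation. This sketch covers only the barycenter distance, not the full weighted-gradient quantity, and the bipolar field is a made-up toy:

```python
import numpy as np

def polarity_barycenters(bz, x, y):
    """Flux-weighted barycenters of the positive and negative polarity
    of a magnetogram bz sampled on coordinate grids x, y."""
    pos = np.clip(bz, 0.0, None)
    neg = np.clip(-bz, 0.0, None)
    bp = np.array([np.sum(pos * x), np.sum(pos * y)]) / np.sum(pos)
    bn = np.array([np.sum(neg * x), np.sum(neg * y)]) / np.sum(neg)
    return bp, bn, float(np.linalg.norm(bp - bn))   # distance parameter

# Toy bipole: positive spot at (2, 0), negative spot at (-2, 0)
xv, yv = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101))
bz = np.exp(-((xv - 2) ** 2 + yv ** 2)) - np.exp(-((xv + 2) ** 2 + yv ** 2))
bp, bn, dist = polarity_barycenters(bz, xv, yv)
```

For this bipole the separation comes out close to the 4-unit spot spacing; tracking how the distance shrinks and then recovers over time is what yields the approaching-receding precursor signature.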

  8. Study on safety of crystallization method applied to dissolver solution in fast breeder reactor reprocessing

    International Nuclear Information System (INIS)

    Okuno, Hiroshi; Fujine, Yukio; Asakura, Toshihide; Murazaki, Minoru; Koyama, Tomozo; Sakakibara, Tetsuro; Shibata, Atsuhiro

    1999-03-01

    The crystallization method is proposed for recovering uranium from the dissolver solution, making it possible to reduce the amount of material handled in the later stages of reprocessing used fast breeder reactor (FBR) fuels. This report studies possible safety problems associated with the proposed method. The crystallization process was first situated within the overall reprocessing process, and the quantity and kind of treated fuel were specified. Possible problems, such as criticality, shielding, fire/explosion, and confinement, were then investigated, and the events that might lead to accidents were discussed. Criticality, above all other concerns, was further studied by considering an example of criticality control for the crystallization process. For the crystallization equipment in particular, evaluation models were set up for normal and accidental operating conditions. Related data were selected from the nuclear criticality safety handbooks. The theoretical densities of plutonium nitrates, which provide basic and important information, were estimated in this report from crystal structure data. The criticality limit of the crystallization equipment was calculated on the basis of the above information. (author)

  9. Method of moments as applied to arbitrarily shaped bounded nonlinear scatterers

    Science.gov (United States)

    Caorsi, Salvatore; Massa, Andrea; Pastorino, Matteo

    1994-01-01

    In this paper, we explore the possibility of applying the moment method to determine the electromagnetic field distributions inside three-dimensional bounded nonlinear dielectric objects of arbitrary shapes. The moment method has usually been employed to solve linear scattering problems. We start with an integral equation formulation, and derive a nonlinear system of algebraic equations that allows us to obtain an approximate solution for the harmonic vector components of the electric field. Preliminary results of some numerical simulations are reported.

  10. [An experimental assessment of methods for applying intestinal sutures in intestinal obstruction].

    Science.gov (United States)

    Akhmadudinov, M G

    1992-04-01

    The results of various methods of applying intestinal sutures in obturation obstruction were studied. Three series of experiments were conducted on 30 dogs: resection of the intestine after obstruction with formation of anastomoses by means of a double-row suture (Albert-Schmieden-Lambert) in the first series (10 dogs), by a single-row suture after V. M. Mateshchuk in the second series, and by a single-row stretching suture suggested by the author in the third series. The postoperative complications and the parameters of physical airtightness of the intestinal anastomosis were studied over time in the experimental animals. The results of the study: incompetence of the anastomosis sutures occurred in 6 animals in the first series, 4 in the second, and one in the third. Adhesions occurred in all animals of the first and second series and in 2 of the third. Six dogs of the first series died, 4 of the second, and one of the third. Analysis of the results showed a direct connection between the complications and the parameters of physical airtightness of the anastomosis, and between the latter and the method of intestinal suture. Comparatively better results were obtained when the anastomosis was formed with the author's suggested continuous stretching suture passed through the serous, muscular, and submucous coats of the intestine.

  11. Arctic Islands LNG

    Energy Technology Data Exchange (ETDEWEB)

    Hindle, W.

    1977-01-01

    Trans-Canada Pipe Lines Ltd. made a feasibility study of transporting LNG from the High Arctic Islands to a St. Lawrence River Terminal by means of a specially designed and built 125,000 cu m or 165,000 cu m icebreaking LNG tanker. Studies were made of the climatology and of ice conditions, using available statistical data as well as direct surveys in 1974, 1975, and 1976. For on-schedule and unimpeded (unescorted) passage of the LNG carriers at all times of the year, special navigation and communications systems can be made available. Available icebreaking experience, charting for the proposed tanker routes, and tide tables for the Canadian Arctic were surveyed. Preliminary design of a proposed Arctic LNG icebreaker tanker, including containment system, reliquefaction of boiloff, speed, power, number of trips for 345 day/yr operation, and liquefaction and regasification facilities are discussed. The use of a minimum of three Arctic Class 10 ships would enable delivery of volumes of natural gas averaging 11.3 million cu m/day over a period of a year to Canadian markets. The concept appears to be technically feasible with existing basic technology.

  12. Disparities in Arctic Health

    Centers for Disease Control (CDC) Podcasts

    Life at the top of the globe is drastically different. Harsh climate devoid of sunlight part of the year, pockets of extreme poverty, and lack of physical infrastructure interfere with healthcare and public health services. Learn about the challenges of people in the Arctic and how research and the International Polar Year address them.

  13. The Arctic Circle

    Science.gov (United States)

    McDonald, Siobhan

    2016-04-01

    My name is Siobhan McDonald. I am a visual artist living and working in Dublin. My studio is based in the School of Science at University College Dublin, where I was Artist in Residence in 2013-2015. A fascination with time and the changeable nature of landmass has led to ongoing conversations with scientists and research institutions across the interweaving disciplines of botany, biology and geology. I am developing a body of work following a recent research trip to the North Pole, where I studied the disappearing landscape of the Arctic. Prompted by my experience of the Arctic shelf receding, this new work addresses issues of the instability of the earth's materiality. The work is grounded in an investigation of material processes, exploring the dynamic forces that transform matter and energy. This project combines art and science in a fascinating exploration of one of the Earth's last relatively untouched wilderness areas - the High Arctic - to bring audiences on journeys to both real and artistically re-imagined Arctic spaces. CRYSTALLINE's pivotal process is collaboration: with the European Space Agency; curator Helen Carey; palaeontologist Prof. Jenny McElwain, UCD; and composer Irene Buckley. CRYSTALLINE explores our desire to make corporeal contact with geological phenomena in Polar Regions. From January 2016, in my collaboration with Jenny McElwain, I will focus on the study of plants and atmospheres from the Arctic regions as far back as 400 million years ago, to explore the essential 'nature' that, invisible to the eye, acts as imaginary portholes into other times. This work will be informed by my Arctic tracings of sounds and images recorded in the glaciers of this disappearing frozen landscape. In doing so, the urgencies around the tipping of natural balances in this fragile region will be revealed. The final work will emerge from my forthcoming residency at the ESA in spring 2016. Here I will conduct a series of workshops in ESA Madrid to work with

  14. Chemometric methods and near-infrared spectroscopy applied to bioenergy production

    International Nuclear Information System (INIS)

    Liebmann, B.

    2010-01-01

    data analysis (i) successfully determine the concentrations of moisture, protein, and starch in the feedstock material, as well as glucose, ethanol, glycerol, lactic acid, and acetic acid in the processed bioethanol broths; and (ii) allow quantification of a complex biofuel property such as the heating value. At the third stage, this thesis focuses on new chemometric methods that improve the mathematical analysis of multivariate data such as NIR spectra. The newly developed method 'repeated double cross validation' (rdCV) separates the optimization of regression models from tests of model performance; furthermore, rdCV estimates the variability of the model performance based on a large number of prediction errors from test samples. The rdCV procedure has been applied to both classical PLS regression and the robust 'partial robust M' regression method, which can handle erroneous data. The little-known 'random projection' method is tested for its potential for dimensionality reduction of data from chemometrics and chemoinformatics. The main findings are: (i) rdCV fosters a realistic assessment of model performance; (ii) robust regression has outstanding performance for data containing outliers and is thus strongly recommended; and (iii) random projection is a useful niche application for high-dimensional data combined with possible restrictions on data storage and computing time. The three chemometric methods described are available as functions for the free software R. (author) [de]

  15. Tsunami in the Arctic

    Science.gov (United States)

    Kulikov, Evgueni; Medvedev, Igor; Ivaschenko, Alexey

    2017-04-01

    The severity of the climate and the sparsely populated coastal regions are the reasons why the Russian part of the Arctic Ocean is among the least studied areas of the World Ocean. At the same time, intensive economic development of the Arctic region, specifically the oil and gas industry, requires studies of potential threats from natural disasters that can cause environmental and technical damage to the coastal and maritime infrastructure of the fuel and energy complex (FEC). Although seismic activity in the Arctic can be characterized as moderate, we cannot exclude the occurrence of destructive tsunami waves directly threatening the FEC. According to IAEA requirements, the construction of nuclear power plants must take into account the impact of all natural disasters with a frequency of more than 10^-5 per year. The planned deployment of Russian floating nuclear power plants in the polar regions certainly requires an adequate assessment of the tsunami hazard at their locations. The concept of tsunami hazard assessment would be based on numerical simulation of different scenarios that reproduce hypothetical seismic sources and the tsunamis they generate. Analysis of the available geological, geophysical and seismological data for the period of instrumental observations (1918-2015) shows that the highest earthquake potential within the Arctic region is associated with the underwater Mid-Arctic zone of ocean-bottom spreading (the interplate boundary between the Eurasian and North American plates), as well as with some areas of the continental slope within the marginal seas. For the Arctic coast of Russia and the adjacent shelf area, the greatest tsunami danger of seismotectonic origin comes from earthquakes occurring in the underwater Gakkel Ridge zone, the north-eastern part of the Mid-Arctic zone. In this area, one may expect earthquakes of magnitude Mw ~ 6.5-7.0 at a rate of 10^-2 per year and of magnitude Mw ~ 7.5 at a

  16. Applying a weighted random forests method to extract karst sinkholes from LiDAR data

    Science.gov (United States)

    Zhu, Junfeng; Pierskalla, William P.

    2016-02-01

    Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve the location and delineation of sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% for the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success, with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and makes it more tractable to map sinkholes using LiDAR data for large areas. However, the random forests method cannot totally replace manual procedures such as visual inspection and field verification.
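
The imbalance handling described above — growing the forest with the rare sinkhole class up-weighted — is available off the shelf. A hedged sketch with scikit-learn, using synthetic data as a stand-in for the 11 depression parameters (dataset, split, and hyperparameters are all illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Imbalanced stand-in dataset: ~10% of depressions are "sinkholes" (class 1)
X, y = make_classification(n_samples=2000, n_features=11, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" makes errors on the minority class cost more
# when trees are grown, which is the essence of a weighted random forest
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

With strong imbalance, per-class recall is more informative than overall accuracy; `class_weight="balanced_subsample"` reweights within each bootstrap sample instead.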

  17. Arctic indigenous peoples as representations and representatives of climate change.

    Science.gov (United States)

    Martello, Marybeth Long

    2008-06-01

    Recent scientific findings, as presented in the Arctic Climate Impact Assessment (ACIA), indicate that climate change in the Arctic is happening now, at a faster rate than elsewhere in the world, and with major implications for peoples of the Arctic (especially indigenous peoples) and the rest of the planet. This paper examines scientific and political representations of Arctic indigenous peoples that have been central to the production and articulation of these claims. ACIA employs novel forms and strategies of representation that reflect changing conceptual models and practices of global change science and depict indigenous peoples as expert, exotic, and at-risk. These portrayals emerge alongside the growing political activism of Arctic indigenous peoples who present themselves as representatives or embodiments of climate change itself as they advocate for climate change mitigation policies. These mutually constitutive forms of representation suggest that scientific ways of seeing the global environment shape and are shaped by the public image and voice of global citizens. Likewise, the authority, credibility, and visibility of Arctic indigenous activists derive, in part, from their status as at-risk experts, a status buttressed by new scientific frameworks and methods that recognize and rely on the local experiences and knowledges of indigenous peoples. Analyses of these relationships linking scientific and political representations of Arctic climate change build upon science and technology studies (STS) scholarship on visualization, challenge conventional notions of globalization, and raise questions about power and accountability in global climate change research.

  18. A new sub-equation method applied to obtain exact travelling wave solutions of some complex nonlinear equations

    International Nuclear Information System (INIS)

    Zhang Huiqun

    2009-01-01

    By using a new set of coupled Riccati equations, a direct algebraic method for obtaining exact travelling wave solutions of complex nonlinear equations is improved. The exact travelling wave solutions of the complex KdV equation, the Boussinesq equation and the Klein-Gordon equation are then investigated using the improved method. The method presented in this paper can also be applied to construct exact travelling wave solutions of other nonlinear complex equations.
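
The sub-equation idea can be illustrated with the standard single Riccati ansatz; the paper's coupled Riccati system generalizes this, so the following is a generic sketch rather than the authors' construction:

```latex
% Travelling-wave reduction u(x,t) = U(\xi), \xi = x - ct, expanded as a
% finite series in a function F obeying a Riccati sub-equation:
\begin{aligned}
  U(\xi) &= \sum_{i=0}^{N} a_i F^{i}(\xi), \qquad F'(\xi) = p + q\,F^{2}(\xi),\\
  F(\xi) &= \sqrt{p/q}\,\tan\!\bigl(\sqrt{pq}\,\xi\bigr) \quad (pq > 0).
\end{aligned}
```

Balancing N against the highest-order nonlinearity, substituting the series, and equating the coefficient of each power of F to zero gives algebraic equations for the a_i and the wave speed c; their solutions yield the exact travelling waves.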

  19. A mixed methods evaluation of team-based learning for applied pathophysiology in undergraduate nursing education.

    Science.gov (United States)

    Branney, Jonathan; Priego-Hernández, Jacqueline

    2018-02-01

    It is important for nurses to have a thorough understanding of the biosciences, such as pathophysiology, that underpin nursing care. These courses include content that can be difficult to learn. Team-based learning is emerging as a strategy for enhancing learning in nurse education due to the promotion of individual learning as well as learning in teams. In this study we sought to evaluate the use of team-based learning in the teaching of applied pathophysiology to undergraduate student nurses. A mixed methods observational study. In a year-two undergraduate nursing applied pathophysiology module, circulatory shock was taught using Team-based Learning while all remaining topics were taught using traditional lectures. After the Team-based Learning intervention the students were invited to complete the Team-based Learning Student Assessment Instrument, which measures accountability, preference and satisfaction with Team-based Learning. Students were also invited to focus group discussions to gain a more thorough understanding of their experience with Team-based Learning. Exam scores for answers to questions based on Team-based Learning-taught material were compared with those from lecture-taught material. Of the 197 students enrolled on the module, 167 (85% response rate) returned the instrument, the results of which indicated a favourable experience with Team-based Learning. Most students reported higher accountability (93%) and satisfaction (92%) with Team-based Learning. Lectures that promoted active learning were viewed as an important feature of the university experience, which may explain the 76% exhibiting a preference for Team-based Learning. Most students wanted to make a meaningful contribution so as not to let down their team, and they saw a clear relevance between the Team-based Learning activities and their own experiences of teamwork in clinical practice. 
Exam scores on the question related to Team-based Learning-taught material were comparable to those

  20. Vulnerability and adaptation to climate change in the arctic (VACCA): Implementing recommendations

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    This report provides recommendations for how Norway's government could move forward with the results from the Arctic Council supported VACCA project, suggesting how concrete activities may be implemented and applied to policy and practice. Based on the results of interviews with Arctic peoples and people involved in Arctic work, combined with desk studies of relevant literature, four Arctic contexts are defined within the dividing lines coastal/non-coastal and urban/non-urban. This report provides up to five concrete recommendations within each context, recommendations for cross-contextual action, and specific projects for further research and action.(auth)

  1. Microbial diversity in oiled and un-oiled shoreline sediments in the Norwegian Arctic

    International Nuclear Information System (INIS)

    Grossman, M.J.; Prince, R.C.; Garrett, R.M.; Garrett, K.K.; Bare, R.E.; O'Neil, K.R.; Sowlay, M.R.; Hinton, S.M.; Lee, K.; Sergy, G.A.; Guenette, C.C.

    2000-01-01

    Field trials were conducted at an oiled shoreline on the island of Spitsbergen to examine the effect of nutrient addition on the metabolic status, potential for aromatic hydrocarbon degradation, and phylogenetic diversity of the microbial community in oiled Arctic shoreline sediments. IF-30 intermediate fuel grade oil was applied to the shoreline, which was then divided into four plots. One was left untreated and two were tilled. Four applications of fertilizer were made over a two-month period. Phospholipid fatty acid (PLFA), gene probe and 16S microbial community analyses suggested that bioremediation stimulated the metabolic activity, increased the microbial biomass and genetic potential for aromatic hydrocarbon degradation, and increased the population of hydrocarbon degraders in the oiled Arctic shoreline microbial community. The results of this study are in agreement with results from the stimulation of oil biodegradation in temperate marine environments. It was concluded that biodegradation and fertilizer addition are feasible treatment methods for oil spills in Arctic regions. 31 refs., 3 tabs., 3 figs

  2. Method developments approaches in supercritical fluid chromatography applied to the analysis of cosmetics.

    Science.gov (United States)

    Lesellier, E; Mith, D; Dubrulle, I

    2015-12-04

    necessary, two-step gradient elution. The developed methods were then applied to real cosmetic samples to assess method specificity with regard to matrix interferences, and calibration curves were plotted to evaluate quantification. In addition, depending on the matrix and the studied compounds, the importance of the detector type, UV or ELSD (evaporative light-scattering detection), and of the particle size of the stationary phase is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Genomics of Arctic cod

    Science.gov (United States)

    Wilson, Robert E.; Sage, George K.; Sonsthagen, Sarah A.; Gravley, Megan C.; Menning, Damian; Talbot, Sandra L.

    2017-01-01

    The Arctic cod (Boreogadus saida) is an abundant marine fish that plays a vital role in the marine food web. To better understand the population genetic structure and the role of natural selection acting on the maternally inherited mitochondrial genome (mitogenome), a molecule often associated with adaptations to temperature, we analyzed genetic data collected from 11 biparentally inherited nuclear microsatellite DNA loci and nucleotide sequence data from the mitochondrial DNA (mtDNA) cytochrome b (cytb) gene and, for a subset of individuals, the entire mitogenome. In addition, because of the potential for species misidentification with the morphologically similar Polar cod (Arctogadus glacialis), we used ddRAD-Seq data to determine the level of divergence between the species and to identify species-specific markers. Based on the findings presented here, Arctic cod across the Pacific Arctic (Bering, Chukchi, and Beaufort Seas) comprise a single panmictic population with high genetic diversity compared to other gadids. High genetic diversity was indicated across all 13 protein-coding genes in the mitogenome. In addition, we found moderate levels of genetic diversity in the nuclear microsatellite loci, with the highest diversity found in the Chukchi Sea. Our analyses of markers from both marker classes (nuclear microsatellite fragment data and mtDNA cytb sequence data) failed to uncover a signal of microgeographic genetic structure within Arctic cod across the three regions, within the Alaskan Beaufort Sea, or between nearshore and offshore habitats. Further, data from a subset of mitogenomes revealed no genetic differentiation between Bering, Chukchi, and Beaufort seas populations for Arctic cod, Saffron cod (Eleginus gracilis), or Walleye pollock (Gadus chalcogrammus). However, we uncovered significant differences in the distribution of microsatellite alleles between the southern Chukchi and central and eastern Beaufort Sea samples of Arctic cod. Finally, using ddRAD-Seq data, we

  4. Non-parametric order statistics method applied to uncertainty propagation in fuel rod calculations

    International Nuclear Information System (INIS)

    Arimescu, V.E.; Heins, L.

    2001-01-01

    method, which is computationally efficient, is presented for the evaluation of the global statement. It is proved that the expected fraction of fuel rods exceeding a certain limit, r, is equal to the (1-r)-quantile of the overall distribution of all possible values from all fuel rods. In this way, the problem is reduced to estimating a certain quantile of the overall distribution, and the same techniques used for a single-rod distribution can be applied again. A simplified test case was devised to verify and validate the methodology. The fuel code was replaced by a transfer function dependent on two input parameters. The function was chosen so that analytic results could be obtained for the distribution of the output. This offers a direct validation of the statistical procedure. A sensitivity study was also performed to analyze the effect of the sampling procedure, simple Monte Carlo versus Latin hypercube sampling, on the final outcome. The effect of sample size on the accuracy and bias of the statistical results was studied as well, and the conclusion was reached that the results of the statistical methodology are typically conservative. In the end, an example of applying these statistical techniques to a PWR reload is presented, together with the improvements and new insights the statistical methodology brings to fuel rod design calculations. (author)
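
The quantile-estimation step has a classic non-parametric form (Wilks' order-statistics bound): run the code enough times that the sample maximum exceeds the target quantile with the required confidence. A sketch of the first-order, one-sided 95%/95% case, with a standard normal as an arbitrary stand-in for the fuel-code output:

```python
import numpy as np

def wilks_n(p=0.95, conf=0.95):
    """Smallest n such that P(sample maximum >= p-quantile) >= conf,
    i.e. 1 - p**n >= conf (first-order one-sided tolerance limit)."""
    n = 1
    while 1.0 - p ** n < conf:
        n += 1
    return n

n = wilks_n()    # classic result: 59 runs for the 95%/95% criterion

# Monte Carlo check: how often does the max of n runs cover the true
# 95th percentile of a standard normal "code output"?
rng = np.random.default_rng(0)
q95 = 1.6448536269514722
coverage = np.mean(rng.normal(size=(20000, n)).max(axis=1) >= q95)
```

The same logic applies to the overall distribution the abstract describes: bounding the (1-r)-quantile bounds the expected fraction r of rods exceeding the limit.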

  5. Analytical Methods INAA and PIXE Applied to Characterization of Airborne Particulate Matter in Bandung, Indonesia

    Directory of Open Access Journals (Sweden)

    D.D. Lestiani

    2011-08-01

    Full Text Available Urbanization and industrial growth have deteriorated air quality and are a major cause of air pollution. Air pollution through fine and ultra-fine particles is a serious threat to human health. The sources of air pollution must be characterized quantitatively at the elemental level in order to design appropriate air quality management. Suitable methods for the analysis of airborne particulate matter, such as nuclear analytical techniques, are badly needed to solve the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess their accuracy, and to ensure the reliability of the analytical results through a comparison of instrumental neutron activation analysis (INAA) and particle-induced X-ray emission (PIXE). Particle samples in the PM2.5 and PM2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed a generally systematic difference between the INAA and PIXE results, with the values obtained by PIXE lower than those determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, so the INAA data are preferred, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA remains a prospective method, while PIXE, with its special capabilities, is a promising tool that can complement NAA in the determination of lead, sulphur and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment.

  6. New methods applied to the analysis and treatment of ovarian cancer

    International Nuclear Information System (INIS)

    Order, S.E.; Rosenshein, N.B.; Klein, J.L.; Lichter, A.S.; Ettinger, D.S.; Dillon, M.B.; Leibel, S.A.

    1979-01-01

    The development of rigorous staging methods, appreciation of new knowledge concerning ovarian cancer dissemination, and administration of new treatment techniques have been applied to ovarian cancer. The method of staging consists of peritoneal cytology, total abdominal hysterectomy-bilateral salpingo-oophorectomy (TAH-BSO), omentectomy, nodal biopsy, and diaphragmatic inspection, and is coupled with maximal surgical resection. An additional examination being evaluated for usefulness in future staging is intraperitoneal 99mTc sulfur colloid scans. Nineteen patients have entered the pilot studies. Sixteen patients (5 Stage 2, 10 Stage 3 micrometastatic, and 1 Stage 4) have been treated with colloidal 32P i.p., followed 2 weeks later by split abdominal irradiation (200 rad fractions pelvis-2 hr rest-150 rad upper abdomen) to a total abdominal dose of 3000 rad with a pelvic cone down to 4000 rad. Five of these patients received phenylalanine mustard (L-PAM) (7 mg/m2) maintenance therapy. The 3 year actuarial survival was 78% and the 3 year disease-free actuarial survival 68%. Seven patients were treated with intraperitoneal tumor antisera and 4/7 remain in complete remission as of this writing. The specificity of the antiserum has been demonstrated by immunoelectrophoresis in 4/4 patients, and by live cell fluorescence in 1 patient. Rabbit IgG levels revealed significantly increasing titers in 4/6 patients following i.p. antiovarian antiserum. Radiolabeled IgG derived from the antiserum demonstrated tumor localization and correlation with conventional radiography and computerized axial tomography (CAT) scans in the 2 patients studied to date. Biomarker analysis reveals that free secretory protein (6/6), alpha globulin (5/6), and CEA (carcinoembryonic antigen) (3/6) were elevated in the 6 patients studied. Two patients whose disease progressed demonstrated elevated levels of all three biomarkers

  7. The Global Survey Method Applied to Ground-level Cosmic Ray Measurements

    Science.gov (United States)

    Belov, A.; Eroshenko, E.; Yanke, V.; Oleneva, V.; Abunin, A.; Abunina, M.; Papaioannou, A.; Mavromichalaki, H.

    2018-04-01

    The global survey method (GSM) technique unites simultaneous ground-level observations of cosmic rays in different locations and allows us to obtain the main characteristics of cosmic-ray variations outside of the atmosphere and magnetosphere of Earth. This technique has been developed and applied in numerous studies over many years by the Institute of Terrestrial Magnetism, Ionosphere and Radiowave Propagation (IZMIRAN). We here describe the IZMIRAN version of the GSM in detail. With this technique, the hourly data of the world-wide neutron-monitor network from July 1957 until December 2016 were processed, and further processing is enabled upon the receipt of new data. The result is a database of homogeneous and continuous hourly characteristics of the density variations (an isotropic part of the intensity) and the 3D vector of the cosmic-ray anisotropy. It includes all of the effects that could be identified in galactic cosmic-ray variations that were caused by large-scale disturbances of the interplanetary medium in more than 50 years. These results in turn became the basis for a database on Forbush effects and interplanetary disturbances. This database allows correlating various space-environment parameters (the characteristics of the Sun, the solar wind, et cetera) with cosmic-ray parameters and studying their interrelations. We also present features of the coupling coefficients for different neutron monitors that enable us to make a connection from ground-level measurements to primary cosmic-ray variations outside the atmosphere and the magnetosphere. We discuss the strengths and weaknesses of the current version of the GSM as well as further possible developments and improvements. The method developed allows us to minimize the problems of the neutron-monitor network, which are typical for experimental physics, and to considerably enhance its advantages.
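
    At its heart, a global survey of this kind is a least-squares decomposition of simultaneous station observations into an isotropic density variation plus the leading harmonic of the anisotropy. A minimal sketch with a synthetic station network follows; the angles, amplitudes, and the first-harmonic-only model are illustrative assumptions, and the real GSM additionally applies station-specific coupling coefficients:

```python
import math

def fit_gsm(phis, obs):
    # Least-squares fit of v_i = a0 + ax*cos(phi_i) + ay*sin(phi_i):
    # an isotropic density variation (a0) plus the first angular
    # harmonic of the anisotropy, fitted to station observations.
    rows = [[1.0, math.cos(p), math.sin(p)] for p in phis]
    # Normal equations G^T G x = G^T v for the 3 parameters.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * v for r, v in zip(rows, obs)) for i in range(3)]
    # Solve the 3x3 system by Gauss-Jordan elimination with pivoting.
    m = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Synthetic network: stations at known asymptotic longitudes observing
# a 1% density decrease plus a 0.5% anisotropy pointing at 60 degrees.
def truth(p):
    return -1.0 + 0.5 * math.cos(p - math.radians(60.0))

phis = [math.radians(d) for d in (0, 45, 90, 160, 210, 300)]
a0, ax, ay = fit_gsm(phis, [truth(p) for p in phis])
amp = math.hypot(ax, ay)               # anisotropy amplitude
direction = math.degrees(math.atan2(ay, ax))  # anisotropy phase
```

    Because the synthetic data contain no noise and match the model, the fit recovers the injected density variation (-1.0%), amplitude (0.5%), and phase (60 degrees) essentially exactly.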

  8. Analytical Methods INAA and PIXE Applied to Characterization of Airborne Particulate Matter in Bandung, Indonesia

    International Nuclear Information System (INIS)

    Lestiani, D.D.; Santoso, M.

    2011-01-01

    Urbanization and industrial growth have deteriorated air quality and are a major cause of air pollution. Air pollution through fine and ultra-fine particles is a serious threat to human health. The sources of air pollution must be characterized quantitatively at the elemental level in order to design appropriate air quality management. Suitable methods for the analysis of airborne particulate matter, such as nuclear analytical techniques, are badly needed to solve the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess their accuracy, and to ensure the reliability of the analytical results through a comparison of instrumental neutron activation analysis (INAA) and particle-induced X-ray emission (PIXE). Particle samples in the PM2.5 and PM2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed a generally systematic difference between the INAA and PIXE results, with the values obtained by PIXE lower than those determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, so the INAA data are preferred, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA remains a prospective method, while PIXE, with its special capabilities, is a promising tool that can complement NAA in the determination of lead, sulphur and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment. (author)

  9. Stochastic Methods Applied to Power System Operations with Renewable Energy: A Review

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z. [Argonne National Lab. (ANL), Argonne, IL (United States); Liu, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Electric Reliability Council of Texas (ERCOT), Austin, TX (United States); Botterud, A. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-08-01

    Renewable energy resources have been rapidly integrated into power systems in many parts of the world, contributing to a cleaner and more sustainable supply of electricity. Wind and solar resources also introduce new challenges for system operations and planning in terms of economics and reliability because of their variability and uncertainty. Operational strategies based on stochastic optimization have been developed recently to address these challenges. In general terms, these stochastic strategies either embed uncertainties into the scheduling formulations (e.g., the unit commitment [UC] problem) in probabilistic forms or develop more appropriate operating reserve strategies to take advantage of advanced forecasting techniques. Other approaches to address uncertainty are also proposed, where operational feasibility is ensured within an uncertainty set of forecasting intervals. In this report, a comprehensive review is conducted to present the state of the art through Spring 2015 in the area of stochastic methods applied to power system operations with high penetration of renewable energy. Chapters 1 and 2 give a brief introduction and overview of power system and electricity market operations, as well as the impact of renewable energy and how this impact is typically considered in modeling tools. Chapter 3 reviews relevant literature on operating reserves and specifically probabilistic methods to estimate the need for system reserve requirements. Chapter 4 looks at stochastic programming formulations of the UC and economic dispatch (ED) problems, highlighting benefits reported in the literature as well as recent industry developments. Chapter 5 briefly introduces alternative formulations of UC under uncertainty, such as robust, chance-constrained, and interval programming. Finally, in Chapter 6, we conclude with the main observations from our review and important directions for future work.
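
    The stochastic formulations reviewed in Chapter 4 fix the commitment decision before uncertainty is revealed and then optimize expected cost over scenarios. A toy two-generator, three-scenario sketch of that two-stage structure follows; all costs, capacities, and probabilities are invented for illustration, and real UC models use MILP solvers rather than enumeration:

```python
import itertools

# Hypothetical two-generator system: no-load cost, marginal cost, capacity.
generators = [
    {"fixed": 100.0, "marginal": 20.0, "cap": 60.0},
    {"fixed": 20.0,  "marginal": 50.0, "cap": 50.0},
]
demand = 100.0
voll = 1000.0  # value of lost load: penalty for unserved energy

# Wind scenarios (probability, output). The stochastic element: the
# commitment is chosen before the scenario is revealed.
scenarios = [(0.5, 40.0), (0.3, 20.0), (0.2, 0.0)]

def dispatch_cost(commitment, wind):
    # Second stage: cheapest feasible dispatch given the committed units,
    # loading them in merit order (cheapest marginal cost first).
    residual = max(0.0, demand - wind)
    cost = 0.0
    for on, g in sorted(zip(commitment, generators),
                       key=lambda p: p[1]["marginal"]):
        if on and residual > 0:
            used = min(residual, g["cap"])
            cost += used * g["marginal"]
            residual -= used
    return cost + residual * voll  # penalise any unserved demand

def expected_cost(commitment):
    # First-stage fixed costs plus probability-weighted dispatch costs.
    fixed = sum(g["fixed"] for on, g in zip(commitment, generators) if on)
    return fixed + sum(p * dispatch_cost(commitment, w) for p, w in scenarios)

# First stage: enumerate all commitments (fine for a toy problem).
best = min(itertools.product([0, 1], repeat=len(generators)),
           key=expected_cost)
```

    In this example committing both units minimizes expected cost: the expensive peaker is worth its small fixed cost because it hedges the low-wind scenarios against the much larger lost-load penalty.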

  10. Holographic method coupled with an optoelectronic interface applied in the ionizing radiation dosimetry

    International Nuclear Information System (INIS)

    Nicolau-Rebigan, S.; Sporea, D.; Niculescu, V.I.R.

    2000-01-01

    The paper presents a holographic method applied in ionizing radiation dosimetry. It is possible to use two types of holographic interferometry, such as double-exposure holographic interferometry or fast real-time holographic interferometry. In this paper the applications of holographic interferometry to ionizing radiation dosimetry are presented. The determination of the accurate value of the dose delivered by an ionizing radiation source (released energy per mass unit) is a complex problem which requires different solutions depending on the experimental parameters; here it is solved with a double-exposure holographic interferometric method associated with an optoelectronic interface and a Z80 microprocessor. The method can determine the absorbed integral dose as well as the three-dimensional distribution of dose in a given volume. The paper presents some results obtained in radiation dosimetry. Original mathematical relations for the integral absorbed dose in irreversibly radiolyzing liquids were derived. Irradiation effects can be estimated from the holographic fringe displacement and density. To measure these parameters, the obtained holographic interferograms were picked up by a closed-circuit TV system in such a way that a selected TV line explores the picture along the direction of interest; using a specially designed interface and a Z80 microprocessor, our system captures data along the selected TV line. When the integral dose is to be measured, the microprocessor computes it from the information contained in the fringe distribution, according to the proposed formulae. Integral absorbed dose and spatial dose distribution can be estimated with an accuracy better than 4%. Some advantages of this method are outlined in comparison with conventional methods in radiation dosimetry. The paper presents an original holographic set-up with an electronic interface, assisted by a Z80 microprocessor, used for nondestructive testing of transparent objects at the laser wavelength

  11. Spatial and seasonal distribution of Arctic aerosols observed by the CALIOP satellite instrument (2006–2012

    Directory of Open Access Journals (Sweden)

    M. Di Pierro

    2013-07-01

    Full Text Available We use retrievals of aerosol extinction from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) onboard the CALIPSO satellite to examine the vertical, horizontal and temporal variability of tropospheric Arctic aerosols during the period 2006–2012. We develop an empirical method that takes into account the difference in sensitivity between daytime and nighttime retrievals over the Arctic. Comparisons of the retrieved aerosol extinction to in situ measurements at Barrow (Alaska) and Alert (Canada) show that CALIOP reproduces the observed seasonal cycle and magnitude of surface aerosols to within 25%. In the free troposphere, we find that daytime CALIOP retrievals only detect the strongest aerosol haze events, as demonstrated by a comparison to aircraft measurements obtained during NASA's ARCTAS mission in April 2008. This leads to a systematic underestimate of the column aerosol optical depth by a factor of 2–10. However, when the CALIOP sensitivity threshold is applied to the aircraft observations, we find that CALIOP reproduces in situ observations to within 20% and captures the vertical profile of extinction over the Alaskan Arctic. Comparisons with the ground-based high spectral resolution lidar (HSRL) at Eureka, Canada, show that CALIOP and HSRL capture the evolution of the aerosol backscatter vertical distribution from winter to spring, but a quantitative comparison is inconclusive as the retrieved HSRL backscatter appears to overestimate in situ observations by a factor of 2 at all altitudes. In the High Arctic (>70° N), near-surface extinctions reach a wintertime maximum, followed by a sharp decline to a minimum in May–September (1–4 Mm−1), thus providing the first pan-Arctic view of Arctic haze seasonality. The European and Asian Arctic sectors display the highest wintertime extinctions, while the Atlantic sector is the cleanest. Over the Low Arctic (60–70° N), near the surface, CALIOP extinctions reach a maximum over land in summer due to

  12. A nuclear-medical method applied for determining the choledochus diameter after cholecystectomy

    International Nuclear Information System (INIS)

    Wolf, M.

    1980-01-01

    54 patients (46 of them female, 8 male) who had undergone cholecystectomy at least 4 years earlier were followed up roentgenologically by infusion cholangiography and nuclear-medically by quantitative hepatobiliary functional scintiscanning (HBFS). The ROI method applied for HBFS permits recording time/activity curves above the liver parenchyma (A) and the porta of the liver (B). By subtracting curve A from curve B, with the scale in which A is incorporated in B, a curve B' results, indicating the flow volume through the porta of the liver. The quotient Q = maximum pulse A to B/maximum pulse B to B indicates the portion of the liver parenchyma in the porta curve. The quotient represents a measure for the total volume of the large bile ducts included in the region of the porta of the liver. The quantity 1-Q/Q was put in relation to the roentgenologically determined common bile duct diameters. Both quantities correlated well, with a correlation coefficient of r = -0.860. Thus, the choledochus diameter can be determined in a primarily functional examination with a precision of 2 mm, a degree which permits the detection of clinically relevant discharge malfunctions. It was not possible to detect peristalsis-dependent phenomena with a dosage of 4-5 mCi 99mTc-diethyl-IDA, an irradiation dose which was sufficient for answering the clinical questions and could be justified for the patients. (orig.) [de]

  13. A new method of identifying target groups for pronatalist policy applied to Australia.

    Directory of Open Access Journals (Sweden)

    Mengni Chen

    Full Text Available A country's total fertility rate (TFR) depends on many factors. Attributing changes in TFR to changes of policy is difficult, as they could easily be correlated with changes in the unmeasured drivers of TFR. A case in point is Australia where both pronatalist effort and TFR increased in lock step from 2001 to 2008 and then decreased. The global financial crisis or other unobserved confounders might explain both the reducing TFR and pronatalist incentives after 2008. Therefore, it is difficult to estimate causal effects of policy using econometric techniques. The aim of this study is to instead look at the structure of the population to identify which subgroups most influence TFR. Specifically, we build a stochastic model relating TFR to the fertility rates of various subgroups and calculate the elasticity of TFR with respect to each rate. For each subgroup, the ratio of its elasticity to its group size is used to evaluate the subgroup's potential cost effectiveness as a pronatalist target. In addition, we measure the historical stability of group fertility rates, which measures propensity to change. Groups with a high effectiveness ratio and also high propensity to change are natural policy targets. We applied this new method to Australian data on fertility rates broken down by parity, age and marital status. The results show that targeting parity 3+ is more cost-effective than lower parities. This study contributes to the literature on pronatalist policies by investigating the targeting of policies, and generates important implications for formulating cost-effective policies.
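
    The elasticity and effectiveness-ratio screening described above can be illustrated with a toy calculation. The subgroup shares and rates below are hypothetical, not the Australian data used in the study, and the simple share-weighted TFR model is a deliberate simplification of the paper's stochastic model:

```python
# Hypothetical subgroups with population shares and fertility rates.
groups = {
    "parity0": {"share": 0.40, "rate": 0.6},
    "parity1": {"share": 0.30, "rate": 0.9},
    "parity2": {"share": 0.20, "rate": 0.5},
    "parity3plus": {"share": 0.10, "rate": 0.4},
}

# TFR modeled as a share-weighted sum of subgroup fertility rates.
tfr = sum(g["share"] * g["rate"] for g in groups.values())

# Elasticity of TFR with respect to each subgroup rate:
# e_g = (dTFR/drate_g) * rate_g / TFR = share_g * rate_g / TFR,
# i.e. the subgroup's fractional contribution to total fertility.
elasticity = {name: g["share"] * g["rate"] / tfr
              for name, g in groups.items()}

# Ratio of elasticity to group size screens for cost-effective targets:
# groups whose rate changes move TFR a lot per person score highest.
effectiveness = {name: elasticity[name] / g["share"]
                 for name, g in groups.items()}
```

    In this linear model the elasticities sum to one, and the effectiveness ratio reduces to the group's rate relative to TFR; the paper's additional screen, the historical stability of each group's rate, is not modeled here.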

  14. Multicriterial Hierarchy Methods Applied in Consumption Demand Analysis. The Case of Romania

    Directory of Open Access Journals (Sweden)

    Constantin Bob

    2008-03-01

    Full Text Available The basic information for computing the quantitative statistical indicators that characterize the demand for industrial products and services is collected by the national statistics organizations through a series of statistical surveys (most of them periodical and partial). The source of the data used in the present paper is a statistical investigation organized by the National Institute of Statistics, the "Family budgets survey", which collects information regarding household composition, income, expenditure, consumption and other aspects of the population's living standard. In 2005, in Romania, a person spent monthly on average 391.2 RON, about 115.1 euros, on food products and beverages, non-food products, services, investments and other taxes. 23% of this sum was spent on food products and beverages, 21.6% on non-food goods and 18.1% on payment for different services. There is a discrepancy between the different development regions of Romania regarding the composition of total household expenditure. For this reason, in the present paper we applied statistical methods for ranking the various development regions of Romania, using the share of households' expenditure on categories of products and services as ranking criteria.

  15. Bending stress modeling of dismountable furniture joints applied with a use of finite element method

    Directory of Open Access Journals (Sweden)

    Milan Šimek

    2009-01-01

    Full Text Available The presented work focuses on bending moment stress modeling of dismountable furniture joints using the Finite Element Method. The joints are created from Minifix and Rondorfix cams combined with non-glued wooden dowels. Laminated particleboard 18 mm in thickness is used as the connected material. These connectors were chosen as the kind most widely applied in the furniture industry for case furniture. All results were compared with each other and with experimental testing in terms of stiffness. The non-linear numerical model of the chosen joints was successfully created using the software Ansys Workbench. A detailed analysis of stress distribution in the joint was achieved with non-linear numerical simulation. The relationship between numerical simulation and experimental testing was shown by comparing stiffness tangents. The numerical simulation of RTA joint loads also demonstrated the important role of non-glued dowels in the tested joints. The low strength of particleboard in tension parallel to the surface (internal bond) is most likely the cause of joint failure. The results are applicable to the strength design of furniture with the aid of Computer Aided Engineering.

  16. Commissioning methods applied to the Hunterston 'B' AGR operator training simulator

    International Nuclear Information System (INIS)

    Hacking, D.

    1985-01-01

    The Hunterston 'B' full scope AGR Simulator, built for the South of Scotland Electricity Board by Marconi Instruments, encompasses all systems under direct and indirect control of the Hunterston central control room operators. The resulting breadth and depth of simulation, together with the specification for the real time implementation of a large number of highly interactive detailed plant models, lead to the classic problem of identifying acceptance and acceptability criteria. For example, whilst the ultimate criterion for acceptability must clearly be that, within the context of the training requirement, the simulator should be indistinguishable from the actual plant, far more measurable (i.e. less subjective) statements are required if a formal contractual acceptance condition is to be achieved. Within this framework, individual models and processes can have radically different acceptance requirements which therefore reflect on the commissioning approach applied. This paper discusses the application of a combination of quality assurance methods, design code results, plant data, theoretical analysis and operator 'feel' in the commissioning of the Hunterston 'B' AGR Operator Training Simulator. (author)

  17. Life Expectancies Applied to Specific Statuses: a History of the Indicators and the Methods of Calculation {Population, 3, 1998)

    OpenAIRE

    N. Brouard; J.-M. Robine; E. Cambois

    1999-01-01

    Cambois (Emmanuelle), Robine (Jean-Marie), Brouard (Nicolas).- Life Expectancies Applied to Specific Statuses: A History of the Indicators and the Methods of Calculation. Indicators of life expectancy applied to specific statuses, such as the state of health or professional status, were introduced at the end of the 1930s and are currently the object of renewed interest. Because they relate mortality to different domains (health, professional activity) applied life expectancies reflect simultan...

  18. A Study of the Efficiency of Spatial Indexing Methods Applied to Large Astronomical Databases

    Science.gov (United States)

    Donaldson, Tom; Berriman, G. Bruce; Good, John; Shiao, Bernie

    2018-01-01

    Spatial indexing of astronomical databases generally uses quadrature methods, which partition the sky into cells used to create an index (usually a B-tree) written as a database column. We report the results of a study to compare the performance of two common indexing methods, HTM and HEALPix, on Solaris and Windows database servers installed with a PostgreSQL database, and a Windows server installed with MS SQL Server. The indexing was applied to the 2MASS All-Sky Catalog and to the Hubble Source Catalog. On each server, the study compared indexing performance by submitting 1 million queries at each index level with random sky positions and a random cone search radius, computed on a logarithmic scale between 1 arcsec and 1 degree, and measuring the time to complete the query and write the output. These simulated queries, intended to model realistic use patterns, were run in a uniform way on many combinations of indexing method and indexing level. The query times in all simulations are strongly I/O-bound and are linear with the number of records returned for large numbers of sources. There are, however, considerable differences between simulations, which reveal that hardware I/O throughput is a more important factor in managing the performance of a DBMS than the choice of indexing scheme. The choice of index itself is relatively unimportant: for comparable index levels, the performance is consistent within the scatter of the timings. At small index levels (large cells; e.g. level 4, cell size 3.7 deg), there is large scatter in the timings because of wide variations in the number of sources found in the cells. At larger index levels, performance improves and scatter decreases, but the improvement at level 8 (14 arcmin) and higher is masked to some extent by the timing scatter caused by the range of query sizes. At very high levels (20; 0.0004 arcsec), the granularity of the cells becomes so high that a large number of extraneous empty cells begin to degrade
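
    The index-then-refine pattern being benchmarked can be illustrated with a minimal pure-Python sketch. The equiangular grid below is a stand-in for HTM or HEALPix (it is neither), but the two-pass cone search, coarse candidate cells followed by exact separation tests, is the same idea a DBMS applies after its B-tree lookup on the indexed cell column:

```python
import math
from collections import defaultdict

def cell_id(ra_deg, dec_deg, level):
    # Toy quadrature scheme: an equiangular sky grid whose cell size
    # halves with each level (illustrative only, not HTM/HEALPix).
    cells_per_90 = 2 ** level
    i = int((dec_deg + 90.0) / 180.0 * 2 * cells_per_90)
    j = int(ra_deg / 360.0 * 4 * cells_per_90)
    return i * (4 * cells_per_90 + 1) + j

def build_index(catalog, level):
    # Map each cell id to the list of catalog rows it contains.
    index = defaultdict(list)
    for k, (ra, dec) in enumerate(catalog):
        index[cell_id(ra, dec, level)].append(k)
    return index

def angular_sep(ra1, dec1, ra2, dec2):
    # Haversine angular separation in degrees.
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    h = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(h)))

def cone_search(catalog, index, level, ra0, dec0, radius_deg):
    # Coarse pass: probe candidate cells around the centre.
    # Fine pass: exact separation test on the rows in those cells.
    cell_deg = 90.0 / (2 ** level)
    steps = int(radius_deg / cell_deg) + 1
    hits, seen = [], set()
    for di in range(-steps, steps + 1):
        for dj in range(-steps, steps + 1):
            c = cell_id((ra0 + dj * cell_deg) % 360.0,
                        max(-89.999, min(89.999, dec0 + di * cell_deg)),
                        level)
            if c in seen:
                continue
            seen.add(c)
            for k in index.get(c, []):
                ra, dec = catalog[k]
                if angular_sep(ra0, dec0, ra, dec) <= radius_deg:
                    hits.append(k)
    return sorted(hits)
```

    For example, indexing three sources at level 6 (cells of about 1.4 deg) and running a 1-degree cone search around the first source returns it and its close neighbour while never touching the cell holding the distant third source.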

  19. Arctic landfast sea ice

    Science.gov (United States)

    Konig, Christof S.

    Landfast ice is sea ice which forms and remains fixed along a coast, where it is attached either to the shore, or held between shoals or grounded icebergs. Landfast ice fundamentally modifies the momentum exchange between atmosphere and ocean, as compared to pack ice. It thus affects the heat and freshwater exchange between air and ocean and impacts the location of ocean upwelling and downwelling zones. Further, the landfast ice edge is essential for numerous Arctic mammals and for the Inupiat, who depend on them for their subsistence. The current generation of sea ice models is not capable of reproducing certain aspects of landfast ice formation, maintenance, and disintegration even when the spatial resolution would be sufficient to resolve such features. In my work I develop a new ice model that permits the existence of landfast sea ice even in the presence of offshore winds, as is observed in nature. Based on viscous-plastic as well as elastic-viscous-plastic ice dynamics, I add tensile strength to the ice rheology and re-derive the equations as well as the numerical methods to solve them. Through numerical experiments on simplified domains, the effects of those changes are demonstrated. It is found that the modifications enable landfast ice modeling, as desired. The elastic-viscous-plastic rheology leads to initial velocity fluctuations within the landfast ice that weaken the ice sheet and break it up much faster than theoretically predicted. Solving the viscous-plastic rheology using an implicit numerical method avoids those waves and comes much closer to theoretical predictions. Improvements in landfast ice modeling can only be verified in comparison to observed data. I have extracted landfast sea ice data of several decades from several sources to create a landfast sea ice climatology that can be used for that purpose.
Statistical analysis of the data shows several factors that significantly influence landfast ice distribution: distance from the coastline, ocean depth, as

  20. Foreign and domestic experience of economic development of the Arctic territories

    Directory of Open Access Journals (Sweden)

    Dmitriy A. Matviishin

    2017-03-01

    Full Text Available The article deals with the key aspects of Arctic exploration. There is a brief description of the Arctic Council, as well as the strategic goals, objectives, activities and resources used by member countries and observer organizations to achieve these goals. The resource base of the Arctic region is studied. An economic analysis of the development of the Arctic territories by the circumpolar states, including the characteristics of resource projects, is carried out. The features of the Russian and foreign approaches to managing the economy in the Arctic are noted. The method of logical analysis, together with economic-statistical and historical methods, is used in the research. The result is a scientific justification of the advantages and potential of the domestic experience of Arctic development, and also of the necessity of timely adaptation of economic approaches, investment policy and legislation to current challenges and tendencies.

  1. China in the Arctic: interests, actions and challenges

    Directory of Open Access Journals (Sweden)

    Njord Wegge

    2014-07-01

    Full Text Available This article gives an overview of China's interest in and approach to the Arctic region. The following questions are raised: 1. Why is China getting involved in the Arctic? 2. How is China's engagement in the Arctic playing out? 3. What are the most important issues that need to be solved in order for China to increase its relevance and importance as a political actor and partner in the Arctic? In applying a rationalist approach to answering the research questions, I identify how China in the last few years has increasingly been accepted as a legitimate stakeholder in the Arctic, with important stakes and activities in areas such as shipping, resource utilization and environmental science. The article concludes by pointing out some issues that remain to be solved, including China's role in issues of global politics and the role of observers in the Arctic Council, as well as how China itself needs to decide important aspects of its future role in the region.

  2. Analysis of flow boiling heat transfer in narrow annular gaps applying the design of experiments method

    Directory of Open Access Journals (Sweden)

    Gunar Boye

    2015-06-01

    Full Text Available The axial heat transfer coefficient during flow boiling of n-hexane was measured using infrared thermography to determine the axial wall temperature in three geometrically similar annular gaps with different widths (s = 1.5 mm, s = 1 mm, s = 0.5 mm). During the design and evaluation process, the methods of statistical experimental design were applied. The following factors/parameters were varied: the heat flux q̇ = 30–190 kW/m², the mass flux ṁ = 30–700 kg/(m²·s), the vapor quality ẋ = 0.2–0.7, and the inlet subcooling T_U = 20–60 K. The test sections with gap widths of s = 1.5 mm and s = 1 mm had very similar heat transfer characteristics. The heat transfer coefficient increases significantly in the range of subcooled boiling, and after reaching a maximum at the transition to saturated flow boiling, it drops almost monotonically with increasing vapor quality. With a gap width of 0.5 mm, however, the heat transfer coefficient in the range of saturated flow boiling first has a downward trend and then increases at higher vapor qualities. For each test section, two correlations between the heat transfer coefficient and the operating parameters have been created. The comparison also shows a clear trend of an increasing heat transfer coefficient with increasing heat flux for the test sections s = 1.5 mm and s = 1.0 mm, but with increasing vapor quality this trend is reversed for the 0.5 mm test section.
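
    Correlations of the kind described, relating the heat transfer coefficient to the operating parameters, are commonly fitted as power laws in log-log space. A sketch with invented data points follows; the paper's own correlations and coefficients are not reproduced here:

```python
import math

# Hypothetical (heat flux q [kW/m^2], heat transfer coefficient alpha
# [kW/(m^2 K)]) pairs, standing in for measured flow-boiling data.
data = [(30.0, 3.2), (60.0, 4.6), (120.0, 6.5), (190.0, 8.4)]

# Fit alpha = C * q**n by ordinary least squares in log-log space,
# a common single-factor form for flow-boiling correlations.
xs = [math.log(q) for q, _ in data]
ys = [math.log(a) for _, a in data]
n_pts = len(data)
x_mean = sum(xs) / n_pts
y_mean = sum(ys) / n_pts
n_exp = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
c_coeff = math.exp(y_mean - n_exp * x_mean)

def alpha_pred(q):
    # Predicted heat transfer coefficient from the fitted power law.
    return c_coeff * q ** n_exp
```

    A multi-factor correlation (adding mass flux and vapor quality as further power-law factors) extends this by fitting one exponent per factor in the same log-linear way.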

  3. A new method of identifying target groups for pronatalist policy applied to Australia

    Science.gov (United States)

    Chen, Mengni; Lloyd, Chris J.

    2018-01-01

    A country’s total fertility rate (TFR) depends on many factors. Attributing changes in TFR to changes of policy is difficult, as they could easily be correlated with changes in the unmeasured drivers of TFR. A case in point is Australia where both pronatalist effort and TFR increased in lock step from 2001 to 2008 and then decreased. The global financial crisis or other unobserved confounders might explain both the reducing TFR and pronatalist incentives after 2008. Therefore, it is difficult to estimate causal effects of policy using econometric techniques. The aim of this study is to instead look at the structure of the population to identify which subgroups most influence TFR. Specifically, we build a stochastic model relating TFR to the fertility rates of various subgroups and calculate elasticity of TFR with respect to each rate. For each subgroup, the ratio of its elasticity to its group size is used to evaluate the subgroup’s potential cost effectiveness as a pronatalist target. In addition, we measure the historical stability of group fertility rates, which measures propensity to change. Groups with a high effectiveness ratio and also high propensity to change are natural policy targets. We applied this new method to Australian data on fertility rates broken down by parity, age and marital status. The results show that targeting parity 3+ is more cost-effective than lower parities. This study contributes to the literature on pronatalist policies by investigating the targeting of policies, and generates important implications for formulating cost-effective policies. PMID:29425220
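The elasticity screen described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: TFR is treated as a population-share-weighted sum of subgroup fertility rates, so the elasticity of TFR with respect to a subgroup's rate reduces to that subgroup's share of total fertility; the shares and rates below are invented, not Australian data.

```python
def tfr(shares, rates):
    """Total fertility rate as a share-weighted sum of subgroup rates."""
    return sum(w * f for w, f in zip(shares, rates))

def elasticities(shares, rates):
    """Elasticity d ln(TFR) / d ln(f_g) = w_g * f_g / TFR for each subgroup g."""
    total = tfr(shares, rates)
    return [w * f / total for w, f in zip(shares, rates)]

def effectiveness_ratios(shares, rates):
    """Elasticity divided by group size: the cost-effectiveness screen."""
    return [e / w for e, w in zip(elasticities(shares, rates), shares)]

# Invented subgroups, e.g. parity 0, parity 1-2, parity 3+
shares = [0.45, 0.40, 0.15]   # population shares (sum to 1)
rates = [0.8, 2.1, 3.0]       # subgroup fertility rates

ratios = effectiveness_ratios(shares, rates)
```

For a linear aggregate like this the elasticities sum to one, and a subgroup's effectiveness ratio is simply its fertility rate divided by TFR, so higher-fertility subgroups (here the stand-in for parity 3+) screen as the cheaper targets, consistent with the abstract's conclusion.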

  4. An implementation of the diagnosis method DYANA, applied to a combined heat-power device

    Energy Technology Data Exchange (ETDEWEB)

    Van der Neut, F.

    1993-10-01

    The development and implementation of the monitor-and-diagnosis method DYANA is presented. This implementation is applied to and tested on a combined heat and power generating device (CHP). The steps taken in realizing this implementation are evaluated in detail. In chapter two the theory behind DYANA is recapitulated. Attention is paid to the basic theory of diagnosis, and the steps of the path from this theory to the algorithm DYANA are revealed. These steps include the hierarchical approach, and explain the following features of DYANA: a) the use of best-first dynamic model zooming based on heuristics with respect to parsimony of the number of components within the diagnoses, b) the use of consistency of fault models with observations to focus on the most likely diagnoses, and c) the use of online diagnosis: the current set of diagnoses is incrementally updated after a new observation of the system is made. In chapter three the relevant aspects of the system to be diagnosed, the CHP, are dealt with in detail. An explanation is given of the broad working of the CHP, its hierarchical structure and mathematical representation are given, observation of the CHP is discussed, and some possible forms of fault models are stated. In chapter four the pseudocode of the implementation developed for DYANA is presented. The pseudocode consists of two parts: the monitoring process (using numerical simulation) and the diagnostic process. The differences between the pseudocode and the actual implementation are mentioned. The CHP is then monitored and diagnosed with this algorithm, and the results of this test are given in chapter five. An actual implementation of DYANA can be found in a separately supplied appendix, the Programme Appendix. The implementation of the monitoring process is meant only for this example of the CHP. The code for the diagnostic process can be easily adjusted for diagnosing other devices, such as electronic circuits. The language is Pascal.
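The online-diagnosis feature (c) can be illustrated with a toy consistency-based loop: keep a set of candidate diagnoses and prune those inconsistent with each new observation. The component names and observations below are invented, and the sketch omits DYANA's fault models and best-first zooming.

```python
from itertools import combinations

COMPONENTS = ["pump", "valve", "burner"]

def all_candidates(max_faults=1):
    """Candidate diagnoses, smallest first (the parsimony heuristic)."""
    cands = [frozenset()]
    for k in range(1, max_faults + 1):
        cands += [frozenset(c) for c in combinations(COMPONENTS, k)]
    return cands

def update(candidates, consistent_with):
    """Incremental step: drop diagnoses that contradict a new observation."""
    return [d for d in candidates if consistent_with(d)]

cands = all_candidates(max_faults=1)
# Observation 1: output temperature too low, so the all-healthy diagnosis fails
cands = update(cands, lambda d: len(d) > 0)
# Observation 2: pump pressure normal, so pump faults are inconsistent
cands = update(cands, lambda d: "pump" not in d)
```

After the two observations only the single-fault diagnoses involving the valve or the burner survive, mirroring how the current diagnosis set shrinks as observations arrive.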

  5. Proposal of inspection method of radiation protection applied to nuclear medicine establishments

    International Nuclear Information System (INIS)

    Mendes, Leopoldino da Cruz Gouveia

    2003-01-01

    The principal objective of this paper is to implement an impartial and efficient inspection method that ensures correct and safe use of ionizing radiation in the field of Nuclear Medicine. The Radiological Protection Model was tested in 113 Nuclear Medicine Services (NMS) all over the country, at a biennial analysis frequency (1996, 1998, 2000 and 2002). The data sheet comprised general information about the structure of the NMS and a technical approach. In the analytical process, a methodology of assigning different importance levels to each of the 82 features was adopted, based on the risk factors stated in the CNEN NE standards and in the IAEA recommendations. From this point of view, whenever a feature does not fit one of the rules above, it corresponds to a radioprotection fault and is assigned a grade. The sum of these grades classifies the NMS into one of three ranges, as follows: operating without restriction (100 points and below); operating with restriction (between 100 and 300 points); temporary shutdown (300 points and above). The allowance of the second group to carry on operating should be attached to a defined and restricted period of time (six to twelve months), supposed large enough for the NMS to solve the problems, with a new evaluation proceeding then. The NMSs classified in the third group are supposed to go back into operation only when they fulfil all the pending radioprotection requirements. Meanwhile, until the next regular evaluation, a multiplication factor 2^n is applied to recalcitrant NMSs, where n is the number of unresolved occurrences. The previous establishment of those items of radioprotection, with their respective grades, excluded subjective and personal values in the judgement and technical evaluation of the institutions. (author)
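The scoring scheme described above translates directly into code. The point ranges follow the abstract; the handling of a score of exactly 100, and the example grades, are choices made here since the abstract leaves them open.

```python
def classify(score):
    """Map a summed radioprotection fault score to an operating status."""
    if score <= 100:
        return "operating without restriction"
    if score < 300:
        return "operating with restriction"
    return "temporary shutdown"

def recalcitrance_score(score, n_unresolved):
    """The 2**n multiplier applied to services with n recurring faults."""
    return score * 2 ** n_unresolved

# A service with 150 fault points operates under restriction; if the same
# two faults persist at re-evaluation, its score is quadrupled.
status = classify(150)
escalated = recalcitrance_score(150, 2)
```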

  6. Arctic industrial activities compilation

    International Nuclear Information System (INIS)

    1991-01-01

    Most industrial activities in the Beaufort Sea region are directly or indirectly associated with the search for oil and gas. Activities in marine areas include dredging, drilling, seismic and sounding surveys, island/camp maintenance, vessel movements, helicopter and fixed-wing flights, and ice-breaking. This inventory contains a summary of chemical usage at 119 offshore drilling locations in the Beaufort Sea, Arctic Islands and Davis Strait of the Canadian Arctic between 1973 and 1987. Data are graphically displayed for evaluating patterns of drill waste discharge in the three offshore drilling areas. These displays include a comparison of data obtained from tour sheets and well history records, summaries of drilling mud chemicals used by year, well and oil company, frequency of wells drilled as a function of water depth, and offshore drilling activity by year, company, and platform. 21 refs., 104 figs., 2 tabs

  7. Estimation Methods for Infinite-Dimensional Systems Applied to the Hemodynamic Response in the Brain

    KAUST Repository

    Belkhatir, Zehor

    2018-05-01

    Infinite-Dimensional Systems (IDSs) which have been made possible by recent advances in mathematical and computational tools can be used to model complex real phenomena. However, due to physical, economic, or stringent non-invasive constraints on real systems, the underlying characteristics for mathematical models in general (and IDSs in particular) are often missing or subject to uncertainty. Therefore, developing efficient estimation techniques to extract missing pieces of information from available measurements is essential. The human brain is an example of IDSs with severe constraints on information collection from controlled experiments and invasive sensors. Investigating the intriguing modeling potential of the brain is, in fact, the main motivation for this work. Here, we will characterize the hemodynamic behavior of the brain using functional magnetic resonance imaging data. In this regard, we propose efficient estimation methods for two classes of IDSs, namely Partial Differential Equations (PDEs) and Fractional Differential Equations (FDEs). This work is divided into two parts. The first part addresses the joint estimation problem of the state, parameters, and input for a coupled second-order hyperbolic PDE and an infinite-dimensional ordinary differential equation using sampled-in-space measurements. Two estimation techniques are proposed: a Kalman-based algorithm that relies on a reduced finite-dimensional model of the IDS, and an infinite-dimensional adaptive estimator whose convergence proof is based on the Lyapunov approach. We study and discuss the identifiability of the unknown variables for both cases. The second part contributes to the development of estimation methods for FDEs where major challenges arise in estimating fractional differentiation orders and non-smooth pointwise inputs. First, we propose a fractional high-order sliding mode observer to jointly estimate the pseudo-state and input of commensurate FDEs. Second, we propose a

  8. Disparities in Arctic Health

    Centers for Disease Control (CDC) Podcasts

    2008-02-04

    Life at the top of the globe is drastically different. Harsh climate devoid of sunlight part of the year, pockets of extreme poverty, and lack of physical infrastructure interfere with healthcare and public health services. Learn about the challenges of people in the Arctic and how research and the International Polar Year address them.  Created: 2/4/2008 by Emerging Infectious Diseases.   Date Released: 2/20/2008.

  9. Higher Order, Hybrid BEM/FEM Methods Applied to Antenna Modeling

    Science.gov (United States)

    Fink, P. W.; Wilton, D. R.; Dobbins, J. A.

    2002-01-01

    In this presentation, the authors address topics relevant to higher order modeling using hybrid BEM/FEM formulations. The first of these is the limitation on convergence rates imposed by geometric modeling errors in the analysis of scattering by a dielectric sphere. The second topic is the application of an Incomplete LU Threshold (ILUT) preconditioner to solve the linear system resulting from the BEM/FEM formulation. The final topic is the application of the higher order BEM/FEM formulation to antenna modeling problems. The authors have previously presented work on the benefits of higher order modeling. To achieve these benefits, special attention is required in the integration of singular and near-singular terms arising in the surface integral equation. Several methods for handling these terms have been presented. It is also well known that achieving the high rates of convergence afforded by higher order bases may also require the employment of higher order geometry models. A number of publications have described the use of quadratic elements to model curved surfaces. The authors have shown in an EFIE formulation, applied to scattering by a PEC sphere, that quadratic order elements may be insufficient to prevent the domination of modeling errors. In fact, on a PEC sphere with radius r = 0.58 λ₀, a quartic order geometry representation was required to obtain a convergence benefit from quadratic bases when compared to the convergence rate achieved with linear bases. Initial trials indicate that, for a dielectric sphere of the same radius, requirements on the geometry model are not as severe as for the PEC sphere. The authors will present convergence results for higher order bases as a function of the geometry model order in the hybrid BEM/FEM formulation applied to dielectric spheres. It is well known that the system matrix resulting from the hybrid BEM/FEM formulation is ill-conditioned. For many real applications, a good preconditioner is required
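As a generic illustration of ILUT preconditioning (on a small real-valued test matrix, not a BEM/FEM system), SciPy's `spilu` implements an incomplete LU factorization with a drop threshold and fill limit, and its solve routine can be wrapped as a preconditioner for an iterative solver:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spilu, gmres, LinearOperator

n = 200
# Small sparse tridiagonal test matrix standing in for an ill-conditioned system
A = diags([-1.0, 2.001, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU with threshold
M = LinearOperator((n, n), matvec=ilu.solve)    # apply the factors as M ~ A^-1

x, info = gmres(A, b, M=M)                      # info == 0 on convergence
```

With a good incomplete factorization the Krylov iteration converges in a handful of steps; without the preconditioner, convergence on an ill-conditioned matrix can stall.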

  10. Four Methods for Completing the Conceptual Development Phase of Applied Theory Building Research in HRD

    Science.gov (United States)

    Storberg-Walker, Julia; Chermack, Thomas J.

    2007-01-01

    The purpose of this article is to describe four methods for completing the conceptual development phase of theory building research for single or multiparadigm research. The four methods selected for this review are (1) Weick's method of "theorizing as disciplined imagination" (1989); (2) Whetten's method of "modeling as theorizing" (2002); (3)…

  11. An approach for prediction of petroleum production facility performance considering Arctic influence factors

    International Nuclear Information System (INIS)

    Gao Xueli; Barabady, Javad; Markeset, Tore

    2010-01-01

    As the oil and gas (O and G) industry is increasing the focus on petroleum exploration and development in the Arctic region, it is becoming increasingly important to design exploration and production facilities to suit the local operating conditions. The cold and harsh climate, the long distance from customer and suppliers' markets, and the sensitive environment may have considerable influence on the choice of design solutions and production performance characteristics such as throughput capacity, reliability, availability, maintainability, and supportability (RAMS) as well as operational and maintenance activities. Due to this, data and information collected for similar systems used in a normal climate may not be suitable. Hence, it is important to study and develop methods for prediction of the production performance characteristics during the design and operation phases. The aim of this paper is to present an approach for prediction of the production performance for oil and gas production facilities considering influencing factors in Arctic conditions. The proportional repair model (PRM) is developed in order to predict repair rate in Arctic conditions. The model is based on the proportional hazard model (PHM). A simple case study is used to demonstrate how the proposed approach can be applied.
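The proportional form borrowed from the PHM can be sketched as a baseline repair rate scaled by an exponential covariate term; the covariates and coefficients below are invented for illustration and are not the paper's model:

```python
import math

def proportional_rate(rate0, betas, covariates):
    """Proportional-hazards form: rate(z) = rate0 * exp(beta . z)."""
    return rate0 * math.exp(sum(b * z for b, z in zip(betas, covariates)))

rate0 = 0.05                   # baseline repair rate under normal conditions
betas = [-0.02, -0.04, -0.3]   # per deg C below zero, per m/s wind, remoteness
arctic = [25.0, 10.0, 1.0]     # -25 C, 10 m/s wind, remote site

rate_arctic = proportional_rate(rate0, betas, arctic)
```

With these (negative) coefficients the Arctic repair rate falls below the baseline, i.e. repairs take longer in cold, windy, remote conditions; in practice the coefficients would be estimated from field data, as in the PHM.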

  12. Biodiversity of arctic marine fishes

    DEFF Research Database (Denmark)

    Mecklenburg, Catherine W.; Møller, Peter Rask; Steinke, Dirk

    2011-01-01

    Taxonomic and distributional information on each fish species found in arctic marine waters is reviewed, and a list of families and species with commentary on distributional records is presented. The list incorporates results from examination of museum collections of arctic marine fishes dating b...

  13. Mining in the European Arctic

    NARCIS (Netherlands)

    van Dam, Kim; Scheepstra, Annette; Gille, Johan; Stępień, Adam; Koivurova, Timo

    The European Arctic is currently experiencing an upsurge in mining activities, but future developments will be highly sensitive to mineral price fluctuations. The EU is a major consumer and importer of Arctic raw materials. As the EU is concerned about the security of supply, it encourages domestic

  14. Non-regularized inversion method from light scattering applied to ferrofluid magnetization curves for magnetic size distribution analysis

    International Nuclear Information System (INIS)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.

    2014-01-01

    A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
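The inversion step can be sketched with SciPy's non-negative least squares on a synthetic example (this is not the MINORIM code, and all values are illustrative): expand the magnetization curve in Langevin responses on a grid of dipole moments and solve for non-negative number densities.

```python
import numpy as np
from scipy.optimize import nnls

def langevin(x):
    """L(x) = coth(x) - 1/x, the superparamagnetic response of one moment."""
    return 1.0 / np.tanh(x) - 1.0 / x

H = np.linspace(0.1, 5.0, 50)          # reduced field sweep (avoids x = 0)
moments = np.array([0.5, 1.0, 2.0])    # grid of reduced dipole moments

# Kernel: column j is the magnetization of moment j over the field sweep
K = np.column_stack([m * langevin(m * H) for m in moments])

true_n = np.array([0.0, 2.0, 1.0])     # known bimodal number densities
M = K @ true_n                         # synthetic magnetization curve

n_fit, residual = nnls(K, M)           # enforce non-negative number densities
```

On noiseless synthetic data the non-negative fit recovers the bimodal distribution without any assumption about its shape, which is the point of the method.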

  15. Arctic Nuclear Waste Assessment Program

    International Nuclear Information System (INIS)

    Edson, R.

    1995-01-01

    The Arctic Nuclear Waste Assessment Program (ANWAP) was initiated in 1993 as a result of US congressional concern over the disposal of nuclear materials by the former Soviet Union into the Arctic marine environment. The program comprises approximately 70 different projects. To date approximately ten percent of the funds has gone to Russian institutions for research and logistical support. The collaboration also includes the IAEA International Arctic Seas Assessment Program. The major conclusion from the research to date is that the largest signals for region-wide radionuclide contamination in the Arctic marine environment appear to arise from the following: 1) atmospheric testing of nuclear weapons, a practice that has been discontinued; 2) nuclear fuel reprocessing wastes carried into the Arctic from reprocessing facilities in Western Europe, and 3) accidents such as Chernobyl and the 1957 explosion at Chelyabinsk-65

  16. A global method for calculating plant CSR ecological strategies applied across biomes world-wide

    NARCIS (Netherlands)

    Pierce, S.; Negreiros, D.; Cerabolini, B.E.L.; Kattge, J.; Díaz, S.; Kleyer, M.; Shipley, B.; Wright, S.J.; Soudzilovskaia, N.A.; Onipchenko, V.G.; van Bodegom, P.M.; Frenette-Dussault, C.; Weiher, E.; Pinho, B.X.; Cornelissen, J.H.C.; Grime, J.P.; Thompson, K.; Hunt, R.; Wilson, P.J.; Buffa, G.; Nyakunga, O.C.; Reich, P.B.; Caccianiga, M.; Mangili, F.; Ceriani, R.M.; Luzzaro, A.; Brusa, G.; Siefert, A.; Barbosa, N.P.U.; Chapin III, F.S.; Cornwell, W.K.; Fang, Jingyun; Wilson Fernandez, G.; Garnier, E.; Le Stradic, S.; Peñuelas, J.; Melo, F.P.L.; Slaviero, A.; Tabarrelli, M.; Tampucci, D.

    2017-01-01

    Competitor, stress-tolerator, ruderal (CSR) theory is a prominent plant functional strategy scheme previously applied to local floras. Globally, the wide geographic and phylogenetic coverage of available values of leaf area (LA), leaf dry matter content (LDMC) and specific leaf area (SLA)

  17. Applying Item Response Theory Methods to Examine the Impact of Different Response Formats

    Science.gov (United States)

    Hohensinn, Christine; Kubinger, Klaus D.

    2011-01-01

    In aptitude and achievement tests, different response formats are usually used. A fundamental distinction must be made between the class of multiple-choice formats and the constructed response formats. Previous studies have examined the impact of different response formats applying traditional statistical approaches, but these influences can also…

  18. How to apply the optimal estimation method to your lidar measurements for improved retrievals of temperature and composition

    Science.gov (United States)

    Sica, R. J.; Haefele, A.; Jalali, A.; Gamage, S.; Farhani, G.

    2018-04-01

    The optimal estimation method (OEM) has a long history of use in passive remote sensing, but has only recently been applied to active instruments like lidar. The OEM's advantages over traditional techniques include obtaining a full systematic and random uncertainty budget plus the ability to work with the raw measurements without first applying instrument corrections. In our meeting presentation we will show you how to use the OEM for temperature and composition retrievals for Rayleigh-scatter, Raman-scatter and DIAL lidars.
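The OEM machinery can be illustrated with a minimal linear Gaussian retrieval; the Jacobian, prior and covariances below are invented toy values, not a lidar forward model:

```python
import numpy as np

rng = np.random.default_rng(0)

K = rng.normal(size=(8, 3))    # toy forward-model Jacobian: 8 channels, 3 states
x_true = np.array([1.0, -0.5, 2.0])
Se = 0.01 * np.eye(8)          # measurement-noise covariance
Sa = 4.0 * np.eye(3)           # a-priori covariance
xa = np.zeros(3)               # a-priori state

y = K @ x_true                 # noiseless synthetic measurement

# Maximum a posteriori retrieval and its posterior covariance
S_post = np.linalg.inv(np.linalg.inv(Sa) + K.T @ np.linalg.inv(Se) @ K)
x_hat = xa + S_post @ K.T @ np.linalg.inv(Se) @ (y - K @ xa)
```

The posterior covariance `S_post` is exactly the full random-uncertainty budget the OEM delivers alongside the retrieved state; systematic terms enter through additional Jacobians with respect to model parameters.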

  19. Applying Item Response Theory methods to design a learning progression-based science assessment

    Science.gov (United States)

    Chen, Jing

    Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions of this study are: (1) how to use items in different formats to classify students into levels on the learning progression, (2) how to design a test to give good information about students' progress through the learning progression of a particular construct and (3) what characteristics of test items support their use for assessing students' levels. Data used for this study were collected from 1500 elementary and secondary school students during 2009-2010. The written assessment was developed in several formats such as Constructed Response (CR) items, Ordered Multiple Choice (OMC) and Multiple True or False (MTF) items. The following are the main findings of this study. The OMC, MTF and CR items might measure different components of the construct. A single construct explained most of the variance in students' performances. However, additional dimensions in terms of item format can explain a certain amount of the variance in student performance. So additional dimensions need to be considered when we want to capture the differences in students' performances on different types of items targeting the understanding of the same underlying progression. Items in each item format need to be improved in certain ways to classify students more accurately into the learning progression levels. This study establishes some general steps that can be followed to design other learning progression-based tests as well. For example, first, the boundaries between levels on the IRT scale can be defined by using the means of the item thresholds across a set of good items. Second, items in multiple formats can be selected to achieve the information criterion at all
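The boundary-setting step mentioned above can be sketched directly: take the mean of the item thresholds (in logits) for each level transition and use the resulting cut points to place ability estimates into levels. The threshold values here are hypothetical.

```python
import statistics

# Thresholds (logits) separating level k from k+1, one value per good item
item_thresholds = {
    "level1_to_2": [-1.2, -0.9, -1.0],
    "level2_to_3": [0.1, 0.3, -0.1],
    "level3_to_4": [1.4, 1.1, 1.6],
}

# Each boundary is the mean threshold across the set of good items
boundaries = {k: statistics.mean(v) for k, v in item_thresholds.items()}

def classify(theta, boundaries):
    """Place a student ability estimate into a learning-progression level."""
    level = 1
    for cut in sorted(boundaries.values()):
        if theta >= cut:
            level += 1
    return level
```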

  20. AMAP Assessment 2013: Arctic Ocean acidification

    Science.gov (United States)

    2013-01-01

    This assessment report presents the results of the 2013 AMAP Assessment of Arctic Ocean Acidification (AOA). This is the first such assessment dealing with AOA from an Arctic-wide perspective, and complements several assessments that AMAP has delivered over the past ten years concerning the effects of climate change on Arctic ecosystems and people. The Arctic Monitoring and Assessment Programme (AMAP) is a group working under the Arctic Council. The Arctic Council Ministers have requested AMAP to: - produce integrated assessment reports on the status and trends of the conditions of the Arctic ecosystems;

  1. Evaluation of Two Fitting Methods Applied for Thin-Layer Drying of Cape Gooseberry Fruits

    Directory of Open Access Journals (Sweden)

    Erkan Karacabey

    Full Text Available ABSTRACT Drying data for cape gooseberry were used to compare two fitting methods, namely the 2-step and 1-step methods. Literature data were also used to confirm the results. To demonstrate the applicability of these methods, two primary models (Page, two-term exponential) were selected. A linear equation was used as the secondary model. As is well known from previous modelling studies on drying, the 2-step method requires at least two regressions: one for the primary model and one for the secondary model (if there is only one environmental condition, such as temperature). On the other hand, one regression is enough for the 1-step method. Although previous studies on kinetic modelling of the drying of foods were based on the 2-step method, this study indicates that the 1-step method may also be a good alternative, with some advantages such as producing an informative figure and reducing calculation time.
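A minimal 1-step fit of the Page model with a linear secondary model can be sketched on synthetic data (parameter values are invented, not the cape gooseberry results): the secondary model k(T) = a + b·T is substituted into MR = exp(-k·t^n), so a single regression estimates a, b and n across all temperatures at once.

```python
import numpy as np
from scipy.optimize import curve_fit

def page_1step(X, a, b, n):
    """Page model MR = exp(-k t^n) with the secondary model k = a + b*T."""
    t, T = X
    return np.exp(-(a + b * T) * t**n)

t = np.tile(np.linspace(1.0, 60.0, 30), 2)   # drying times for two runs
T = np.repeat([50.0, 70.0], 30)              # two drying temperatures

a0, b0, n0 = 0.01, 0.002, 1.2                # "true" synthetic parameters
MR = page_1step((t, T), a0, b0, n0)          # noiseless moisture-ratio data

popt, _ = curve_fit(page_1step, (t, T), MR, p0=[0.005, 0.001, 1.0])
```

The 2-step alternative would first fit k and n per temperature, then regress k against T; the 1-step version above replaces both regressions with one.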

  2. Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2018-01-01

    of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve...... and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant difference...... in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT...
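Of the three methods, the transfinite one has a compact closed form on a single square cell: a Coons-style blend of the four edge profiles. The sketch below is a generic textbook version of transfinite interpolation on the unit square, not the paper's implementation.

```python
import math

def coons(u, v, left, right, bottom, top):
    """Transfinite (Coons) interpolation inside a unit square from values
    known on its four edges; each edge is a function of one parameter in [0, 1]."""
    blend = ((1 - u) * left(v) + u * right(v)
             + (1 - v) * bottom(u) + v * top(u))
    corners = ((1 - u) * (1 - v) * left(0) + u * (1 - v) * right(0)
               + (1 - u) * v * left(1) + u * v * right(1))
    return blend - corners

# Edges sampled from a known plane f(x, y) = 2x + 3y; Coons reproduces it exactly
f = lambda x, y: 2 * x + 3 * y
left = lambda v: f(0, v)
right = lambda v: f(1, v)
bottom = lambda u: f(u, 0)
top = lambda u: f(u, 1)

value = coons(0.3, 0.7, left, right, bottom, top)
```

The subtraction of the corner term removes the doubly counted corner contributions, so the interpolant matches all four edges exactly, which is what distinguishes the transfinite scheme from simple linear blending.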

  3. Sea ice roughness: the key for predicting Arctic summer ice albedo

    Science.gov (United States)

    Landy, J.; Ehn, J. K.; Tsamados, M.; Stroeve, J.; Barber, D. G.

    2017-12-01

    Although melt ponds on Arctic sea ice evolve in stages, ice with smoother surface topography typically allows the pond water to spread over a wider area, reducing the ice-albedo and accelerating further melt. Building on this theory, we simulated the distribution of meltwater on a range of statistically-derived topographies to develop a quantitative relationship between premelt sea ice surface roughness and summer ice albedo. Our method, previously applied to ICESat observations of the end-of-winter sea ice roughness, could account for 85% of the variance in AVHRR observations of the summer ice-albedo [Landy et al., 2015]. Consequently, an Arctic-wide reduction in sea ice roughness over the ICESat operational period (from 2003 to 2008) explained a drop in ice-albedo that resulted in a 16% increase in solar heat input to the sea ice cover. Here we will review this work and present new research linking pre-melt sea ice surface roughness observations from Cryosat-2 to summer sea ice albedo over the past six years, examining the potential of winter roughness as a significant new source of sea ice predictability. We will further evaluate the possibility for high-resolution (kilometre-scale) forecasts of summer sea ice albedo from waveform-level Cryosat-2 roughness data in the landfast sea ice zone of the Canadian Arctic. Landy, J. C., J. K. Ehn, and D. G. Barber (2015), Albedo feedback enhanced by smoother Arctic sea ice, Geophys. Res. Lett., 42, 10,714-10,720, doi:10.1002/2015GL066712.

  4. Geochronology and geochemistry by the nuclear tracks method: some examples of use in applied geology

    International Nuclear Information System (INIS)

    Poupeau, G.; Soliani Junior, E.

    1988-01-01

    This article discusses some applications of the nuclear tracks method in geochronology, geochemistry and geophysics. In geochronology, after a brief presentation of the principles of fission-track dating and the kinds of geological events measurable by this method, some applications in metallogeny and in petroleum geology are shown. In geochemistry, uses of the fission-track method are related to mining prospecting and uranium prospecting. In geophysics, an important application is earthquake prediction, through continuous monitoring of Rn-222 emanations. (author) [pt

  5. Purists need not apply: the case for pragmatism in mixed methods research.

    Science.gov (United States)

    Florczak, Kristine L

    2014-10-01

    The purpose of this column is to describe several different ways of conducting mixed methods research. The paradigms that underpin both qualitative and quantitative research are also considered, along with a cursory review of classical pragmatism as it relates to conducting mixed methods studies. Finally, the idea of loosely coupled systems as a means to support mixed methods studies is proposed, along with several caveats to researchers who desire to use this new way of obtaining knowledge. © The Author(s) 2014.

  6. The development of a curved beam element model applied to finite elements method

    International Nuclear Information System (INIS)

    Bento Filho, A.

    1980-01-01

    A procedure for the evaluation of the stiffness matrix for a thick curved beam element is developed by means of the minimum potential energy principle, applied to finite elements. The displacement field is prescribed through polynomial expansions, and the interpolation model is determined by comparison of results obtained by the use of a sample of different expansions. As a limiting case of the curved beam, three cases of straight beams with different dimensional ratios are analysed, employing the proposed approach. Finally, an interpolation model is proposed and applied to a curved beam with great curvature. Displacements and internal stresses are determined and the results are compared with those found in the literature. (Author) [pt

  7. Applying formal method to design of nuclear power plant embedded protection system

    International Nuclear Information System (INIS)

    Kim, Jin Hyun; Kim, Il Gon; Sung, Chang Hoon; Choi, Jin Young; Lee, Na Young

    2001-01-01

    A nuclear power embedded protection system is a typical safety-critical system, which detects failures and shuts down the operation of the nuclear reactor. These systems are so critical that they absolutely require safety and reliability. Therefore a nuclear power embedded protection system should undergo complete verification and validation from the design stage. To develop embedded systems, various V and V methods have been provided, and in particular design using formal methods is being studied in other advanced countries. In this paper, we introduce a design method for nuclear power embedded protection systems using various formal methods in various respects, following the nuclear power plant software development guideline

  8. Researching and applying the MRSS method in fuel assembly mechanical design

    International Nuclear Information System (INIS)

    Li Jiwei; Zhou Yunqing; Liu Jiazheng; Tong Xing; Zheng Yixiong

    2014-01-01

    Tolerance analysis is an important part of the mechanical design of fuel assemblies. With the introduction of the MRSS method and of process capability, the relation between the two was discussed. The conditions of applicability of the MRSS method were delimited by calculating the protrusion of the outer strap spring of the grid. The results show that the MRSS method should be preferred for linear tolerance analysis in fuel assemblies with many dimensions, by controlling process capability and considering sensitivities and a modification factor. The results can be accepted both by designers and manufacturers when the MRSS method is used. (authors)
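The contrast between worst-case and statistical stacking that motivates MRSS-style methods can be sketched as follows; the modification-factor value is illustrative, since the abstract does not give one:

```python
import math

def worst_case(tols):
    """Worst-case stack: all tolerances at their extremes simultaneously."""
    return sum(abs(t) for t in tols)

def rss(tols):
    """Plain root-sum-of-squares (statistical) stack."""
    return math.sqrt(sum(t * t for t in tols))

def mrss(tols, correction=1.5):
    """Modified RSS: inflate the statistical stack by a correction factor."""
    return correction * rss(tols)

tols = [0.10, 0.05, 0.08, 0.12]   # hypothetical dimension tolerances, mm
stack = mrss(tols)
```

The modified stack always lies between the optimistic RSS value and the pessimistic worst case, which is why it appeals to both designers and manufacturers when many dimensions contribute.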

  9. Reliability analysis of reactor systems by applying probability method; Analiza pouzdanosti reaktorskih sistema primenom metoda verovatnoce

    Energy Technology Data Exchange (ETDEWEB)

    Milivojevic, S [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1974-12-15

    The probability method chosen for analysing reactor system reliability is considered realistic since it is based on verified experimental data. In fact, this is a statistical method. The probability method developed takes into account the probability distribution of permitted levels of relevant parameters and their particular influence on the reliability of the system as a whole. The proposed method is rather general, and was used for the problem of thermal safety analysis of a reactor system. This analysis makes it possible to analyse basic properties of the system under different operating conditions; expressed in the form of probabilities, the results show the reliability of the system as a whole as well as the reliability of each component.
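The core idea, the probability that each relevant parameter stays within its permitted level, combined over independent components, can be sketched generically; the limits and distributions below are invented, not the paper's data:

```python
import math

def normal_cdf(x, mu, sigma):
    """P(parameter <= x) for a normally distributed parameter."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Probability each thermal parameter stays below its permitted level
r_clad = normal_cdf(650.0, mu=600.0, sigma=20.0)    # clad temperature, C
r_fuel = normal_cdf(2400.0, mu=2200.0, sigma=80.0)  # fuel temperature, C

# Independent components in series: all must stay within limits
r_system = r_clad * r_fuel
```

The product form gives both the per-component reliabilities and the system-level figure, mirroring the abstract's claim that the method yields the reliability of each component as well as of the whole.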

  10. An Analysis of Methods Section of Research Reports in Applied Linguistics

    OpenAIRE

    Patrícia Marcuzzo

    2011-01-01

    This work aims at identifying analytical categories and research procedures adopted in the analysis of research articles in Applied Linguistics/EAP in order to propose a systematization of the research procedures in Genre Analysis. For that purpose, 12 research reports and interviews with four authors were analyzed. The analysis showed that the studies are concentrated on the investigation of the macrostructure or of the microstructure of research articles in different fields. Studies about th...

  11. Methods of the professional-applied physical preparation of students of higher educational establishments of economic type

    Directory of Open Access Journals (Sweden)

    Maliar E.I.

    2010-11-01

    Full Text Available The directions of professionally-applied physical preparation of students, with a prevailing use of the means of football, are considered, and the corresponding teaching methods are presented. It is indicated that application of the circular (circuit) training method assists the development of discipline, honesty, and the rational use of time. It is underlined that teaching should provide the shortest path to mastering the planned knowledge, abilities and skills, and to the improvement of physical qualities.

  12. Statistical methods applied to gamma-ray spectroscopy algorithms in nuclear security missions.

    Science.gov (United States)

    Fagan, Deborah K; Robinson, Sean M; Runkle, Robert C

    2012-10-01

    Gamma-ray spectroscopy is a critical research and development priority for a range of nuclear security missions, specifically the interdiction of special nuclear material involving the detection and identification of gamma-ray sources. We categorize existing methods by the statistical methods on which they rely and identify methods that have yet to be considered. Current methods estimate the effect of counting uncertainty but in many cases do not address larger sources of decision uncertainty, which may be significantly more complex. Thus, significantly improving algorithm performance may require greater coupling between the problem physics that drives data acquisition and the statistical methods that analyze such data. Untapped statistical methods, such as Bayesian model averaging and hierarchical and empirical Bayes methods, could reduce decision uncertainty by rigorously and comprehensively incorporating all sources of uncertainty. Application of such methods should further meet the needs of nuclear security missions by improving upon the existing numerical infrastructure for which these analyses have not been conducted. Copyright © 2012 Elsevier Ltd. All rights reserved.
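Bayesian model averaging, which the authors name as an untapped method, weights each candidate model (here, e.g., competing source hypotheses) by its posterior probability rather than committing to a single model. A minimal sketch, with invented log-evidences and priors:

```python
import math

def bma_weights(log_evidences, priors):
    # Posterior model probabilities: w_k proportional to
    # p(data | M_k) * p(M_k), computed in log space for stability.
    logs = [le + math.log(p) for le, p in zip(log_evidences, priors)]
    m = max(logs)                      # subtract max to avoid underflow
    ws = [math.exp(l - m) for l in logs]
    s = sum(ws)
    return [w / s for w in ws]

# Two hypothetical source-identification models with invented evidences
w = bma_weights([-10.0, -12.0], [0.5, 0.5])
print(w)  # the first model receives most of the posterior weight
```

Any quantity of interest (e.g. a declared-threat probability) would then be averaged over models with these weights, so model uncertainty propagates into the final decision.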

  13. A direct algebraic method applied to obtain complex solutions of some nonlinear partial differential equations

    International Nuclear Information System (INIS)

    Zhang Huiqun

    2009-01-01

    By using some exact solutions of an auxiliary ordinary differential equation, a direct algebraic method is described to construct the exact complex solutions for nonlinear partial differential equations. The method is implemented for the NLS equation, a new Hamiltonian amplitude equation, the coupled Schrodinger-KdV equations and the Hirota-Maccari equations. New exact complex solutions are obtained.

  14. Two Thermoeconomic Diagnosis Methods Applied to Representative Operating Data of a Commercial Transcritical Refrigeration Plant

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Sigthorsson, Oskar; Elmegaard, Brian

    2017-01-01

    In order to investigate options for improving the maintenance protocol of commercial refrigeration plants, two thermoeconomic diagnosis methods were evaluated on a state-of-the-art refrigeration plant. A common relative indicator was proposed for the two methods in order to directly compare the q...

  15. Heterogeneity among violence-exposed women: applying person-oriented research methods.

    Science.gov (United States)

    Nurius, Paula S; Macy, Rebecca J

    2008-03-01

    Variability of experience and outcomes among violence-exposed people pose considerable challenges toward developing effective prevention and treatment protocols. To address these needs, the authors present an approach to research and a class of methodologies referred to as person oriented. Person-oriented tools support assessment of meaningful patterns among people that distinguish one group from another, subgroups for whom different interventions are indicated. The authors review the conceptual base of person-oriented methods, outline their distinction from more familiar variable-oriented methods, present descriptions of selected methods as well as empirical applications of person-oriented methods germane to violence exposure, and conclude with discussion of implications for future research and translation between research and practice. The authors focus on violence against women as a population, drawing on stress and coping theory as a theoretical framework. However, person-oriented methods hold utility for investigating diversity among violence-exposed people's experiences and needs across populations and theoretical foundations.

  16. Applying cognitive developmental psychology to middle school physics learning: The rule assessment method

    Science.gov (United States)

    Hallinen, Nicole R.; Chi, Min; Chin, Doris B.; Prempeh, Joe; Blair, Kristen P.; Schwartz, Daniel L.

    2013-01-01

    Cognitive developmental psychology often describes children's growing qualitative understanding of the physical world. Physics educators may be able to use the relevant methods to advantage for characterizing changes in students' qualitative reasoning. Siegler developed the "rule assessment" method for characterizing levels of qualitative understanding for two factor situations (e.g., volume and mass for density). The method assigns children to rule levels that correspond to the degree they notice and coordinate the two factors. Here, we provide a brief tutorial plus a demonstration of how we have used this method to evaluate instructional outcomes with middle-school students who learned about torque, projectile motion, and collisions using different instructional methods with simulations.

  17. Applying the Taguchi method to river water pollution remediation strategy optimization.

    Science.gov (United States)

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-04-15

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.

  18. Applying the Taguchi Method to River Water Pollution Remediation Strategy Optimization

    Directory of Open Access Journals (Sweden)

    Tsung-Ming Yang

    2014-04-01

    Full Text Available Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.
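The Taguchi method ranks factor settings by a signal-to-noise ratio; for a pollution-reduction objective, the "smaller is better" form applies. A minimal sketch with invented response values (not data from the study):

```python
import math

def sn_smaller_is_better(responses):
    # Taguchi signal-to-noise ratio for a "smaller is better" response
    # such as residual pollution: SN = -10 * log10(mean(y^2)).
    return -10.0 * math.log10(sum(y * y for y in responses) / len(responses))

# Invented pollution responses for two candidate factor levels;
# the level with the higher SN ratio is preferred.
print(sn_smaller_is_better([5.0, 6.0]))
print(sn_smaller_is_better([18.0, 20.0]))
```

Sequencing the decision variables by the spread of their SN ratios across levels is what lets the Taguchi step prune the solution space before the optimizer searches it.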

  19. Study on State Transition Method Applied to Motion Planning for a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Xuyang Wang

    2008-11-01

    Full Text Available This paper presents an approach to motion planning for a humanoid robot using a state transition method. In this method, motion planning is simplified by introducing a state-space that describes the whole motion series, where each state corresponds to a contact state specified during the motion. The continuous motion is represented by a sequence of discrete states, and the transition between two neighbouring states, i.e. the state transition, can be realized using traditional path planning methods. Considering the dynamic stability of the robot, a state transition method based on a search strategy is proposed. Different sets of trajectories are generated using a variable 5th-order polynomial interpolation method. After quantifying the stability of these trajectories, those with the largest stability margin are selected as the final state transition trajectories. A rising motion is used as an example to validate the method, and the simulation results show the proposed method to be feasible and effective.
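A 5th-order polynomial of the kind mentioned is often written in its rest-to-rest (minimum-jerk) closed form, which fixes position, velocity, and acceleration at both endpoints. The sketch below is that standard form, not the paper's actual trajectory generator, and the joint values are invented:

```python
def quintic_rest_to_rest(x0, xf, T, t):
    # Minimum-jerk quintic trajectory: position, velocity and
    # acceleration all match rest conditions at t = 0 and t = T.
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

# Joint angle moving from 0 to 1 rad over 2 s (invented values)
for t in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(t, quintic_rest_to_rest(0.0, 1.0, 2.0, t))
```

Varying the boundary conditions (the "variable" part of the interpolation) yields the different candidate trajectories among which the largest-stability-margin one is selected.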

  20. CO2 (carbon dioxide) fixation by applying new chemical absorption-precipitation methods

    International Nuclear Information System (INIS)

    Park, Sangwon; Lee, Min-Gu; Park, Jinwon

    2013-01-01

    CO₂ (carbon dioxide) is the most common greenhouse gas, and most of it is emitted by human activities. Methods for reducing CO₂ emissions can be divided into physical, chemical, and biochemical methods. Among the physical and chemical methods, CCS (carbon capture and storage) is a well-known reduction technology. However, this method has many disadvantages, including the required storage area. In general, CCS requires capture and storage processes. In this study, we propose a method for reusing the absorbed CO₂ either in nature or in industry. The emitted CO₂ was converted into CO₃²⁻ using a conversion solution, and then made into a carbonate by combining the conversion solution with metal ions at normal temperature and pressure. The resulting carbonate was analyzed using FT-IR (Fourier transform infrared spectroscopy) and XRD (X-ray diffraction). We verified the formation of a solid consisting of calcite and vaterite. In addition, the conversion solution that was used could be reused in the same process of CCS technology. Our study demonstrates a successful method of reducing and reusing emitted CO₂, thereby making CO₂ a potential future resource. - Highlights: • This study focused on a new CO₂ fixation process method. • In CCS technology, the desorption process requires high thermal energy consumption. • This new method does not require a desorption process because CO₂ fixation is accomplished through CaCO₃ crystallization. • A new absorption method is possible instead of the conventional absorption-desorption process. • This is not only a rapid reaction for fixing CO₂, but also economically feasible

  1. Determination of the calcium salt content on the trunk skeleton and on the peripheral bone applying the Compton backscattering method and the ashing method

    International Nuclear Information System (INIS)

    Schmitt, K.W.

    1974-01-01

    The Compton backscattering method is applied to determine the bone decalcification. Post mortal excised calcanei and vertebral bodies of 50 people are taken as investigation objects which are examined for their calcium salt content and are then ashed for control measurement. The results show that the method would be better suited to early diagnosis of calcipenic osteopathy than the densitometric method used today on extremity bones. (ORU/LH) [de

  2. Inuit outside the Arctic : Migration, identity and perceptions

    NARCIS (Netherlands)

    Terpstra, Tekke

    2015-01-01

    Today many Inuit live outside the Arctic. This research deals with the experiences of these migrants. The focus is on Greenlanders in Denmark, but their experiences are compared to those of Inuit in southern Canada. However, various themes discussed in this study also apply to other groups of

  3. Three-group albedo method applied to the diffusion phenomenon with up-scattering of neutrons

    International Nuclear Information System (INIS)

    Terra, Andre M. Barge Pontes Torres; Silva, Jorge A. Valle da; Cabral, Ronaldo G.

    2007-01-01

    The main objective of this research is to develop a three-group neutron albedo algorithm that considers the up-scattering of neutrons, in order to analyse the diffusion phenomenon in nonmultiplying media. The neutron albedo method is an analytical method that does not attempt to solve explicit equations describing the neutron fluxes; the albedo methodology is therefore very different from conventional methodology such as the neutron diffusion theory model. Graphite is analyzed as a model case. One major application is the determination of nonleakage probabilities, with results that are more understandable in physical terms than those of conventional radiation transport calculations. (author)

  4. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    DEFF Research Database (Denmark)

    Petersen, Bent; Petersen, Thomas Nordahl; Andersen, Pernille

    2009-01-01

    : The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best public available method, Real-SPINE. Both methods associate a reliability...... comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. For this subset, values of 0.79 and 0.74 are obtained using our and the compared method, respectively. This tendency is true for any selected subset....

  5. ReSOLV: Applying Cryptocurrency Blockchain Methods to Enable Global Cross-Platform Software License Validation

    Directory of Open Access Journals (Sweden)

    Alan Litchfield

    2018-05-01

    Full Text Available This paper presents a method for a decentralised peer-to-peer software license validation system using cryptocurrency blockchain technology to ameliorate software piracy, and to provide a mechanism for software developers to protect copyrighted works. Protecting software copyright has been an issue since the late 1970s and software license validation has been a primary method employed in an attempt to minimise software piracy and protect software copyright. The method described creates an ecosystem in which the rights and privileges of participants are observed.

  6. A pulse stacking method of particle counting applied to position sensitive detection

    International Nuclear Information System (INIS)

    Basilier, E.

    1976-03-01

    A position sensitive particle counting system is described. A cyclic readout imaging device serves as an intermediate information buffer. Pulses are allowed to stack in the imager at very high counting rates. Imager noise is completely discriminated to provide very wide dynamic range. The system has been applied to a detector using cascaded microchannel plates. Pulse height spread produced by the plates causes some loss of information. The loss is comparable to the input loss of the plates. The improvement in maximum counting rate is several hundred times over previous systems that do not permit pulse stacking. (Auth.)

  7. A comparative analysis of three metaheuristic methods applied to fuzzy cognitive maps learning

    Directory of Open Access Journals (Sweden)

    Bruno A. Angélico

    2013-12-01

    Full Text Available This work analyses the performance of three different population-based metaheuristic approaches applied to fuzzy cognitive map (FCM) learning in the qualitative control of processes. Fuzzy cognitive maps make it possible to include prior specialist knowledge in the control rule. In particular, Particle Swarm Optimization (PSO), a Genetic Algorithm (GA) and Ant Colony Optimization (ACO) are considered for obtaining appropriate weight matrices for learning the FCM. A statistical convergence analysis over 10000 simulations of each algorithm is presented. In order to validate the proposed approach, two industrial control process problems previously described in the literature are considered in this work.
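Of the three metaheuristics compared, PSO is the simplest to sketch. The minimal implementation below minimizes a toy sphere function rather than an FCM weight-matrix objective; all parameter values (inertia, cognitive/social coefficients, swarm size) are illustrative assumptions:

```python
import random

def pso_minimize(f, dim, n=20, iters=150, lo=-5.0, hi=5.0, seed=1):
    # Minimal particle swarm: inertia w, cognitive c1, social c2.
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]              # per-particle best positions
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]      # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, xs[i][:]
                if val < gval:
                    gval, gbest = val, xs[i][:]
    return gbest, gval

best, val = pso_minimize(lambda x: sum(xi * xi for xi in x), dim=2)
print(val)  # close to 0 for this convex toy objective
```

For FCM learning, `f` would instead score a candidate weight matrix by how well the resulting map reproduces the desired system response.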

  8. Review on characterization methods applied to HTR-fuel element components

    International Nuclear Information System (INIS)

    Koizlik, K.

    1976-02-01

    One of the difficulties which, while on the whole of no special scientific interest, poses many technical problems for the development and production of HTR fuel elements is the proper characterization of the element and its components. Consequently, a lot of work has been done during the past years to develop characterization procedures for the fuel, the fuel kernel, the pyrocarbon for the coatings, the matrix and graphite, and their components binder and filler. This paper tries to give a status report on the characterization procedures applied to HTR fuel in KFA and cooperating institutions. (orig.) [de

  9. Neutron tomography as a reverse engineering method applied to the IS-60 Rover gas turbine

    CSIR Research Space (South Africa)

    Roos, TH

    2011-09-01

    Full Text Available Probably the most common method of reverse engineering in mechanical engineering involves measuring the physical geometry of a component using a coordinate measuring machine (CMM). Neutron tomography, in contrast, is used primarily as a non...

  10. Considerations on the question of applying ion exchange or reverse osmosis methods in boiler feedwater processing

    International Nuclear Information System (INIS)

    Marquardt, K.; Dengler, H.

    1976-01-01

    This consideration is to show that the method of reverse osmosis presents in many cases an interesting and economical alternative to partial and total desalination plants using ion exchangers. The essential advantages of reverse osmosis are a higher degree of automation, no additional salting of the discharged waste water, a small constructional volume of the plant, as well as favourable operational costs as the salt content of the raw water to be processed increases. As reverse osmosis has a relatively high salt breakthrough compared to the ion exchange method, the future tendency in boiler feedwater processing will be towards combining reverse osmosis with post-purification through continuous ion exchange methods. (orig./LH) [de

  11. About one counterexample of applying method of splitting in modeling of plating processes

    Science.gov (United States)

    Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Korobova, I. L.

    2018-05-01

    The paper presents the main factors that affect the uniformity of the thickness distribution of plating on the surface of a product. The experimental search for the optimal values of these factors is expensive and time-consuming, so the problem of adequately simulating coating processes is very relevant. Finite-difference approximations using seven-point and five-point templates, in combination with the splitting method, are considered as solution methods for the equations of the model. To study the correctness of the solution of the model equations by these methods, experiments were conducted on plating with a flat anode and cathode whose relative position in the bath was not changed. The studies have shown that the solution using the splitting method was up to 1.5 times faster, but it did not give adequate results under the given boundary conditions because of the geometric features of the task.

  12. Applying Formal Methods to an Information Security Device: An Experience Report

    National Research Council Canada - National Science Library

    Kirby, Jr, James; Archer, Myla; Heitmeyer, Constance

    1999-01-01

    .... This paper describes a case study in which the SCR method was used to specify and analyze a different class of system, a cryptographic system called CD, which must satisfy a large set of security properties...

  13. A Belief Network Decision Support Method Applied to Aerospace Surveillance and Battle Management Projects

    National Research Council Canada - National Science Library

    Staker, R

    2003-01-01

    This report demonstrates the application of a Bayesian Belief Network decision support method for Force Level Systems Engineering to a collection of projects related to Aerospace Surveillance and Battle Management...

  14. Computational performance of Free Mesh Method applied to continuum mechanics problems

    Science.gov (United States)

    YAGAWA, Genki

    2011-01-01

    The free mesh method (FMM) is a kind of meshless method intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, or a node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm. The aim of the present paper is to review some unique numerical solutions of fluid and solid mechanics obtained by employing FMM as well as the Enriched Free Mesh Method (EFMM), a new version of FMM. Applications to fluid mechanics include compressible flow and the sounding mechanism in air-reed instruments; applications to solid mechanics include automatic remeshing for slow crack growth, the dynamic behavior of solids, and the large-scale eigen-frequency analysis of an engine block. PMID:21558753

  15. Development of characterization methods applied to radioactive wastes and waste packages

    International Nuclear Information System (INIS)

    Guy, C.; Bienvenu, Ph.; Comte, J.; Excoffier, E.; Dodi, A.; Gal, O.; Gmar, M.; Jeanneau, F.; Poumarede, B.; Tola, F.; Moulin, V.; Jallu, F.; Lyoussi, A.; Ma, J.L.; Oriol, L.; Passard, Ch.; Perot, B.; Pettier, J.L.; Raoux, A.C.; Thierry, R.

    2004-01-01

    This document is a compilation of the R and D studies carried out in the framework of axis 3 of the December 1991 law on the conditioning and storage of high-level and long-lived radioactive wastes and waste packages, relative to the methods of characterization of these wastes. This R and D work has made it possible to implement and qualify new methods (characterization of long-lived radioelements, high-energy imaging...) and also to improve existing methods by lowering detection limits and reducing the uncertainties of measured data. This document is the result of the scientific production of several CEA laboratories that use complementary techniques: destructive methods and radiochemical analyses, photo-fission and active photonic interrogation, high-energy imaging systems, neutron interrogation, gamma spectroscopy, and active and passive imaging techniques. (J.S.)

  16. Considerations on Applying the Method for Assessing the Level of Safety at Work

    Directory of Open Access Journals (Sweden)

    Costica Bejinariu

    2017-07-01

    Full Text Available The application of the method for assessing the level of safety at work starts with a document that contains the cover page, a description of the company (name, location, core business, organizational chart, etc.), a description of the work system, a detailed list of its components, and a brief description of the assessment method. It continues with a Microsoft Excel document, which represents the actual application of the method, and finally another document presenting conclusions, proposals, and prioritizations, which leads to the execution of the Prevention and Protection Plan. The present paper approaches the issue of developing the Microsoft Excel document, an essential part of the method for assessing the level of safety at work. The document is divided into a variable number of worksheets covering the general, specific, and management risk categories.

  17. Numerical analysis of the immersed boundary method applied to the flow around a forced oscillating cylinder

    International Nuclear Information System (INIS)

    Pinto, L C; Silvestrini, J H; Schettini, E B C

    2011-01-01

    In the present paper, the Navier-Stokes and continuity equations for incompressible flow around an oscillating cylinder were numerically solved. Sixth-order compact difference schemes were used for the spatial derivatives, while the time advance was carried out with a second-order accurate Adams-Bashforth scheme. To represent the obstacle in the flow, the Immersed Boundary Method was adopted; in this method a force term representing the body is added to the Navier-Stokes equations. The simulations present results for the hydrodynamic coefficients and vortex wakes in agreement with previous experimental and numerical works, and the physical lock-in phenomenon was identified. Comparing different ways of imposing the immersed boundary, no alterations of the vortex shedding mode were observed. The Immersed Boundary Method techniques used here can represent the surface of an oscillating cylinder in the flow.

  18. Review on applied foods and analyzed methods in identification testing of irradiated foods

    International Nuclear Information System (INIS)

    Kim, Kwang Hoon; Lee, Hoo Chul; Park, Sung Hyun; Kim, Soo Jin; Kim, Kwan Soo; Jeong, Il Yun; Lee, Ju Woon; Yook, Hong Sun

    2010-01-01

    Identification methods for irradiated foods have been adopted as official tests by the EU and Codex. The PSL, TL, ESR and GC/MS methods were registered in the Korean food code in 2009 and put into force as a control system for verifying food irradiation labelling. However, the most generally applicable PSL and TL methods specify applicable foods according to domestically approved items. Unlike these specifications, foods not permitted in Korea are included among the applicable items of the ESR and GC/MS methods. According to recent research data, numerous food groups could be effectively controlled through identification testing, and additional regulatory approval of irradiation for these items is called for. In particular, the prohibition of irradiation for meats and seafoods is not harmonized with international standards and acts as a source of trade friction and industrial restriction owing to unprepared domestic regulation. Hence, extending domestic legal permission for food irradiation can contribute to the development of the related industries, reduce trade friction and enhance international competitiveness

  19. Applied Warfighter Ergonomics: A Research Method for Evaluating Military Individual Equipment

    National Research Council Canada - National Science Library

    Takagi, Koichi

    2005-01-01

    The objective of this research effort is to design and implement a laboratory and establish a research method focused on scientific evaluation of human factors considerations for military individual...

  20. Comparative advantages and limitations of the basic metrology methods applied to the characterization of nanomaterials.

    Science.gov (United States)

    Linkov, Pavel; Artemyev, Mikhail; Efimov, Anton E; Nabiev, Igor

    2013-10-07

    Fabrication of modern nanomaterials and nanostructures with specific functional properties is both scientifically promising and commercially profitable. The preparation and use of nanomaterials require adequate methods for the control and characterization of their size, shape, chemical composition, crystalline structure, energy levels, pathways and dynamics of physical and chemical processes during their fabrication and further use. In this review, we discuss different instrumental methods for the analysis and metrology of materials and evaluate their advantages and limitations at the nanolevel.

  1. Self-consistent field variational cellular method as applied to the band structure calculation of sodium

    International Nuclear Information System (INIS)

    Lino, A.T.; Takahashi, E.K.; Leite, J.R.; Ferraz, A.C.

    1988-01-01

    The band structure of metallic sodium is calculated, using for the first time the self-consistent field variational cellular method. In order to implement the self-consistency in the variational cellular theory, the crystal electronic charge density was calculated within the muffin-tin approximation. The comparison between our results and those derived from other calculations leads to the conclusion that the proposed self-consistent version of the variational cellular method is fast and accurate. (author) [pt

  2. Arctic Tides from GPS on sea-ice

    DEFF Research Database (Denmark)

    Kildegaard Rose, Stine; Skourup, Henriette; Forsberg, René

    2013-01-01

    The presence of sea-ice in the Arctic Ocean plays a significant role in the Arctic climate. Sea-ice dampens the ocean tide amplitude, with the result that global tidal models perform less accurately in the polar regions. This paper presents a kinematic processing of global positioning system (GPS....The results show coherence between the GPS buoy measurements and the tide model. Furthermore, we have proved that the reference ellipsoid of WGS84 can be interpolated to the tidal defined zero level by applying geophysical corrections to the GPS data.

  3. Assessment of Pansharpening Methods Applied to WorldView-2 Imagery Fusion

    Directory of Open Access Journals (Sweden)

    Hui Li

    2017-01-01

    Full Text Available Since WorldView-2 (WV-2) images are widely used in various fields, there is a high demand for high-quality pansharpened WV-2 images for different application purposes. With respect to the novelty of the WV-2 multispectral (MS) and panchromatic (PAN) bands, the performance of eight state-of-the-art pansharpening methods for WV-2 imagery, on six datasets from three WV-2 scenes, was assessed in this study using both quality indices and information indices, along with visual inspection. The normalized difference vegetation index, the normalized difference water index, and the morphological building index, which are widely used in applications related to land cover classification and the extraction of vegetation areas, buildings, and water bodies, were employed in this work to evaluate the performance of the different pansharpening methods in terms of information presentation ability. The experimental results show that the Haze- and Ratio-based method, adaptive Gram-Schmidt, and the Generalized Laplacian Pyramid (GLP) methods using the enhanced spectral distortion minimal model and the enhanced context-based decision model are good choices for producing fused WV-2 images used for image interpretation and the extraction of urban buildings. The two GLP-based methods are better choices than the other methods if the fused images are to be used for applications related to vegetation and water bodies.
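The normalized difference vegetation index used as an evaluation criterion here is computed per pixel from the near-infrared and red bands. A minimal sketch with invented reflectance values (real use would operate on whole band arrays):

```python
def ndvi(nir, red):
    # Normalized difference vegetation index, computed per pixel:
    # NDVI = (NIR - RED) / (NIR + RED), guarding against a zero sum.
    return [(n - r) / (n + r) if (n + r) else 0.0
            for n, r in zip(nir, red)]

# Two invented pixels: vegetated (high NIR) and bare (NIR == RED)
print(ndvi([0.5, 0.3], [0.1, 0.3]))
```

Comparing such an index computed from a pansharpened product against the one from the original MS bands is one way to quantify how much spectral information a fusion method preserves.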

  4. Sustainable Assessment of Aerosol Pollution Decrease Applying Multiple Attribute Decision-Making Methods

    Directory of Open Access Journals (Sweden)

    Audrius Čereška

    2016-06-01

    Full Text Available Air pollution with various materials, particularly with aerosols, increases with advances in technological development. This is a complicated global problem, and one of the priorities in achieving sustainable development is the reduction of harmful technological effects on the environment and human health; it is the responsibility of researchers to search for effective methods of reducing pollution. Reliable results can be obtained by combining the approaches used in various fields of science and technology. This paper aims to demonstrate the effectiveness of multiple attribute decision-making (MADM) methods in investigating and solving environmental pollution problems. The paper presents a study of the evaporation of a toxic liquid based on the MADM methods. A schematic view of the test setup is presented. The density, viscosity, and rate of the released vapor flow are measured, and the dependence of the variation of the solution concentration on its temperature is determined in the experimental study. The concentration of the hydrochloric acid solution (HAS) varies in the range from 28% to 34%, while the liquid is heated from 50 to 80 °C. The variations in the parameters are analyzed using the well-known VIKOR and COPRAS MADM methods. For determining the criteria weights, a new CILOS (Criterion Impact LOSs) method is used. The experimental results are arranged in priority order using the MADM methods. Based on the obtained data, the technological parameters of production ensuring minimum environmental pollution can be chosen.
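MADM methods such as VIKOR and COPRAS rank alternatives from a weighted decision matrix. The sketch below uses the simpler additive-weighting scheme, not the paper's VIKOR/COPRAS/CILOS implementations, and the decision matrix, weights, and criterion directions are invented:

```python
def weighted_scores(matrix, weights, benefit):
    # Simple additive weighting: min-max normalize each criterion
    # column, then form the weighted sum; benefit[j] is True where
    # larger raw values are better, False where smaller is better.
    cols = list(zip(*matrix))
    norm = []
    for j, col in enumerate(cols):
        cmin, cmax = min(col), max(col)
        span = (cmax - cmin) or 1.0          # avoid division by zero
        if benefit[j]:
            norm.append([(v - cmin) / span for v in col])
        else:
            norm.append([(cmax - v) / span for v in col])
    return [sum(weights[j] * norm[j][i] for j in range(len(weights)))
            for i in range(len(matrix))]

# Three hypothetical operating regimes scored on emission level
# (smaller is better) and throughput (larger is better)
scores = weighted_scores([[100, 0.8], [80, 0.6], [120, 0.9]],
                         weights=[0.5, 0.5], benefit=[False, True])
print(scores)  # the first regime ranks highest here
```

VIKOR and COPRAS differ mainly in how they aggregate the normalized, weighted criteria, but they consume the same kind of decision matrix.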

  5. Applying the Network Simulation Method for testing chaos in a resistively and capacitively shunted Josephson junction model

    Directory of Open Access Journals (Sweden)

    Fernando Gimeno Bellver

    Full Text Available In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. This numerical approach establishes a formal equivalence between physical transport processes and electrical networks, and hence it can be applied to deal efficiently with a wide range of differential systems. The generality underlying that electrical equivalence makes it possible to apply circuit theory to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes, and the calculations have been carried out in PSpice, an electrical circuit software package. Overall, this numerical approach allows Josephson differential models to be solved quickly. An empirical application regarding the study of the Josephson model completes the paper. Keywords: Electrical analogy, Network Simulation Method, Josephson junction, Chaos indicator, Fast Fourier Transform
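The FFT-based chaos test looks at the shape of the response spectrum: a periodic regime concentrates power in sharp spectral lines, while chaos produces a broadband continuum. A minimal sketch of the spectral step, using NumPy in place of PSpice and a hypothetical periodic signal, is:

```python
import numpy as np

fs = 1024                             # sampling frequency (Hz)
t = np.arange(fs) / fs                # 1 s of samples
signal = np.sin(2 * np.pi * 5 * t)    # stand-in for a periodic junction voltage

# One-sided magnitude spectrum; a single sharp peak indicates a periodic
# (non-chaotic) regime, broadband energy would indicate chaos.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
peak_hz = freqs[np.argmax(spectrum)]  # dominant frequency of the response
```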

  6. The Effect of Medicine Knowledge on the Methods Applied for Lowering Blood Pressure in Patients with Hypertension

    OpenAIRE

    Belguzar Kara; Senay Uzun; Mehmet Yokusoglu; Mehmet Uzun

    2009-01-01

    AIM: The aim of this study was to determine the effect of medicine knowledge on the methods applied for lowering blood pressure among patients with hypertension. METHODS: This cross-sectional study was conducted between February 1 and April 30, 2006. The sample of the study consisted of 77 patients who had been admitted to the Gulhane Military Medical Academy Cardiology Outpatient Clinic with a diagnosis of hypertension. The data were collected by using a questionnaire designed by the investig...

  7. Least square methods and covariance matrix applied to the relative efficiency calibration of a Ge(Li) detector

    International Nuclear Information System (INIS)

    Geraldo, L.P.; Smith, D.L.

    1989-01-01

    The methodology of covariance matrices and least squares methods has been applied to the relative efficiency calibration of a Ge(Li) detector. Procedures employed to generate, manipulate and test covariance matrices which serve to properly represent uncertainties of experimental data are discussed. Calibration data fitting using least squares methods has been performed for a particular experimental data set. (author) [pt

  8. Finite difference applied to the reconstruction method of the nuclear power density distribution

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2016-01-01

    Highlights: • A method for reconstruction of the power density distribution is presented. • The method uses discretization by finite differences of the 2D neutron diffusion equation. • The discretization is performed on homogeneous meshes with the dimensions of a fuel cell. • The discretization is combined with flux distributions on the four node surfaces. • The maximum errors in reconstruction occur in the peripheral water region. - Abstract: In this reconstruction method the two-dimensional (2D) neutron diffusion equation is discretized by finite differences, applied to two energy groups (2G) and meshes with fuel-pin cell dimensions. The Nodal Expansion Method (NEM) makes use of surface discontinuity factors of the node and provides the reconstruction method with the effective multiplication factor of the problem and the four surface average fluxes in homogeneous nodes with the size of a fuel assembly (FA). The reconstruction process combines the discretized 2D diffusion equation by finite differences with the flux distributions on the four surfaces of the nodes. These distributions are obtained for each surface from a fourth-order one-dimensional (1D) polynomial expansion with five coefficients to be determined. The conditions necessary for determining the coefficients are three average fluxes on consecutive surfaces of the three nodes and two fluxes at the corners between these three surface fluxes. Corner fluxes of the node are determined using a third-order 1D polynomial expansion with four coefficients. This reconstruction method uses heterogeneous nuclear parameters directly, providing the heterogeneous neutron flux distribution and the detailed nuclear power density distribution within the FAs. The results obtained with this method have good accuracy and efficiency when compared with reference values.

  9. Arctic action against climatic changes

    International Nuclear Information System (INIS)

    Njaastad, Birgit

    2000-01-01

    The article describes efforts to map the climatic changes in the Arctic regions through the Arctic Climate Impact Assessment project, a joint venture between eight Arctic countries: Denmark, Canada, the USA, Russia, Finland, Iceland, Sweden and Norway. The project deals with the consequences of the changes, such as increased UV radiation due to the diminishing ozone layer. The aims are to evaluate and integrate existing knowledge in the field, to evaluate and predict the consequences, particularly for the environment, both in the present and the future, and to produce reliable and useful information in order to aid decision-making processes

  10. Development of safety evaluation methods applied to the safety regulations for the operation stage of fast breeder reactor

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-08-15

    The purpose of this study is to establish the safety evaluation methods needed in the operation stage of a fast breeder reactor (FBR). In line with this purpose, investigation studies of the technical standards applied to Monju were carried out in JFY 2012. (author)

  11. Aiming for the Singing Teacher: An Applied Study on Preservice Kindergarten Teachers' Singing Skills Development within a Music Methods Course

    Science.gov (United States)

    Neokleous, Rania

    2015-01-01

    This study examined the effects of a music methods course offered at a Cypriot university on the singing skills of 33 female preservice kindergarten teachers. To systematically measure and analyze student progress, the research design was both experimental and descriptive. As an applied study which was carried out "in situ," the normal…

  12. A Method for Evaluation and Comparison of Parallel Robots for Safe Human Interaction, Applied to Robotic TMS

    NARCIS (Netherlands)

    de Jong, Jan Johannes; Stienen, Arno; van der Wijk, V.; Wessels, Martijn; van der Kooij, Herman

    2012-01-01

    Transcranial magnetic stimulation (TMS) is a noninvasive method to modify behaviour of neurons in the brain. TMS is applied by running large currents through a coil close to the scalp. For consistent results it is required to maintain the coil position within millimetres of the targeted location,

  13. GLYCOHEMOGLOBIN - COMPARISON OF 12 ANALYTICAL METHODS, APPLIED TO LYOPHILIZED HEMOLYSATES BY 101 LABORATORIES IN AN EXTERNAL QUALITY ASSURANCE PROGRAM

    NARCIS (Netherlands)

    WEYKAMP, CW; PENDERS, TJ; MUSKIET, FAJ; VANDERSLIK, W

    Stable lyophilized ethylenediaminetetra-acetic acid (EDTA)-blood haemolysates were applied in an external quality assurance programme (SKZL, The Netherlands) for glycohaemoglobin assays in 101 laboratories using 12 methods. The mean intralaboratory day-to-day coefficient of variation (CV),

  14. Whole-Genome Regression and Prediction Methods Applied to Plant and Animal Breeding

    Science.gov (United States)

    de los Campos, Gustavo; Hickey, John M.; Pong-Wong, Ricardo; Daetwyler, Hans D.; Calus, Mario P. L.

    2013-01-01

    Genomic-enabled prediction is becoming increasingly important in animal and plant breeding and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of markers concurrently. Methods exist that allow implementing these large-p with small-n regressions, and genome-enabled selection (GS) is being implemented in several plant and animal breeding programs. The list of available methods is long, and the relationships between them have not been fully addressed. In this article we provide an overview of available methods for implementing parametric WGR models, discuss selected topics that emerge in applications, and present a general discussion of lessons learned from simulation and empirical data analysis in the last decade. PMID:22745228
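As one illustration of a parametric WGR model of the kind this review covers, ridge regression can be fitted in the large-p with small-n setting through the n × n dual system rather than a p × p inverse. The sketch below uses simulated marker data, not any dataset from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 500                        # many more markers than phenotypes
X = rng.standard_normal((n, p))       # hypothetical centred marker genotypes
beta_true = np.zeros(p)
beta_true[:5] = 1.0                   # a few causal markers
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def ridge(X, y, lam):
    """Ridge solution via the n x n dual system: beta = X'(XX' + lam*I)^-1 y,
    which avoids forming the p x p matrix X'X when p >> n."""
    n = X.shape[0]
    alpha = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
    return X.T @ alpha

beta_hat = ridge(X, y, lam=1.0)       # one coefficient per marker
y_hat = X @ beta_hat                  # in-sample genomic prediction
```

Increasing the penalty shrinks the marker effects toward zero, which is the mechanism that makes these regressions tractable when markers outnumber records.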

  15. A non overlapping parallel domain decomposition method applied to the simplified transport equations

    International Nuclear Information System (INIS)

    Lathuiliere, B.; Barrault, M.; Ramet, P.; Roman, J.

    2009-01-01

    A reactivity computation requires computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine models are difficult to tackle with our sequential solver, based on the simplified transport equations, in terms of memory consumption and computational time. We therefore propose a non-overlapping domain decomposition method for the approximate resolution of the linear system to be solved at each inverse power iteration. Our method requires a low development effort, as the inner multigroup solver can be re-used without modification, and allows us to adapt the numerical resolution locally (mesh, finite element order). Numerical results are obtained by a parallel implementation of the method on two different cases with a pin-by-pin discretization. These results are analyzed in terms of memory consumption and parallel efficiency. (authors)

  16. A combined approach of AHP and TOPSIS methods applied in the field of integrated software systems

    Science.gov (United States)

    Berdie, A. D.; Osaci, M.; Muscalagiu, I.; Barz, C.

    2017-05-01

    Adopting the most appropriate technology for developing applications on an integrated software system for enterprises may result in great savings both in cost and in hours of work. This paper proposes a research study for the determination of a hierarchy between three SAP (System Applications and Products in Data Processing) technologies. The technologies Web Dynpro - WD, Floorplan Manager - FPM and CRM WebClient UI - CRM WCUI are evaluated on multiple criteria in terms of the performance obtained through the implementation of the same web business application. To establish the hierarchy, a multi-criteria analysis model that combines the AHP (Analytic Hierarchy Process) and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) methods was proposed. This model was built with the help of the SuperDecision software, which is based on the AHP method and determines the weights for the selected sets of criteria. The TOPSIS method was used to obtain the final ranking and the technology hierarchy.
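The TOPSIS step used for the final ranking can be sketched as follows; the decision matrix and weights are hypothetical stand-ins, not the SAP evaluation data of the study, and all criteria are treated as benefit criteria:

```python
import numpy as np

def topsis(matrix, weights):
    """Rank alternatives (rows) on benefit criteria (columns) by relative
    closeness to the ideal solution; returns closeness coefficients in [0, 1]."""
    M = np.asarray(matrix, dtype=float)
    M = M / np.linalg.norm(M, axis=0)           # vector normalisation per criterion
    V = M * weights                             # weighted normalised matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)  # best / worst value per criterion
    d_plus = np.linalg.norm(V - ideal, axis=1)  # distance to the ideal
    d_minus = np.linalg.norm(V - anti, axis=1)  # distance to the anti-ideal
    return d_minus / (d_plus + d_minus)

# Three hypothetical technologies scored on two benefit criteria.
scores = [[1.0, 2.0],
          [2.0, 4.0],
          [3.0, 6.0]]
closeness = topsis(scores, weights=np.array([0.5, 0.5]))
ranking = np.argsort(closeness)[::-1]   # indices of alternatives, best first
```

In the combined model, the AHP stage would supply the `weights` vector before this ranking step is run.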

  17. Applying Process Improvement Methods to Clinical and Translational Research: Conceptual Framework and Case Examples.

    Science.gov (United States)

    Daudelin, Denise H; Selker, Harry P; Leslie, Laurel K

    2015-12-01

    There is growing appreciation that process improvement holds promise for improving quality and efficiency across the translational research continuum but frameworks for such programs are not often described. The purpose of this paper is to present a framework and case examples of a Research Process Improvement Program implemented at Tufts CTSI. To promote research process improvement, we developed online training seminars, workshops, and in-person consultation models to describe core process improvement principles and methods, demonstrate the use of improvement tools, and illustrate the application of these methods in case examples. We implemented these methods, as well as relational coordination theory, with junior researchers, pilot funding awardees, our CTRC, and CTSI resource and service providers. The program focuses on capacity building to address common process problems and quality gaps that threaten the efficient, timely and successful completion of clinical and translational studies. © 2015 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc.

  18. The Contribution to Arctic Climate Change from Countries in the Arctic Council

    Science.gov (United States)

    Schultz, T.; MacCracken, M. C.

    2013-12-01

    The conventional accounting frameworks for greenhouse gas (GHG) emissions used today, established under the Kyoto Protocol 25 years ago, exclude short lived climate pollutants (SLCPs), and do not include regional effects on the climate. However, advances in climate science now suggest that mitigation of SLCPs can reduce up to 50% of global warming by 2050. It has also become apparent that regions such as the Arctic have experienced a much greater degree of anthropogenic warming than the globe as a whole, and that efforts to slow this warming could benefit the larger effort to slow climate change around the globe. A draft standard for life cycle assessment (LCA), LEO-SCS-002, being developed under the American National Standards Institute process, has integrated the most recent climate science into a unified framework to account for emissions of all radiatively significant GHGs and SLCPs. This framework recognizes four distinct impacts to the oceans and climate caused by GHGs and SLCPs: Global Climate Change; Arctic Climate Change; Ocean Acidification; and Ocean Warming. The accounting for Arctic Climate Change, the subject of this poster, is based upon the Absolute Regional Temperature Potential, which considers the incremental change to the Arctic surface temperature resulting from an emission of a GHG or SLCP. Results are evaluated using units of mass of carbon dioxide equivalent (CO2e), which can be used by a broad array of stakeholders, including scientists, consumers, policy makers, and NGOs. This poster considers the contribution to Arctic Climate Change from emissions of GHGs and SLCPs from the eight member countries of the Arctic Council; the United States, Canada, Russia, Denmark, Finland, Iceland, Norway, and Sweden. Of this group of countries, the United States was the largest contributor to Arctic Climate Change in 2011, emitting 9600 MMT CO2e. 
This includes a gross warming of 11200 MMT CO2e (caused by GHGs, black and brown carbon, and warming effects

  19. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    Directory of Open Access Journals (Sweden)

    Darren Kidney

    Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. 
We anticipate that the low-tech field requirements will

  20. Analysis of Steel Wire Rope Diagnostic Data Applying Multi-Criteria Methods

    Directory of Open Access Journals (Sweden)

    Audrius Čereška

    2018-02-01

    Full Text Available Steel ropes are complex flexible structures used in many technical applications, such as elevators, cable cars, and funicular cabs. Due to their specific design and critical safety requirements, the diagnostics of ropes remains an important issue. The number of broken wires in steel ropes is limited by safety standards when the ropes are used in human lifting and carrying installations. There are some practical issues with loose wires: firstly, they signal the end of the lifetime of the entire rope, independently of wear, lubrication or wrong winding on the drums or through pulleys; and, secondly, they can stick in tight pulley-support gaps and cause deterioration of the rope structure up to birdcage formations. Normal rope operation should not generate broken wires, so an increase in their number indicates a need for rope installation maintenance. This paper presents a methodology of steel rope diagnostics and the results of analysis using multi-criteria analysis methods. The experimental part of the research was performed using an original test bench to detect broken wires on the rope surface by its vibrations. Diagnostics was performed in the range of frequencies from 60 to 560 Hz with a pitch of 50 Hz. The significant outcome was that the amplitudes of the broken wire vibrations differed from the vibration parameters of the intact rope surface. Later analysis of the experimental results revealed the most significant values of the diagnostic parameters. The evaluation of the power of the diagnostics was implemented using multi-criteria decision-making (MCDM) methods. Various decision-making methods are necessary because their efficiencies with respect to the physical phenomena of the evaluated processes are unknown. The significance of the methods was evaluated using objective methods based on the structure of the presented data. Some of these methods were proposed by the authors of this paper. Implementation of MCDM in diagnostic data analysis and definition of the

  1. Methods of economic analysis applied to fusion research. Fourth annual report

    International Nuclear Information System (INIS)

    Hazelrigg, G.A. Jr.

    1980-01-01

    The current study reported here has involved three separate tasks. The first task deals with the development of expected utility analysis techniques for economic evaluation of fusion research. A decision analytic model is developed for the incorporation of market uncertainties, as well as technological uncertainties in an economic evaluation of long-range energy research. The model is applied to the case of fusion research. The second task deals with the potential effects of long-range energy RD and D on fossil fuel prices. ECON's previous fossil fuel price model is extended to incorporate a dynamic demand function. The dynamic demand function supports price fluctuations such as those observed in the marketplace. The third task examines alternative uses of fusion technologies, specifically superconducting technologies and first wall materials to determine the potential for alternative, nonfusion use of these technologies. In both cases, numerous alternative uses are found

  2. Microbeam high-resolution diffraction and x-ray standing wave methods applied to semiconductor structures

    International Nuclear Information System (INIS)

    Kazimirov, A; Bilderback, D H; Huang, R; Sirenko, A; Ougazzaden, A

    2004-01-01

    A new approach to conditioning x-ray microbeams for high angular resolution x-ray diffraction and scattering techniques is introduced. We combined focusing optics (a one-bounce imaging capillary) and post-focusing collimating optics (a miniature Si(004) channel-cut crystal) to generate an x-ray microbeam with a size of 10 μm and an ultimate angular resolution of 14 μrad. The microbeam was used to analyse the strain in sub-micron thick InGaAsP epitaxial layers grown on an InP(100) substrate by the selective area growth technique in narrow openings between the oxide stripes. For structures in which the diffraction peaks from the substrate and the film overlap, the x-ray standing wave technique was applied for precise measurements of the strain with a Δd/d resolution of better than 10⁻⁴. (rapid communication)

  3. Properties of the Feynman-alpha method applied to accelerator-driven subcritical systems.

    Science.gov (United States)

    Taczanowski, S; Domanska, G; Kopec, M; Janczyszyn, J

    2005-01-01

    A Monte Carlo study of the Feynman-alpha method has been done with a simple code simulating the multiplication chain, confined to pertinent time-dependent phenomena. The significance of its key parameters (detector efficiency and dead time, k-source and spallation neutron multiplicities, required number of fissions, etc.) has been discussed. It has been demonstrated that this method can be insensitive to the properties of the zones surrounding the core, whereas it is strongly affected by the detector dead time. In turn, the influence of harmonics in the neutron field and of the dispersion of spallation neutrons has proven much less pronounced.
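The Feynman-alpha method rests on the variance-to-mean ratio of detector counts collected in equal time gates: the excess of this ratio over the Poisson value of 1 (the Feynman Y statistic) reveals correlated fission chains. A minimal sketch with hypothetical gate counts:

```python
import numpy as np

def feynman_y(counts):
    """Feynman Y statistic: excess of the variance-to-mean ratio of gate
    counts over 1, the value expected for an uncorrelated Poisson source."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean() - 1.0

# Hypothetical gate counts: bursty (correlated chains) vs. nearly flat.
bursty = [0, 9, 0, 8, 1, 10, 0, 8]
flat = [4, 5, 4, 5, 4, 5, 4, 5]
y_bursty = feynman_y(bursty)   # well above 0: correlated multiplication
y_flat = feynman_y(flat)       # near or below 0: no excess correlation
```

In a real measurement, Y is evaluated as a function of gate width and fitted to extract the prompt neutron decay constant; detector dead time, as the abstract notes, distorts the counts that enter this statistic.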

  4. Lagrange polynomial interpolation method applied in the calculation of the J(ξ,β) function

    Energy Technology Data Exchange (ETDEWEB)

    Fraga, Vinicius Munhoz; Palma, Daniel Artur Pinheiro [Centro Federal de Educacao Tecnologica de Quimica de Nilopolis, RJ (Brazil)]. E-mails: munhoz.vf@gmail.com; dpalma@cefeteq.br; Martinez, Aquilino Senra [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br

    2008-07-01

    The explicit dependence of the Doppler broadening function creates difficulties in obtaining an analytical expression for the J function. The objective of this paper is to present a method for the quick and accurate calculation of the J function based on recent advances in the calculation of the Doppler broadening function and on a systematic analysis of its integrand. The proposed methodology, of a semi-analytical nature, uses the Lagrange polynomial interpolation method and the Frobenius formulation in the calculation of the Doppler broadening function. The results have proven satisfactory from the standpoint of accuracy and processing time. (author)

  5. Lagrange polynomial interpolation method applied in the calculation of the J(ξ,β) function

    International Nuclear Information System (INIS)

    Fraga, Vinicius Munhoz; Palma, Daniel Artur Pinheiro; Martinez, Aquilino Senra

    2008-01-01

    The explicit dependence of the Doppler broadening function creates difficulties in obtaining an analytical expression for the J function. The objective of this paper is to present a method for the quick and accurate calculation of the J function based on recent advances in the calculation of the Doppler broadening function and on a systematic analysis of its integrand. The proposed methodology, of a semi-analytical nature, uses the Lagrange polynomial interpolation method and the Frobenius formulation in the calculation of the Doppler broadening function. The results have proven satisfactory from the standpoint of accuracy and processing time. (author)
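The Lagrange interpolation step on which the method relies has the standard closed form P(x) = Σᵢ yᵢ Πⱼ≠ᵢ (x − xⱼ)/(xᵢ − xⱼ). A generic pure-Python sketch (not the authors' specific J(ξ,β) tabulation) is:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate at x the Lagrange polynomial through the points (xs[i], ys[i]):
    P(x) = sum_i ys[i] * prod_{j != i} (x - xs[j]) / (xs[i] - xs[j])."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# Three nodes of f(x) = x**2 reproduce the quadratic exactly at x = 1.5.
value = lagrange_interpolate([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)  # 2.25
```

A polynomial through n + 1 nodes reproduces any polynomial of degree ≤ n exactly, which is why a handful of well-chosen tabulation points suffices for fast evaluation of smooth functions such as the integrand discussed above.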

  6. Applying the chronicle workshop as a method for evaluating participatory interventions

    DEFF Research Database (Denmark)

    Poulsen, Signe; Ipsen, Christine; Gish, Liv

    2015-01-01

    Despite the growing interest for process evaluation in participatory interventions, studies examining specific methods for process evaluation are lacking. In this paper, we propose a new method for process evaluation – the chronicle workshop. The chronicle workshop has not previously been used...... productivity and well-being. In all cases, we saw that the chronicle workshop gave valuable information about the intervention process and that it initiated a joint reflection among participants from different departments. The chronicle workshop makes it possible to better understand the results...

  7. Current Methods Applied to Biomaterials - Characterization Approaches, Safety Assessment and Biological International Standards.

    Science.gov (United States)

    Oliveira, Justine P R; Ortiz, H Ivan Melendez; Bucio, Emilio; Alves, Patricia Terra; Lima, Mayara Ingrid Sousa; Goulart, Luiz Ricardo; Mathor, Monica B; Varca, Gustavo H C; Lugao, Ademar B

    2018-04-10

    Safety and biocompatibility assessment of biomaterials are themes of constant concern as advanced materials enter the market and products manufactured by new techniques emerge. Within this context, this review provides an up-to-date approach to current methods for the characterization and safety assessment of biomaterials and biomedical devices from a physical-chemical to a biological perspective, including a description of the alternative methods in accordance with current and established international standards. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  8. A combined evidence Bayesian method for human ancestry inference applied to Afro-Colombians.

    Science.gov (United States)

    Rishishwar, Lavanya; Conley, Andrew B; Vidakovic, Brani; Jordan, I King

    2015-12-15

    Uniparental genetic markers, mitochondrial DNA (mtDNA) and Y chromosomal DNA, are widely used for the inference of human ancestry. However, the resolution of ancestral origins based on mtDNA haplotypes is limited by the fact that such haplotypes are often found to be distributed across wide geographical regions. We have addressed this issue here by combining two sources of ancestry information that have typically been considered separately: historical records regarding population origins and genetic information on mtDNA haplotypes. To combine these distinct data sources, we applied a Bayesian approach that considers historical records, in the form of prior probabilities, together with data on the geographical distribution of mtDNA haplotypes, formulated as likelihoods, to yield ancestry assignments from posterior probabilities. This combined evidence Bayesian approach to ancestry assignment was evaluated for its ability to accurately assign sub-continental African ancestral origins to Afro-Colombians based on their mtDNA haplotypes. We demonstrate that the incorporation of historical prior probabilities via this analytical framework can provide for substantially increased resolution in sub-continental African ancestry assignment for members of this population. In addition, a personalized approach to ancestry assignment that involves the tuning of priors to individual mtDNA haplotypes yields even greater resolution for individual ancestry assignment. Despite the fact that Colombia has a large population of Afro-descendants, the ancestry of this community has been understudied relative to populations with primarily European and Native American ancestry. Thus, the application of the kind of combined evidence approach developed here to the study of ancestry in the Afro-Colombian population has the potential to be impactful. The formal Bayesian analytical framework we propose for combining historical and genetic information also has the potential to be widely applied
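The combined-evidence step (a posterior proportional to the historical prior times the haplotype likelihood) can be sketched with hypothetical numbers; the regions and frequencies below are illustrative, not data from the study:

```python
def posterior(priors, likelihoods):
    """Combine historical priors with mtDNA haplotype likelihoods:
    P(region | haplotype) is proportional to P(region) * P(haplotype | region)."""
    unnorm = {r: priors[r] * likelihoods[r] for r in priors}
    total = sum(unnorm.values())
    return {r: v / total for r, v in unnorm.items()}

# Hypothetical case: historical records favour region A, but the observed
# haplotype is far more common in region B.
priors = {"A": 0.7, "B": 0.3}        # from historical records
likelihoods = {"A": 0.1, "B": 0.4}   # haplotype frequency in each region
post = posterior(priors, likelihoods)
```

Here the genetic evidence overturns the historical prior, illustrating how the two data sources jointly sharpen an ancestry assignment that neither could resolve alone.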

  9. Method to characterize directional changes in Arctic sea ice drift and associated deformation due to synoptic atmospheric variations using Lagrangian dispersion statistics

    Directory of Open Access Journals (Sweden)

    J. V. Lukovich

    2017-07-01

    Full Text Available A framework is developed to assess the directional changes in sea ice drift paths and associated deformation processes in response to atmospheric forcing. The framework is based on Lagrangian statistical analyses leveraging particle dispersion theory, which tells us whether ice drift is in a subdiffusive, diffusive, ballistic, or superdiffusive dynamical regime using single-particle (absolute) dispersion statistics. In terms of sea ice deformation, the framework uses two- and three-particle dispersion to characterize along- and across-shear transport as well as differential kinematic parameters. The approach is tested with GPS beacons deployed in triplets on sea ice in the southern Beaufort Sea at varying distances from the coastline in fall of 2009, with eight individual events characterized. One transition in particular follows the sea level pressure (SLP) high on 8 October 2009, while the sea ice drift was in a superdiffusive dynamic regime. In this case, the dispersion scaling exponent (the slope between the single-particle absolute dispersion of sea ice drift and elapsed time) changed from superdiffusive (α ∼ 3) to ballistic (α ∼ 2) as the SLP was rounding its maximum pressure value. Following this shift between regimes, there was a loss in synchronicity between sea ice drift and atmospheric motion patterns. While this is only one case study, the outcomes suggest similar studies be conducted on more buoy arrays to test momentum transfer linkages between storms and sea ice responses as a function of dispersion regime states using scaling exponents. The tools and framework developed in this study provide a unique characterization technique to evaluate these states with respect to sea ice processes in general. Application of these techniques can aid ice hazard assessments and weather forecasting in support of marine transportation and indigenous use of near-shore Arctic areas.
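The dispersion scaling exponent described above is the log-log slope of single-particle absolute dispersion against elapsed time. A minimal NumPy sketch with hypothetical ballistic trajectories (for which α should come out near 2) is:

```python
import numpy as np

def scaling_exponent(times, displacements):
    """Slope of log A2(t) versus log t, where A2(t) = <|x(t) - x(0)|^2> is the
    single-particle absolute dispersion averaged over trajectories (rows of
    `displacements`, which hold x(t) - x(0) at the given times)."""
    a2 = np.mean(displacements ** 2, axis=0)
    alpha, _ = np.polyfit(np.log(times), np.log(a2), 1)
    return alpha

# Hypothetical ballistic drift: displacement x - x0 = v * t per trajectory,
# so A2(t) grows like t**2 and the exponent is 2.
t = np.arange(1.0, 51.0)
speeds = np.array([[0.8], [1.0], [1.2]])
x = speeds * t                       # shape (trajectories, times)
alpha = scaling_exponent(t, x)       # ≈ 2 in the ballistic regime
```

Exponents near 1 would instead indicate diffusive drift, and values above 2 the superdiffusive regime discussed in the abstract.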

  10. Climate-derived tensions in Arctic security.

    Energy Technology Data Exchange (ETDEWEB)

    Backus, George A.; Strickland, James Hassler

    2008-09-01

    Globally, there is no lack of security threats. Many of them demand priority engagement and there can never be adequate resources to address all threats. In this context, climate is just another aspect of global security and the Arctic just another region. In light of physical and budgetary constraints, new security needs must be integrated and prioritized with existing ones. This discussion approaches the security impacts of climate from that perspective, starting with the broad security picture and establishing how climate may affect it. This method provides a different view from one that starts with climate and projects it, in isolation, as the source of a hypothetical security burden. That said, the Arctic does appear to present high-priority security challenges. Uncertainty in the timing of an ice-free Arctic affects how quickly it will become a security priority. Uncertainty in the emergent extreme and variable weather conditions will determine the difficulty (cost) of maintaining adequate security (order) in the area. The resolution of sovereignty boundaries affects the ability to enforce security measures, and the U.S. will most probably need a military presence to back-up negotiated sovereignty agreements. Without additional global warming, technology already allows the Arctic to become a strategic link in the global supply chain, possibly with northern Russia as its main hub. Additionally, the multinational corporations reaping the economic bounty may affect security tensions more than nation-states themselves. Countries will depend ever more heavily on the global supply chains. China has particular needs to protect its trade flows. In matters of security, nation-state and multinational-corporate interests will become heavily intertwined.

  11. Squaring the Arctic Circle: connecting Arctic knowledge with societal needs

    Science.gov (United States)

    Wilkinson, J.

    2017-12-01

    Over the coming years the landscape of the Arctic will change substantially: environmentally, politically, and economically. Furthermore, Arctic change has the potential to significantly impact Arctic and non-Arctic countries alike. Thus, our science is in demand by local communities, politicians, industry leaders, and the public. During these times of transition it is essential that the links between science and society be strengthened further. Strong links between science and society are exactly what is needed for the development of better decision-making tools to support sustainable development, enable adaptation to climate change, provide the information necessary for improved management of assets and operations in the Arctic region, and inform scientific, economic, environmental, and societal policies. By doing so, tangible benefits will flow to Arctic societies, as well as to non-Arctic countries that will be significantly affected by climate change. Past experience has shown that engagement with a broad range of stakeholders is not always an easy process. Consequently, we need to improve collaborative opportunities between scientists, indigenous/local communities, the private sector, policy makers, NGOs, and other relevant stakeholders. The development of best practices in this area must build on the collective experiences of successful cross-sectorial programmes. Within this session we present some of the outreach work we have performed within the EU programme ICE-ARC, from community meetings in NW Greenland through to sessions at the United Nations Framework Convention on Climate Change COP conferences, industry round tables, and an Arctic side event at the World Economic Forum in Davos.

  12. Arctic sea-ice syntheses: Charting across scope, scale, and knowledge systems

    Science.gov (United States)

    Druckenmiller, M. L.; Perovich, D. K.; Francis, J. A.

    2017-12-01

    Arctic sea ice supports and intersects a multitude of societal benefit areas, including regulating regional and global climates, structuring marine food webs, supporting traditional food provisioning by indigenous peoples, and constraining marine shipping and access. At the same time, sea ice is one of the most rapidly changing elements of the Arctic environment and serves as a source of key physical indicators for monitoring Arctic change. Long before the present scientific interest in Arctic sea ice for climate research, it was a focus of applied research for industry and national security, and it remains so. For generations, the icy coastal seas of the North have also provided a basis for the sharing of local and indigenous knowledge between Arctic residents and researchers, including anthropologists, biologists, and geoscientists. This presentation will summarize an ongoing review of existing synthesis studies of Arctic sea ice. We will chart efforts to achieve system-level understanding across geography, temporal scales, and the ecosystem services that Arctic sea ice supports. In doing so, we aim to illuminate the role of interdisciplinary science, together with local and indigenous experts, in advancing knowledge of the roles of sea ice in the Arctic system and beyond; to reveal the historical and scientific evolution of sea-ice research; and to assess current gaps in system-scale understanding.

  13. Using Module Analysis for Multiple Choice Responses: A New Method Applied to Force Concept Inventory Data

    Science.gov (United States)

    Brewe, Eric; Bruun, Jesper; Bearden, Ian G.

    2016-01-01

    We describe "Module Analysis for Multiple Choice Responses" (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual…

  14. A Guide on Spectral Methods Applied to Discrete Data in One Dimension

    Directory of Open Access Journals (Sweden)

    Martin Seilmayer

    2017-01-01

    This paper provides an overview of the Fourier transform and its related methods, focusing on the subtleties to which users must pay attention. Typical questions that are often asked of data are discussed, such as the origin of frequency or band limitation of a signal, or the sources of artifacts when a Fourier transform is carried out. Another topic is the processing of fragmented data; here, the Lomb-Scargle method is explained with an illustrative example of how to deal with this special type of signal. Also of interest is time-dependent spectral analysis, with which one can evaluate the point in time at which a certain frequency appears in a signal. The goal of this paper is to collect the important information about these common methods and give the reader a guide on how to apply them to one-dimensional data. The introduced methods are supported by the spectral package, which was published for the statistical environment R prior to this article.
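
    As a minimal sketch of the Lomb-Scargle idea for unevenly sampled (fragmented) data, the following uses SciPy's `scipy.signal.lombscargle` rather than the R `spectral` package the paper describes; the sample times, test frequency, and noise level are invented for illustration:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Fragmented, unevenly sampled signal: a plain FFT is not applicable here.
t = np.sort(rng.uniform(0.0, 100.0, 400))       # irregular sample times [s]
f_true = 0.7                                    # hypothetical frequency [Hz]
y = np.sin(2 * np.pi * f_true * t) + 0.3 * rng.normal(size=t.size)
y -= y.mean()                                   # lombscargle expects zero-mean data

freqs = np.linspace(0.05, 2.0, 1000)            # trial frequencies [Hz], > 0
pgram = lombscargle(t, y, 2 * np.pi * freqs)    # takes *angular* frequencies
f_peak = freqs[np.argmax(pgram)]                # periodogram peak near f_true
```

    Note the factor 2π: `lombscargle` works in angular frequency, a common source of error when comparing its output against ordinary-frequency axes.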

  15. Sustainability Assessment of Power Generation Systems by Applying Exergy Analysis and LCA Methods

    NARCIS (Netherlands)

    Stougie, L.; van der Kooi, H.J.; Valero Delgado, A.

    2015-01-01

    The selection of power generation systems is important when striving for a more sustainable society. However, the results of environmental, economic and social sustainability assessments are subject to new insights into the calculation methods and to changing needs, economic conditions and societal

  16. Morphological and chemical changes of dentin after applying different sterilization methods

    Directory of Open Access Journals (Sweden)

    Cláudio Antonio Talge Carvalho

    Aim: The present study evaluated the morphological and chemical changes of dentin produced by different sterilization methods, using scanning electron microscopy (SEM) and energy-dispersive X-ray spectrometry (EDS) analysis. Material and method: Five human teeth were sectioned into 4 samples, each divided into 3 specimens. The specimens were separated into sterilization groups as follows: wet heat under pressure (autoclave); cobalt-60 gamma radiation; and control (without sterilization). After sterilization, the 60 specimens were analyzed by SEM under 3 magnifications: 1500X, 5000X, and 10000X. The images were analyzed by 3 calibrated examiners, who assigned scores according to the changes observed in the dentinal tubules: 0 = no morphological change; 1, 2, and 3 = slight, medium, and complete obliteration of the dentinal tubules. The chemical composition of dentin was assessed by EDS, with 15 kV incidence and 1 μm penetration. Result: The data obtained were submitted to Kruskal-Wallis and ANOVA statistical tests. Neither sterilization method (autoclave or cobalt-60 gamma radiation) produced significant changes to the morphology of the dentinal tubules or to the chemical composition of dentin. Conclusion: Both methods may thus be used to sterilize teeth for research conducted in vitro.
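
    The group comparison described above can be sketched with SciPy's Kruskal-Wallis test. The scores below are invented placeholders on the 0-3 obliteration scale, not the study's data:

```python
from scipy.stats import kruskal

# Hypothetical examiner scores (0 = no change ... 3 = complete obliteration)
# for each sterilization group; values are illustrative only.
autoclave = [0, 1, 0, 1, 0, 0, 1, 0]
gamma_60  = [0, 0, 1, 0, 1, 0, 0, 1]
control   = [0, 0, 0, 1, 0, 0, 1, 0]

# Kruskal-Wallis is appropriate here: the scores are ordinal, not interval data.
H, p = kruskal(autoclave, gamma_60, control)
# A non-significant result (p > 0.05) is consistent with the reported finding
# that neither method altered tubule morphology.
```
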

  17. Model-based acoustic substitution source methods for assessing shielding measures applied to trains

    NARCIS (Netherlands)

    Geerlings, A.C.; Thompson, D.J.; Verheij, J.W.

    2001-01-01

    A promising means of reducing the rolling noise from trains is local shielding in the form of vehicle-mounted shrouds combined with low trackside barriers. This is much less visually intrusive than classic lineside noise barriers. Various experimental methods have been proposed that allow the

  18. Systematic Convergence in Applying Variational Method to Double-Well Potential

    Science.gov (United States)

    Mei, Wai-Ning

    2016-01-01

    In this work, we demonstrate the application of the variational method by computing the ground- and first-excited state energies of a double-well potential. We start with the proper choice of the trial wave functions using optimized parameters, and notice that accurate expectation values in excellent agreement with the numerical results can be…
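
    A minimal numerical sketch of the variational idea, assuming a quartic double well V(x) = (x² − a²)² with ħ = m = 1 and a symmetric two-Gaussian trial function (the paper's actual potential and trial functions may differ): minimize the Rayleigh quotient over the trial parameters, then compare with exact diagonalization, which the variational estimate must bound from above.

```python
import numpy as np

# Illustrative double-well potential V(x) = (x^2 - a^2)^2 with hbar = m = 1.
a = 1.5
x = np.linspace(-6.0, 6.0, 1401)
dx = x[1] - x[0]
V = (x**2 - a**2)**2

def energy(psi):
    """Rayleigh quotient <psi|H|psi> / <psi|psi> on the grid."""
    dpsi = np.gradient(psi, dx)
    num = np.sum(0.5 * dpsi**2 + V * psi**2) * dx
    return num / (np.sum(psi**2) * dx)

# Trial wave function: symmetric pair of Gaussians at +/-c with width s;
# scan both parameters and keep the lowest energy found.
best = np.inf
for c in np.linspace(0.5, 2.5, 41):
    for s in np.linspace(0.2, 1.5, 40):
        psi = np.exp(-(x - c)**2 / (2 * s**2)) + np.exp(-(x + c)**2 / (2 * s**2))
        best = min(best, energy(psi))

# "Exact" ground state from diagonalizing the finite-difference Hamiltonian.
off = -0.5 / dx**2 * np.ones(x.size - 1)
H = np.diag(1.0 / dx**2 + V) + np.diag(off, 1) + np.diag(off, -1)
E0 = np.linalg.eigvalsh(H)[0]
# By the variational principle, best >= E0 (up to small discretization error).
```

    The first-excited (antisymmetric) state could be treated the same way by using the difference of the two Gaussians, which is orthogonal to the symmetric trial by parity.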

  19. GOCE in ocean modelling - Point mass method applied on GOCE gravity gradients

    DEFF Research Database (Denmark)

    Herceg, Matija; Knudsen, Per

    This presentation is an introduction to my Ph.D. project. The main objective of the study is to improve the methodology for combining GOCE gravity field models with satellite altimetry to derive optimal dynamic ocean topography models for oceanography. Here a method for geoid determination using...

  20. Applying the dynamic cone penetrometer (DCP) design method to low volume roads

    CSIR Research Space (South Africa)

    Paige-Green, P

    2011-07-01

    Full Text Available The Dynamic Cone Penetrometer (DCP) has been in use since the 1950s for various applications in pavement investigation. During the 1980s, Kleyn and Van Zyl described a method for upgrading unsealed roads to light sealed road standard based...