WorldWideScience

Sample records for arctic applied methods

  1. Benthic microalgal production in the Arctic: Applied methods and status of the current database

    DEFF Research Database (Denmark)

    Glud, Ronnie Nøhr; Woelfel, Jana; Karsten, Ulf

    2009-01-01

The current database on benthic microalgal production in Arctic waters comprises 10 peer-reviewed and three unpublished studies. Here, we compile and discuss these datasets, along with the applied measurement approaches used. The latter is essential for robust comparative analysis and to clarify the often very confusing terminology in the existing literature. Our compilation demonstrates that i) benthic microalgae contribute significantly to coastal ecosystem production in the Arctic, and ii) benthic microalgal production on average exceeds pelagic productivity by a factor of 1.5 for water depths down to 30 m. We have established relationships between irradiance, water depth and benthic microalgal productivity that can be used to extrapolate results from quantitative experimental studies to the entire Arctic region. Two different approaches estimated that current benthic microalgal production...
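The depth-extrapolation relationship described above can be sketched with a standard Beer-Lambert light-attenuation model feeding a saturating photosynthesis-irradiance curve. All parameter values below are illustrative assumptions, not figures from the study:

```python
import math

def irradiance_at_depth(e0, kd, z):
    """Beer-Lambert attenuation of surface irradiance e0 with diffuse
    attenuation coefficient kd (per metre) at water depth z (metres)."""
    return e0 * math.exp(-kd * z)

def benthic_production(e0, kd, z, alpha, p_max):
    """Saturating photosynthesis-irradiance response (Jassby-Platt form):
    production rises with the initial slope alpha and saturates at p_max."""
    e_z = irradiance_at_depth(e0, kd, z)
    return p_max * math.tanh(alpha * e_z / p_max)

# Hypothetical parameters: e0 in umol photons m-2 s-1, kd in m-1
profile = [(z, benthic_production(400.0, 0.15, z, alpha=0.05, p_max=5.0))
           for z in (0, 5, 10, 20, 30)]
```

With any such monotone attenuation model, production declines smoothly with depth, which is the basis for extrapolating point measurements across a depth distribution.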

  2. Participatory Methods in Arctic Research

    DEFF Research Database (Denmark)

    Faber, Louise

    2018-01-01

    This book is a collection of articles written by researchers at Aalborg University, affiliated with AAU Arctic. The articles are about how the researchers in their respective projects work with stakeholders and citizens in different ways, for example in connection with problem formulation, data c...

  3. Surface related multiple elimination (SRME) and radon transform forward multiple modeling methods applied to 2D multi-channel seismic profiles from the Chukchi Shelf, Arctic Ocean

    Science.gov (United States)

    Ilhan, I.; Coakley, B. J.

    2013-12-01

The Chukchi Edges project was designed to establish the relationship between the Chukchi Shelf and Borderland and indirectly test theories of opening for the Canada Basin. During this cruise, ~5300 km of 2D multi-channel reflection seismic profiles and other geophysical data (swath bathymetry, gravity, magnetics, sonobuoy refraction seismic) were collected from the RV Marcus G. Langseth across the transition between the Chukchi Shelf and Chukchi Borderland, where the water depths vary from 30 m to over 3 km. Multiples occur when seismic energy is trapped in a layer and reflected from an acoustic interface more than once. Various kinds of multiples occur during seismic data acquisition, depending on the ray-path the seismic energy follows through the layers. One of the most common is the surface-related multiple, which occurs due to the strong acoustic impedance contrast between the air and water. Seismic energy reflected from the water surface is trapped within the water column and thus reflects from the seafloor multiple times. Multiples overprint the primary reflections and complicate data interpretation. Both surface-related multiple elimination (SRME) and forward parabolic radon transform multiple modeling methods were necessary to attenuate the multiples. SRME is applied to shot gathers in three stages: near-offset interpolation, multiple estimation using water depths, and subtraction of the modeled multiples from the shot gathers. This method attenuated surface-related multiple energy; however, peg-leg multiples remained in the data. The parabolic radon transform method minimized the effect of these multiples. It is applied to normal-moveout (NMO) corrected common mid-point (CMP) gathers. The CMP gathers are modeled with curves estimated from the reference-offset, moveout-range and moveout-increment parameters. The modeled multiples are then subtracted from the data.
Preliminary outputs of these two methods show that the surface related
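As a minimal illustration of the NMO-correction step that precedes the radon demultiple, the sketch below flattens a synthetic hyperbolic event in a CMP gather. The geometry, velocity and sampling are invented for the example:

```python
import numpy as np

def nmo_correct(gather, offsets, dt, velocity):
    """Normal-moveout correction of a CMP gather.
    gather: (n_samples, n_traces); offsets in m; dt in s; velocity in m/s.
    Each output sample at zero-offset time t0 is read from the input trace
    at the hyperbolic moveout time t(x) = sqrt(t0^2 + (x/v)^2)."""
    n_samples, _ = gather.shape
    t0 = np.arange(n_samples) * dt
    out = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        t_x = np.sqrt(t0**2 + (x / velocity) ** 2)
        out[:, j] = np.interp(t_x, t0, gather[:, j], left=0.0, right=0.0)
    return out

# Synthetic gather: one hyperbolic reflection at t0 = 0.4 s, v = 1500 m/s
dt, v = 0.004, 1500.0
offsets = np.array([0.0, 300.0, 600.0, 900.0])
gather = np.zeros((250, 4))
for j, x in enumerate(offsets):
    idx = int(round(np.sqrt(0.4**2 + (x / v) ** 2) / dt))
    gather[idx, j] = 1.0

flat = nmo_correct(gather, offsets, dt, v)
```

After correction with the right velocity the event is flat (all traces peak near sample 100), so residual multiples, which are undercorrected, map to parabolas that a radon transform can isolate and subtract.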

  4. Applied nonparametric statistical methods

    CERN Document Server

    Sprent, Peter

    2007-01-01

    While preserving the clear, accessible style of previous editions, Applied Nonparametric Statistical Methods, Fourth Edition reflects the latest developments in computer-intensive methods that deal with intractable analytical problems and unwieldy data sets. Reorganized and with additional material, this edition begins with a brief summary of some relevant general statistical concepts and an introduction to basic ideas of nonparametric or distribution-free methods. Designed experiments, including those with factorial treatment structures, are now the focus of an entire chapter. The text also e

  5. Applied Bayesian hierarchical methods

    National Research Council Canada - National Science Library

    Congdon, P

    2010-01-01

... 1.2 Posterior Inference from Bayes Formula ... 1.3 Markov Chain Monte Carlo Sampling in Relation to Monte Carlo Methods: Obtaining Posterior...

  6. Methods of applied mathematics

    CERN Document Server

    Hildebrand, Francis B

    1992-01-01

    This invaluable book offers engineers and physicists working knowledge of a number of mathematical facts and techniques not commonly treated in courses in advanced calculus, but nevertheless extremely useful when applied to typical problems in many different fields. It deals principally with linear algebraic equations, quadratic and Hermitian forms, operations with vectors and matrices, the calculus of variations, and the formulations and theory of linear integral equations. Annotated problems and exercises accompany each chapter.

  7. Games in the Arctic: applying game theory insights to Arctic challenges

    Directory of Open Access Journals (Sweden)

    Scott Cole

    2014-08-01

    Full Text Available We illustrate the benefits of game theoretic analysis for assisting decision-makers in resolving conflicts and other challenges in a rapidly evolving region. We review a series of salient Arctic issues with global implications—managing open-access fisheries, opening Arctic areas for resource extraction and ensuring effective environmental regulation for natural resource extraction—and provide insights to help reach socially preferred outcomes. We provide an overview of game theoretic analysis in layman's terms, explaining how game theory can help researchers and decision-makers to better understand conflicts, and how to identify the need for, and improve the design of, policy interventions. We believe that game theoretic tools are particularly useful in a region with a diverse set of players ranging from countries to firms to individuals. We argue that the Arctic Council should take a more active governing role in the region by, for example, dispersing information to “players” in order to alleviate conflicts regarding the management of common-pool resources such as open-access fisheries and natural resource extraction. We also identify side payments—that is, monetary or in-kind compensation from one party of a conflict to another—as a key mechanism for reaching a more biologically, culturally and economically sustainable Arctic future. By emphasizing the practical insights generated from an academic discipline, we present game theory as an influential tool in shaping the future of the Arctic—for individual researchers, for inter-disciplinary research and for policy-makers themselves.
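The open-access fishery dilemma the authors describe can be made concrete with a toy two-player payoff matrix. The payoff numbers below are hypothetical, chosen only to reproduce the common-pool structure the article discusses:

```python
from itertools import product

# Hypothetical two-nation Arctic fishery: payoffs (row player, column player)
# for strategies Restrain vs. Overfish. Values are illustrative only.
payoffs = {
    ("restrain", "restrain"): (3, 3),
    ("restrain", "overfish"): (0, 4),
    ("overfish", "restrain"): (4, 0),
    ("overfish", "overfish"): (1, 1),
}
strategies = ("restrain", "overfish")

def pure_nash_equilibria(payoffs, strategies):
    """Return strategy profiles where neither player gains by unilaterally
    deviating (brute-force check over all pure-strategy profiles)."""
    eq = []
    for r, c in product(strategies, repeat=2):
        u_r, u_c = payoffs[(r, c)]
        row_best = all(payoffs[(r2, c)][0] <= u_r for r2 in strategies)
        col_best = all(payoffs[(r, c2)][1] <= u_c for c2 in strategies)
        if row_best and col_best:
            eq.append((r, c))
    return eq

equilibria = pure_nash_equilibria(payoffs, strategies)
```

The unique equilibrium here is mutual overfishing even though mutual restraint pays both players more, which is exactly the gap that side payments or an active governing body could close.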

  8. SCIENTIFIC AND METHODICAL SUPPORT OF THE ARCTIC NATIONAL ATLAS

    Directory of Open Access Journals (Sweden)

    S. A. Dobrolyubov

    2017-01-01

Full Text Available In recent years many maps, atlases and other cartographic products reflecting natural conditions and socio-economic processes in the Arctic region have been published in Russia and other countries. "The Arctic Ocean Atlas", "World Atlas of Snow and Ice Resources" and others were made in accordance with the traditions of the Russian cartographic school. The Arctic region maps are given a special position in the Russian National Atlas. The Arctic National Atlas is to be published in 2017. The Atlas presents the most relevant information on the nature, economy, population, cultural heritage, history of development, strategic management and growth forecasts for the Russian Arctic. An extended electronic version is planned in addition to the analogue one. Scientific and methodical support of the Arctic National Atlas has vital organizational, systematizing and technological importance for the Atlas as a complex cartographic product. The book "The Arctic National Atlas: Experience, Texts and Map Legends" includes descriptions of the scientific concepts and techniques of map design, original map legends with English translation, and extended annotations to the maps in English. This will make it possible to inform the world scientific community about the particular features of the Russian Arctic development. The atlas and the book are unique because the national atlases published before have mainly highlighted the natural characteristics and conditions of Arctic exploration. The Atlas can be classified as a classic complex atlas, with a full representation of the natural, historical, cultural, socio-economic, environmental and geopolitical features of the Russian Arctic. The book expands the prospects for comprehensive study of the Russian Arctic region.

  9. Applied Formal Methods for Elections

    DEFF Research Database (Denmark)

    Wang, Jian

    development time, or second dynamically, i.e. monitoring while an implementation is used during an election, or after the election is over, for forensic analysis. This thesis contains two chapters on this subject: the chapter Analyzing Implementations of Election Technologies describes a technique...... process. The chapter Measuring Voter Lines describes an automated data collection method for measuring voters' waiting time, and discusses statistical models designed to provide an understanding of the voter behavior in polling stations....

  10. Applied Formal Methods for Elections

    DEFF Research Database (Denmark)

    Wang, Jian

Information technology is changing the way elections are organized. Technology renders the electoral process more efficient, but things could also go wrong: voting software is complex, consisting of thousands of lines of code, which makes it error-prone. Technical problems may cause delays at polling stations, or even delay the announcement of the final result. This thesis describes a set of methods to be used, for example, by system developers, administrators, or decision makers to examine election technologies, social choice algorithms and voter experience. Technology: Verifiability refers ... development time, or second dynamically, i.e. monitoring while an implementation is used during an election, or after the election is over, for forensic analysis. This thesis contains two chapters on this subject: the chapter Analyzing Implementations of Election Technologies describes a technique ... process. The chapter Measuring Voter Lines describes an automated data collection method for measuring voters' waiting time, and discusses statistical models designed to provide an understanding of voter behavior in polling stations.

  11. Bayesian methods applied to GWAS.

    Science.gov (United States)

    Fernando, Rohan L; Garrick, Dorian

    2013-01-01

Bayesian multiple-regression methods are being successfully used for genomic prediction and selection. These regression models simultaneously fit many more markers than the number of observations available for the analysis. Thus, Bayes' theorem is used to combine prior beliefs of marker effects, which are expressed in terms of prior distributions, with information from data for inference. Often, the analyses are too complex for closed-form solutions and Markov chain Monte Carlo (MCMC) sampling is used to draw inferences from posterior distributions. This chapter describes how these Bayesian multiple-regression analyses can be used for GWAS. In most GWAS, false positives are controlled by limiting the genome-wise error rate, which is the probability of one or more false-positive results, to a small value. As the number of tests in GWAS is very large, this results in very low power. Here we show how in Bayesian GWAS false positives can be controlled by limiting the proportion of false-positive results among all positives to some small value. The advantage of this approach is that the power of detecting associations is not inversely related to the number of markers.
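One hedged sketch of the selection rule implied above: given posterior inclusion probabilities (PIPs, e.g. from MCMC), declare markers significant in order of PIP while the expected proportion of false positives among those declared stays below a threshold. The PIP values below are invented for illustration:

```python
def select_by_pfp(pips, alpha=0.05):
    """Select markers so that the expected proportion of false positives
    among the selected set stays <= alpha. For each selected marker the
    expected false-positive mass is (1 - PIP); markers are added in order
    of decreasing PIP until the running rate would exceed alpha."""
    order = sorted(range(len(pips)), key=lambda i: pips[i], reverse=True)
    selected, false_mass = [], 0.0
    for i in order:
        next_mass = false_mass + (1.0 - pips[i])
        if next_mass / (len(selected) + 1) > alpha:
            break
        selected.append(i)
        false_mass = next_mass
    return selected

# Hypothetical posterior inclusion probabilities for 8 markers
pips = [0.99, 0.97, 0.95, 0.60, 0.40, 0.10, 0.05, 0.01]
hits = select_by_pfp(pips, alpha=0.05)
```

Note how the rule depends only on the posterior probabilities of the candidate markers, not on the total number of markers tested, which is the power advantage the chapter emphasizes.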

  12. Application of a Regional Thermohaline Inverse Method to observational reanalyses in an Arctic domain

    Science.gov (United States)

    Mackay, Neill; Wilson, Chris; Zika, Jan

    2017-04-01

The Overturning in the Subpolar North Atlantic Program (OSNAP) aims to quantify the subpolar AMOC and its variability, including associated fluxes of heat and freshwater, using a combination of observations and models. As a contribution to OSNAP, we have developed a novel inverse method that diagnoses the interior mixing and the advective flux at the boundary of an enclosed volume in the ocean. This Regional Thermohaline Inverse Method (RTHIM) operates in salinity-temperature (S-T) coordinates, a framework which allows us to gain insights into water mass transformation within the control volume and boundary fluxes of heat and freshwater. RTHIM will use multiple long-term observational datasets and reanalyses, including Argo, to provide a set of inverse estimates to be used to understand the sub-annual transport timescales sampled by the OSNAP array. Having validated the method using the NEMO model, we apply RTHIM to an Arctic domain using temperature, salinity and surface flux data from reanalyses. We also use AVISO surface absolute geostrophic velocities which, combined with thermal wind balance, provide an initial estimate for the inflow and outflow through the boundary. We diagnose the interior mixing in S-T coordinates and the boundary flow, calculating the transformation rates of well-known water masses and the individual contributions to these rates from surface flux processes, boundary flow and interior mixing. Outputs from RTHIM are compared with similar metrics from previous literature on the region. The inverse solution reproduces an observed pattern of warm, saline Atlantic waters entering the Arctic volume and cooler, fresher waters leaving. Meanwhile, surface fluxes act to create waters at the extremes of the S-T distribution and interior mixing acts in opposition, creating water masses at intermediate S-T and destroying them at the extremes.
RTHIM has the potential to be compared directly with the OSNAP array observations by defining a domain boundary which
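The salinity-temperature framework can be illustrated by binning boundary volume transports into S-T classes. The section values below are invented toy numbers, not RTHIM inputs or outputs:

```python
import numpy as np

def bin_in_st_coordinates(salinity, temperature, transport, s_edges, t_edges):
    """Accumulate boundary volume transport into salinity-temperature bins,
    the coordinate framework RTHIM works in (illustrative sketch only).
    Returns net transport per S-T class, shape (n_s_bins, n_t_bins)."""
    H, _, _ = np.histogram2d(salinity, temperature,
                             bins=[s_edges, t_edges], weights=transport)
    return H

# Hypothetical boundary sections: warm saline inflow (+), cool fresh outflow (-)
salinity    = np.array([35.0, 35.1, 34.9, 33.0, 33.2])
temperature = np.array([ 6.0,  5.5,  6.5, -1.0, -0.5])
transport   = np.array([ 2.0,  1.5,  1.0, -2.5, -2.0])   # Sv, + into volume
s_edges = np.array([32.0, 34.0, 36.0])
t_edges = np.array([-2.0, 2.0, 8.0])

H = bin_in_st_coordinates(salinity, temperature, transport, s_edges, t_edges)
```

With a closed volume the transports sum to zero, and the S-T histogram separates the warm saline inflow class from the cool fresh outflow class, which is the pattern the inverse solution reproduces for the Arctic.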

  13. H-methods in applied sciences

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2008-01-01

    The author has developed a framework for mathematical modelling within applied sciences. It is characteristic for data from 'nature and industry' that they have reduced rank for inference. It means that full rank solutions normally do not give satisfactory solutions. The basic idea of H...... with finding a balance between the estimation task and the prediction task. The name H-methods has been chosen because of close analogy with the Heisenberg uncertainty inequality. A similar situation is present in modelling data. The mathematical modelling stops, when the prediction aspect of the model cannot...... be improved. H-methods have been applied to wide range of fields within applied sciences. In each case, the H-methods provide with superior solutions compared to the traditional ones. A background for the H-methods is presented. The H-principle of mathematical modelling is explained. It is shown how...

  14. [Montessori method applied to dementia - literature review].

    Science.gov (United States)

    Brandão, Daniela Filipa Soares; Martín, José Ignacio

    2012-06-01

The Montessori method was initially applied to children, but it has now also been applied to people with dementia. The purpose of this study is to systematically review the research on the effectiveness of this method using Medical Literature Analysis and Retrieval System Online (Medline) with the keywords dementia and Montessori method. We selected 10 studies, in which there were significant improvements in participation and constructive engagement, and reductions in negative affect and passive engagement. Nevertheless, systematic reviews of this non-pharmacological intervention in dementia rate the method as weak in terms of effectiveness. This apparent discrepancy can be explained either because the Montessori method may, in fact, have only a small influence on dimensions such as behavioral problems, or because there is no research on this method with high levels of control, such as the presence of several control groups or a double-blind design.

  15. A comparison of tracking methods for extreme cyclones in the Arctic basin

    Directory of Open Access Journals (Sweden)

    Ian Simmonds

    2014-09-01

Full Text Available Dramatic climate changes have occurred in recent decades over the Arctic region, most noticeably near-surface warming and reductions in sea ice extent. In a climatological sense, Arctic cyclone behaviour is linked to the distributions of lower-troposphere temperature and sea ice, and hence the monitoring of storms can be seen as an important component of the analysis of Arctic climate. The analysis of cyclone behaviour, however, is not without ambiguity, and different cyclone identification algorithms can lead to divergent conclusions. Here we analyse a subset of Arctic cyclones with 10 state-of-the-art cyclone identification schemes applied to the ERA-Interim reanalysis. The subset comprises the five most intense (defined in terms of central pressure) Arctic cyclones for each of the 12 calendar months over the 30-yr period from 1 January 1979 to 31 March 2009. There is a considerable difference between the central pressures diagnosed by the algorithms, typically 5–10 hPa. By contrast, there is substantial agreement as to the location of the centre of these extreme storms. The cyclone tracking algorithms also display some differences in the evolution and life cycle of these storms, while overall finding them to be quite long-lived. For all but six of the 60 storms an intense tropopause polar vortex is identified within 555 km of the surface system. The results presented here highlight some significant differences between the outputs of the algorithms, and hence point to the value of using multiple identification schemes in the study of cyclone behaviour. Overall, however, the algorithms reached a very robust consensus on most aspects of the behaviour of these very extreme cyclones in the Arctic basin.
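The common core of the identification schemes compared above is the detection of sea-level-pressure minima. Below is a deliberately simple sketch of that step (not any of the 10 published algorithms), run on a synthetic pressure field:

```python
import numpy as np

def find_pressure_minima(slp, threshold=None):
    """Flag interior grid points that are strict local minima of sea-level
    pressure over their 8 neighbours, optionally below a depth threshold.
    Toy detector: real schemes add smoothing, tracking and intensity tests."""
    ny, nx = slp.shape
    minima = []
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            window = slp[i - 1:i + 2, j - 1:j + 2]
            is_unique_min = (slp[i, j] == window.min()
                             and np.sum(window == window.min()) == 1)
            if is_unique_min and (threshold is None or slp[i, j] < threshold):
                minima.append((i, j, float(slp[i, j])))
    return minima

# Synthetic field: uniform 1012 hPa with one 968 hPa low centred at (10, 15)
field = np.full((20, 30), 1012.0)
yy, xx = np.mgrid[0:20, 0:30]
field -= 44.0 * np.exp(-((yy - 10) ** 2 + (xx - 15) ** 2) / 18.0)

lows = find_pressure_minima(field, threshold=1000.0)
```

Even this toy version shows why schemes agree well on storm location (the minimum is unambiguous) while central pressure depends on grid, smoothing and interpolation choices.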

  16. Arctic Risk Management (ARMNet) Network: Linking Risk Management Practitioners and Researchers Across the Arctic Regions of Canada and Alaska To Improve Risk, Emergency and Disaster Preparedness and Mitigation Through Comparative Analysis and Applied Research

    Science.gov (United States)

    Garland, A.

    2015-12-01

    The Arctic Risk Management Network (ARMNet) was conceived as a trans-disciplinary hub to encourage and facilitate greater cooperation, communication and exchange among American and Canadian academics and practitioners actively engaged in the research, management and mitigation of risks, emergencies and disasters in the Arctic regions. Its aim is to assist regional decision-makers through the sharing of applied research and best practices and to support greater inter-operability and bilateral collaboration through improved networking, joint exercises, workshops, teleconferences, radio programs, and virtual communications (eg. webinars). Most importantly, ARMNet is a clearinghouse for all information related to the management of the frequent hazards of Arctic climate and geography in North America, including new and emerging challenges arising from climate change, increased maritime polar traffic and expanding economic development in the region. ARMNet is an outcome of the Arctic Observing Network (AON) for Long Term Observations, Governance, and Management Discussions, www.arcus.org/search-program. The AON goals continue with CRIOS (www.ariesnonprofit.com/ARIESprojects.php) and coastal erosion research (www.ariesnonprofit.com/webinarCoastalErosion.php) led by the North Slope Borough Risk Management Office with assistance from ARIES (Applied Research in Environmental Sciences Nonprofit, Inc.). The constituency for ARMNet will include all northern academics and researchers, Arctic-based corporations, First Responders (FRs), Emergency Management Offices (EMOs) and Risk Management Offices (RMOs), military, Coast Guard, northern police forces, Search and Rescue (SAR) associations, boroughs, territories and communities throughout the Arctic. This presentation will be of interest to all those engaged in Arctic affairs, describe the genesis of ARMNet and present the results of stakeholder meetings and webinars designed to guide the next stages of the Project.

  17. Applied mathematical methods in nuclear thermal hydraulics

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1983-01-01

    Applied mathematical methods are used extensively in modeling of nuclear reactor thermal-hydraulic behavior. This application has required significant extension to the state-of-the-art. The problems encountered in modeling of two-phase fluid transients and the development of associated numerical solution methods are reviewed and quantified using results from a numerical study of an analogous linear system of differential equations. In particular, some possible approaches for formulating a well-posed numerical problem for an ill-posed differential model are investigated and discussed. The need for closer attention to numerical fidelity is indicated
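The ill-posedness question discussed above reduces, for a first-order system u_t + A u_x = 0, to whether the coefficient matrix A has real eigenvalues (hyperbolicity). A toy check follows, with matrices chosen only to illustrate the two cases; they are not the actual two-phase flow Jacobians:

```python
import numpy as np

def is_hyperbolic(A, tol=1e-12):
    """A first-order system u_t + A u_x = 0 is hyperbolic (well-posed as an
    initial-value problem) iff A has real eigenvalues. Toy diagnostic."""
    eigvals = np.linalg.eigvals(A)
    return bool(np.all(np.abs(eigvals.imag) < tol))

# Well-posed example: 1-D linear acoustics, eigenvalues +/- c (real)
c = 340.0
acoustics = np.array([[0.0, c], [c, 0.0]])

# Ill-posed example: eigenvalues +/- i, mimicking the loss of hyperbolicity
# of the basic (inviscid, single-pressure) two-fluid model
illposed = np.array([[0.0, 1.0], [-1.0, 0.0]])
```

Complex characteristics mean perturbations grow without bound at arbitrarily small wavelengths, which is why the numerical fidelity of schemes applied to such models needs the close attention the abstract calls for.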

  18. Entropy viscosity method applied to Euler equations

    International Nuclear Information System (INIS)

    Delchini, M. O.; Ragusa, J. C.; Berry, R. A.

    2013-01-01

    The entropy viscosity method [4] has been successfully applied to hyperbolic systems of equations such as Burgers equation and Euler equations. The method consists in adding dissipative terms to the governing equations, where a viscosity coefficient modulates the amount of dissipation. The entropy viscosity method has been applied to the 1-D Euler equations with variable area using a continuous finite element discretization in the MOOSE framework and our results show that it has the ability to efficiently smooth out oscillations and accurately resolve shocks. Two equations of state are considered: Ideal Gas and Stiffened Gas Equations Of State. Results are provided for a second-order time implicit schemes (BDF2). Some typical Riemann problems are run with the entropy viscosity method to demonstrate some of its features. Then, a 1-D convergent-divergent nozzle is considered with open boundary conditions. The correct steady-state is reached for the liquid and gas phases with a time implicit scheme. The entropy viscosity method correctly behaves in every problem run. For each test problem, results are shown for both equations of state considered here. (authors)
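The core idea, dissipation modulated by a local entropy residual and capped by a first-order viscosity, can be sketched for 1-D Burgers' equation with a simple periodic finite-difference scheme. The grid, time step and tuning constants below are illustrative and unrelated to the MOOSE implementation:

```python
import numpy as np

def burgers_entropy_viscosity(n=200, t_end=0.3, c_e=1.0, c_max=0.5):
    """1-D periodic Burgers' equation u_t + (u^2/2)_x = 0 regularized with an
    entropy viscosity: nu is proportional to the residual of the entropy pair
    E = u^2/2, F = u^3/3, and capped by the first-order viscosity c_max*h*|u|."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.sin(2.0 * np.pi * x) + 0.5          # smooth data steepening to a shock
    u_old = u.copy()
    dt = 0.05 * h / np.max(np.abs(u))
    t = 0.0
    while t < t_end:
        E, F = 0.5 * u**2, u**3 / 3.0
        dEdt = (E - 0.5 * u_old**2) / dt
        dFdx = (np.roll(F, -1) - np.roll(F, 1)) / (2.0 * h)
        R = np.abs(dEdt + dFdx)                 # local entropy residual
        norm = np.max(np.abs(E - E.mean())) + 1e-14
        nu = np.minimum(c_max * h * np.abs(u), c_e * h**2 * R / norm)
        u_old = u.copy()
        # conservative central flux plus compact viscous flux at cell faces
        adv = (np.roll(u**2, -1) - np.roll(u**2, 1)) / (4.0 * h)
        face_flux = 0.5 * (nu + np.roll(nu, -1)) * (np.roll(u, -1) - u)
        diff = (face_flux - np.roll(face_flux, 1)) / h**2
        u = u + dt * (diff - adv)
        t += dt
    return x, u

x, u = burgers_entropy_viscosity()
```

The viscosity switches itself on only where the entropy residual is large (the forming shock), which is how the method smooths oscillations without smearing smooth regions.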

  19. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

The geomagnetic field varies on a variety of time- and length scales, which are only rudimentarily considered in most present field models. The part of the observed field that can not be explained by a given model, the model residuals, is often considered as an estimate of the data uncertainty (which consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based on 5 years of Ørsted and CHAMP data, and includes secular variation and acceleration, as well as low-degree external (magnetospheric) and induced fields. The analysis is done in order to find the statistical behaviour of the space-time structure of the residuals, as a proxy for the data covariances...
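A first step in such an analysis is the empirical semivariogram of the residuals as a function of separation distance, which reveals whether they are in fact spatially uncorrelated. A sketch on synthetic data (not the Ørsted/CHAMP residuals):

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Empirical semivariogram: gamma(d) = mean of 0.5*(z_i - z_j)^2 over all
    point pairs whose separation falls in each distance bin. A flat variogram
    indicates uncorrelated residuals; a rising one indicates structure."""
    n = len(values)
    i, j = np.triu_indices(n, k=1)
    d = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq = 0.5 * (values[i] - values[j]) ** 2
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        mask = (d >= bin_edges[b]) & (d < bin_edges[b + 1])
        if mask.any():
            gamma[b] = sq[mask].mean()
    return gamma

# Synthetic "residuals": a smooth correlated signal plus white noise
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 100.0, size=(400, 2))
values = np.sin(coords[:, 0] / 15.0) + 0.1 * rng.standard_normal(400)
gamma = empirical_variogram(coords, values, np.linspace(0.0, 50.0, 11))
```

For these correlated synthetic residuals the semivariance grows with lag, exactly the signature that would contradict the usual uncorrelated-Gaussian assumption mentioned above.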

  20. Computational methods applied to wind tunnel optimization

    Science.gov (United States)

    Lindsay, David

    methods, coordinate transformation theorems and techniques including the Method of Jacobians, and a derivation of the fluid flow fundamentals required for the model. It applies the methods to study the effect of cross-section and fillet variation, and to obtain a sample design of a high-uniformity nozzle.

  1. Methods for measuring arctic and alpine shrub growth

    DEFF Research Database (Denmark)

    Myers-Smith, Isla; Hallinger, Martin; Blok, Daan

    2015-01-01

    dynamics in relation to environmental variables. Recent advances in sampling methods, analysis and applications have improved our ability to investigate growth and recruitment dynamics of shrubs. However, to extrapolate findings to the biome scale, future dendroecologicalwork will require improved...

  2. Monitoring Freeze Thaw Transitions in Arctic Soils using Complex Resistivity Method

    Science.gov (United States)

    Wu, Y.; Hubbard, S. S.; Ulrich, C.; Dafflon, B.; Wullschleger, S. D.

    2012-12-01

The Arctic region, which is a sensitive system that has emerged as a focal point for climate change studies, is characterized by a large amount of stored carbon and a rapidly changing landscape. Seasonal freeze-thaw transitions in the Arctic alter subsurface biogeochemical processes that control greenhouse gas fluxes from the subsurface. Our ability to monitor freeze-thaw cycles and associated biogeochemical transformations is critical to the development of process-rich ecosystem models, which are in turn important for gaining a predictive understanding of Arctic terrestrial system evolution and feedbacks with climate. In this study, we conducted both laboratory and field investigations to explore the use of the complex resistivity method to monitor freeze-thaw transitions of Arctic soil in Barrow, AK. In the lab studies, freeze-thaw transitions were induced on soil samples having different average carbon content by exposing the Arctic soil to temperature-controlled environments at +4 °C and -20 °C. Complex resistivity and temperature measurements were collected using electrical and temperature sensors installed along the soil columns. During the laboratory experiments, resistivity gradually changed over two orders of magnitude as the temperature was increased or decreased between -20 °C and 0 °C. Electrical phase responses at 1 Hz showed a dramatic and immediate response to the onset of freeze and thaw. Unlike the resistivity response, the phase response was found to be exclusively related to unfrozen water in the soil matrix, suggesting that this geophysical attribute can be used as a proxy for monitoring the onset and progression of freeze-thaw transitions. Spectral electrical responses contained additional information about the controls of soil grain size distribution on freeze-thaw dynamics.
Based on the demonstrated sensitivity of complex resistivity signals to the freeze thaw transitions, field complex resistivity data were collected over
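The quantities discussed, resistivity magnitude and the phase shift at 1 Hz, come from a phase-sensitive voltage/current measurement. A minimal sketch with invented phasor values (not data from the Barrow experiment):

```python
import cmath

def resistivity_and_phase(voltage, current, geometric_factor):
    """Complex resistivity from a phase-sensitive V/I measurement.
    The magnitude (ohm m) tracks bulk freezing; the phase shift (mrad)
    tracks polarization associated with unfrozen pore water."""
    z = voltage / current                       # complex transfer impedance
    rho = geometric_factor * z                  # complex resistivity
    return abs(rho), cmath.phase(rho) * 1000.0  # magnitude, phase in mrad

# Hypothetical 1 Hz phasors for a thawed and a partially frozen sample;
# the geometric factor (10 m) is likewise assumed for the example
thawed = resistivity_and_phase(complex(0.10, -0.0005), complex(1e-3, 0.0), 10.0)
frozen = resistivity_and_phase(complex(9.0, -0.09), complex(1e-3, 0.0), 10.0)
```

The point of the decomposition is that the magnitude and the phase carry different physics, so freezing can change the magnitude by orders of magnitude while the phase independently signals the state of the remaining unfrozen water.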

  3. Reconstruction of Arctic surface temperature in past 100 years using DINEOF

    Science.gov (United States)

    Zhang, Qiyi; Huang, Jianbin; Luo, Yong

    2015-04-01

Global annual mean surface temperature has not risen appreciably since 1998, a pause described in recent years as the global warming hiatus. However, measuring temperature variability in the Arctic is difficult because most observed gridded datasets have large gaps in their coverage of the Arctic region. Since the Arctic has experienced rapid temperature change in recent years, known as polar amplification, and the temperature rise in the Arctic is faster than the global mean, the unobserved temperatures in the central Arctic result in a cold bias in both global and Arctic temperature estimates compared with model simulations and reanalysis datasets. Moreover, some datasets that have complete Arctic coverage but a short temporal span cannot show Arctic temperature variability over long periods. Data Interpolating Empirical Orthogonal Functions (DINEOF) was applied to fill the Arctic coverage gap of NASA's Goddard Institute for Space Studies Surface Temperature Analysis (GISTEMP 250 km smoothing) product with the IABP dataset, which covers the entire Arctic region between 1979 and 1998, and to reconstruct Arctic temperature in 1900-2012. This method provides a temperature reconstruction in the central Arctic and a precise estimate of both global and Arctic temperature variability over a long temporal scale. The results have been verified against additional independent station records in the Arctic by statistical analysis, such as variance and standard deviation. The reconstruction shows a significant warming trend in the Arctic over the recent 30 years: the temperature trend in the Arctic since 1997 is 0.76°C per decade, compared with 0.48°C and 0.67°C per decade from the 250 km and 1200 km smoothing of GISTEMP, and the global temperature trend is two times greater after using DINEOF. These discrepancies stress the importance of fully accounting for temperature variance in the Arctic, because coverage gaps there cause an apparent cold bias in temperature estimates. The result for global surface temperature also proves that
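The DINEOF idea, iteratively replacing missing values with a truncated-SVD (EOF) reconstruction until the field converges, can be sketched on a synthetic low-rank field. The real method additionally chooses the number of retained EOFs by cross-validation; the data here are invented:

```python
import numpy as np

def dineof(data, mask, rank=2, n_iter=50):
    """Minimal DINEOF-style gap filling: initialize missing entries with the
    mean of the observed ones, then iteratively replace them with their
    rank-truncated SVD reconstruction, keeping observed values fixed."""
    filled = np.where(mask, data, data[mask].mean())
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        recon = (u[:, :rank] * s[:rank]) @ vt[:rank]
        filled = np.where(mask, data, recon)
    return filled

# Synthetic space x time field of rank 2 with ~30% of entries missing
rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0 * np.pi, 60)
truth = (np.outer(rng.standard_normal(40), np.sin(t))
         + np.outer(rng.standard_normal(40), np.cos(t)))
mask = rng.random(truth.shape) > 0.3            # True where "observed"
filled = dineof(np.where(mask, truth, np.nan), mask, rank=2)
```

Because the reconstruction uses the dominant spatial-temporal modes of the observed data, the filled values in the gap inherit the large-scale variability, which is precisely what a simple mean-fill of the central Arctic cannot do.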

  4. Spherical Slepian as a new method for ionospheric modeling in arctic region

    Science.gov (United States)

    Etemadfard, Hossein; Hossainali, Masoud Mashhadi

    2016-03-01

From the perspective of the physical, chemical and biological balance in the world, the Arctic has gradually turned into an important region, opening ways for new researchers and scientific expeditions. In other words, various research projects have been funded in order to study this frozen frontier in detail. The current study can be seen in the same milieu, as the researchers intend to propose a set of new base functions for modeling the ionosphere in the Arctic. To optimize the Spherical Harmonic (SH) functions, spatio-spectral concentration is applied here using the Slepian theory developed by Simons. For modeling the ionosphere, six International GNSS Service (IGS) stations located in the northern polar region were taken into account. Two other stations were left out for assessing the accuracy of the proposed model. The adopted GPS data start at DOY (Day of Year) 69 and end at DOY 83 (15 successive days in total) in 2013. Three Spherical Slepian models with maximal degrees of K=15, 20 and 25 were used. Based on the results, K=15 is the optimum degree for the proposed model. The accuracy and precision of the Slepian model are about 0.1 and 0.05 TECU, respectively (1 TEC Unit = 10^16 electrons/m^2). To understand the advantage of this model, it is compared with polynomial and trigonometric series developed from the same set of measurements. The accuracy and precision of the trigonometric and polynomial models are at least 4 times worse than the Slepian one.
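A 1-D analogue of the spherical Slepian construction, eigenfunctions of a concentration kernel whose eigenvalues drop sharply from ~1 (well concentrated) to ~0, can be computed directly. This illustrates the principle only, not the spherical polar-cap problem solved in the paper:

```python
import numpy as np

def slepian_concentration_1d(n=64, w=0.1, n_tapers=4):
    """1-D analogue of the Slepian construction: eigenvectors of the sinc
    concentration kernel are the length-n sequences maximizing energy
    concentration in the frequency band |f| < w (prolate sequences).
    Returns the n_tapers largest eigenvalues and their eigenvectors."""
    m = np.arange(n)
    diff = m[:, None] - m[None, :]
    K = 2.0 * w * np.sinc(2.0 * w * diff)       # sin(2*pi*w*d) / (pi*d)
    eigvals, eigvecs = np.linalg.eigh(K)
    order = np.argsort(eigvals)[::-1]
    return eigvals[order][:n_tapers], eigvecs[:, order][:, :n_tapers]

lams, tapers = slepian_concentration_1d()
```

Roughly the first 2NW eigenvalues are near 1 and the rest fall off rapidly; keeping only the well-concentrated modes is the same basis-truncation idea that makes K=15 sufficient for a polar-cap ionosphere model.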

  5. Generalized reciprocal method applied in processing seismic ...

    African Journals Online (AJOL)

    A geophysical investigation was carried out at Shika, near Zaria, using seismic refraction method; with the aim of analyzing the data obtained using the generalized reciprocal method (GRM). The technique is for delineating undulating refractors at any depth from in-line seismic refraction data consisting of forward and ...

  6. Applying scrum methods to ITS projects.

    Science.gov (United States)

    2017-08-01

    The introduction of new technology generally brings new challenges and new methods to help with deployments. Agile methodologies have been introduced in the information technology industry to potentially speed up development. The Federal Highway Admi...

  7. Statistical classification methods applied to seismic discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, F.M. [ed.; Anderson, D.N.; Anderson, K.K.; Hagedorn, D.N.; Higbee, K.T.; Miller, N.E.; Redgate, T.; Rohay, A.C.

    1996-06-11

    To verify compliance with a Comprehensive Test Ban Treaty (CTBT), low energy seismic activity must be detected and discriminated. Monitoring small-scale activity will require regional (within ~2000 km) monitoring capabilities. This report provides background information on various statistical classification methods and discusses the relevance of each method in the CTBT seismic discrimination setting. Criteria for classification method selection are explained and examples are given to illustrate several key issues. This report describes in more detail the issues and analyses that were initially outlined in a poster presentation at a recent American Geophysical Union (AGU) meeting. Section 2 of this report describes both the CTBT seismic discrimination setting and the general statistical classification approach to this setting. Seismic data examples illustrate the importance of synergistically using multivariate data as well as the difficulties due to missing observations. Classification method selection criteria are presented and discussed in Section 3. These criteria are grouped into the broad classes of simplicity, robustness, applicability, and performance. Section 4 follows with a description of several statistical classification methods: linear discriminant analysis, quadratic discriminant analysis, variably regularized discriminant analysis, flexible discriminant analysis, logistic discriminant analysis, K-th Nearest Neighbor discrimination, kernel discrimination, and classification and regression tree discrimination. The advantages and disadvantages of these methods are summarized in Section 5.
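As a flavor of the simpler classifiers surveyed, a minimal K-th Nearest Neighbor discriminant in pure Python (illustrative sketch only; the report evaluates such methods on real multivariate seismic features, and the feature values below are invented):

```python
import math
from collections import Counter

def knn_classify(train, labels, point, k=3):
    """Classify `point` by majority vote among its k nearest labeled
    training points (Euclidean distance)."""
    dists = sorted((math.dist(p, point), lab)
                   for p, lab in zip(train, labels))
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]
```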

  8. Applying Fuzzy Possibilistic Methods on Critical Objects

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz

    2016-01-01

    Providing a flexible environment to process data objects is a desirable goal of machine learning algorithms. In fuzzy and possibilistic methods, the relevance of data objects is evaluated and a membership degree is assigned. However, some critical objects have the potential ability to affect...... the performance of the clustering algorithms if they remain in a specific cluster or they are moved into another. In this paper we analyze and compare how critical objects affect the behaviour of fuzzy possibilistic methods in several data sets. The comparison is based on the accuracy and ability of learning...
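The membership assignment that fuzzy methods rely on can be sketched with a fuzzy c-means style formula, u_i = 1 / sum_k (d_i/d_k)^(2/(m-1)). This is a standard formulation chosen for illustration, not the paper's exact code:

```python
import math

def fuzzy_memberships(point, centers, m=2.0):
    """Fuzzy c-means style membership degrees of one data object in
    each cluster (they sum to 1); m is the fuzzifier exponent."""
    dists = [math.dist(point, c) for c in centers]
    if any(d == 0.0 for d in dists):          # object coincides with a center
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    expo = 2.0 / (m - 1.0)
    return [1.0 / sum((di / dk) ** expo for dk in dists) for di in dists]
```

An object equidistant from two centers receives equal membership in both clusters, illustrating the kind of ambiguity that makes such objects "critical".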

  9. Tutte's barycenter method applied to isotopies

    NARCIS (Netherlands)

    de Verdiere, EC; Pocchiola, M; Vegter, G

    This paper is concerned with applications of Tutte's barycentric embedding theorem (Proc. London Math. Soc. 13 (1963) 743-768). It presents a method for building isotopies of triangulations in the plane, based on Tutte's theorem and the computation of equilibrium stresses of graphs by
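Tutte's embedding can be computed numerically by pinning the outer face to a convex polygon and repeatedly moving each interior vertex to the barycenter of its neighbors, which converges to the equilibrium-stress position. A minimal sketch (the data layout is assumed for illustration):

```python
def tutte_embedding(neighbors, boundary_pos, iters=200):
    """Barycentric embedding: boundary vertices are fixed on a convex
    polygon; each interior vertex is iteratively placed at the average
    of its neighbors' positions."""
    pos = dict(boundary_pos)
    interior = [v for v in neighbors if v not in boundary_pos]
    for v in interior:
        pos[v] = (0.0, 0.0)
    for _ in range(iters):
        for v in interior:
            nbrs = neighbors[v]
            pos[v] = (sum(pos[u][0] for u in nbrs) / len(nbrs),
                      sum(pos[u][1] for u in nbrs) / len(nbrs))
    return pos
```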

  10. Spectral methods applied to Ising models

    International Nuclear Information System (INIS)

    DeFacio, B.; Hammer, C.L.; Shrauner, J.E.

    1980-01-01

    Several applications of Ising models are reviewed. A 2-d Ising model is studied, and the problem of describing an interface boundary in a 2-d Ising model is addressed. Spectral methods are used to formulate a soluble model for the surface tension of a many-Fermion system
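For readers unfamiliar with the model, the 2-d Ising energy and a standard Metropolis update are sketched below. This is a generic Monte Carlo sampler for illustration, not the spectral formulation of the reviewed paper:

```python
import math
import random

def ising_energy(s, J=1.0):
    """Energy of a 2-d Ising lattice with periodic boundaries:
    E = -J * sum over nearest-neighbor pairs of s_i * s_j."""
    n = len(s)
    return -J * sum(s[i][j] * (s[(i + 1) % n][j] + s[i][(j + 1) % n])
                    for i in range(n) for j in range(n))

def metropolis_sweep(s, T, J=1.0, rng=random):
    """One Metropolis sweep: propose flipping each spin and accept
    with probability min(1, exp(-dE/T))."""
    n = len(s)
    for i in range(n):
        for j in range(n):
            h = (s[(i + 1) % n][j] + s[(i - 1) % n][j]
                 + s[i][(j + 1) % n] + s[i][(j - 1) % n])
            dE = 2.0 * J * s[i][j] * h
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
    return s
```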

  11. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  12. [The diagnostic methods applied in mycology].

    Science.gov (United States)

    Kurnatowska, Alicja; Kurnatowski, Piotr

    2008-01-01

    The systemic fungal invasions are recognized with increasing frequency and constitute a primary cause of morbidity and mortality, especially in immunocompromised patients. Early diagnosis improves prognosis, but remains a problem because there is a lack of sensitive tests to aid in the diagnosis of systemic mycoses on the one hand, while on the other the patients present only nonspecific signs and symptoms, thus delaying early diagnosis. The diagnosis depends upon a combination of clinical observation and laboratory investigation. The successful laboratory diagnosis of fungal infection depends in major part on the collection of appropriate clinical specimens for investigation and on the selection of appropriate microbiological test procedures. These problems (collection of specimens, direct techniques, staining methods, cultures on different media and non-culture-based methods) are presented in the article.

  13. Proteomics methods applied to malaria: Plasmodium falciparum

    International Nuclear Information System (INIS)

    Cuesta Astroz, Yesid; Segura Latorre, Cesar

    2012-01-01

    Malaria is a parasitic disease that has a high impact on public health in developing countries. The sequencing of the Plasmodium falciparum genome and the development of proteomics have enabled a breakthrough in understanding the biology of the parasite. Proteomics has made it possible to characterize the parasite's protein expression qualitatively and quantitatively, and has provided information on protein expression under conditions of stress induced by antimalarials. Given the complexity of the parasite's life cycle, which takes place in both the vertebrate host and the mosquito vector, it has proven difficult to characterize protein expression during each stage throughout the infection process and thereby determine the proteome that mediates several metabolic, physiological and energetic processes. Two-dimensional electrophoresis, liquid chromatography and mass spectrometry have been useful for assessing the effects of antimalarials on parasite protein expression and for characterizing the proteomic profile of different P. falciparum stages and organelles. The purpose of this review is to present the state of the art in tools and advances in proteomics applied to the study of malaria, and to present the different experimental strategies used to study the parasite's proteome, showing the advantages and disadvantages of each.

  14. METHOD OF APPLYING NICKEL COATINGS ON URANIUM

    Science.gov (United States)

    Gray, A.G.

    1959-07-14

    A method is presented for protectively coating uranium which comprises etching the uranium in an aqueous etching solution containing chloride ions, electroplating a coating of nickel on the etched uranium and heating the nickel plated uranium by immersion thereof in a molten bath composed of a material selected from the group consisting of sodium chloride, potassium chloride, lithium chloride, and mixtures thereof, maintained at a temperature of between 700 and 800 deg C, for a time sufficient to alloy the nickel and uranium and form an integral protective coating of corrosion-resistant uranium-nickel alloy.

  15. Versatile Formal Methods Applied to Quantum Information.

    Energy Technology Data Exchange (ETDEWEB)

    Witzel, Wayne [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Rudinger, Kenneth Michael [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sarovar, Mohan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    Using a novel formal methods approach, we have generated computer-verified proofs of major theorems pertinent to the quantum phase estimation algorithm. This was accomplished using our Prove-It software package in Python. While many formal methods tools are available, their practical utility is limited. Translating a problem of interest into these systems and working through the steps of a proof is an art form that requires much expertise. One must surrender to the preferences and restrictions of the tool regarding how mathematical notions are expressed and what deductions are allowed. Automation is a major driver that forces restrictions. Our focus, on the other hand, is to produce a tool that allows users the ability to confirm proofs that are essentially known already. This goal is valuable in itself. We demonstrate the viability of our approach that allows the user great flexibility in expressing statements and composing derivations. There were no major obstacles in following a textbook proof of the quantum phase estimation algorithm. There were tedious details of algebraic manipulations that we needed to implement (and a few that we did not have time to enter into our system) and some basic components that we needed to rethink, but there were no serious roadblocks. In the process, we made a number of convenient additions to our Prove-It package that will make certain algebraic manipulations easier to perform in the future. In fact, our intent is for our system to build upon itself in this manner.

  16. Reflections on Mixing Methods in Applied Linguistics Research

    Science.gov (United States)

    Hashemi, Mohammad R.

    2012-01-01

    This commentary advocates the use of mixed methods research--that is the integration of qualitative and quantitative methods in a single study--in applied linguistics. Based on preliminary findings from a research project in progress, some reflections on the current practice of mixing methods as a new trend in applied linguistics are put forward.…

  17. Applying Lidar and High-Resolution Multispectral Imagery for Improved Quantification and Mapping of Tundra Vegetation Structure and Distribution in the Alaskan Arctic

    Science.gov (United States)

    Greaves, Heather E.

    Climate change is disproportionately affecting high northern latitudes, and the extreme temperatures, remoteness, and sheer size of the Arctic tundra biome have always posed challenges that make application of remote sensing technology especially appropriate. Advances in high-resolution remote sensing continually improve our ability to measure characteristics of tundra vegetation communities, which have been difficult to characterize previously due to their low stature and their distribution in complex, heterogeneous patches across large landscapes. In this work, I apply terrestrial lidar, airborne lidar, and high-resolution airborne multispectral imagery to estimate tundra vegetation characteristics for a research area near Toolik Lake, Alaska. Initially, I explored methods for estimating shrub biomass from terrestrial lidar point clouds, finding that a canopy-volume based algorithm performed best. Although shrub biomass estimates derived from airborne lidar data were less accurate than those from terrestrial lidar data, algorithm parameters used to derive biomass estimates were similar for both datasets. Additionally, I found that airborne lidar-based shrub biomass estimates were just as accurate whether calibrated against terrestrial lidar data or harvested shrub biomass--suggesting that terrestrial lidar potentially could replace destructive biomass harvest. Along with smoothed Normalized Differenced Vegetation Index (NDVI) derived from airborne imagery, airborne lidar-derived canopy volume was an important predictor in a Random Forest model trained to estimate shrub biomass across the 12.5 km2 covered by our lidar and imagery data. The resulting 0.80 m resolution shrub biomass maps should provide important benchmarks for change detection in the Toolik area, especially as deciduous shrubs continue to expand in tundra regions. Finally, I applied 33 lidar- and imagery-derived predictor layers in a validated Random Forest modeling approach to map vegetation

  18. Gust factor based on research aircraft measurements: A new methodology applied to the Arctic marine boundary layer

    DEFF Research Database (Denmark)

    Suomi, Irene; Lüpkes, Christof; Hartmann, Jörg

    2016-01-01

    exceeding those at upper levels. Furthermore, we found gust factors to be strongly dependent on surface roughness conditions, which differed between the open ocean and sea ice in the Arctic marine environment. The roughness effect on the gust factor was stronger than the effect of boundary-layer stability....... at multiple levels over the same track. This is a significant advance, as gust measurements are usually limited to heights reached by weather masts. In unstable conditions over the open ocean, the gust factor was nearly constant with height throughout the boundary layer, the near-surface values only slightly...

  19. Convergence of Iterative Methods applied to Boussinesq equation

    Directory of Open Access Journals (Sweden)

    Sh. S. Behzadi

    2013-11-01

    Full Text Available In this paper, a Boussinesq equation is solved by using the Adomian decomposition method, the modified Adomian decomposition method, the variational iteration method, the modified variational iteration method, the homotopy perturbation method, the modified homotopy perturbation method and the homotopy analysis method. The approximate solution of this equation is calculated in the form of a series whose components are computed by applying a recursive relation. The existence and uniqueness of the solution and the convergence of the proposed methods are proved. A numerical example is studied to demonstrate the accuracy of the presented methods.
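The common thread of these methods is a series whose components come from a recursive relation. A toy analogue, assuming for illustration the scalar problem u' = u, u(0) = 1 solved by successive approximations u_{k+1} = u0 + integral of u_k (the Boussinesq case is analogous but in two variables):

```python
def picard_terms(n_terms):
    """Build the successive-approximation series for u' = u, u(0) = 1,
    as polynomial (Taylor) coefficients; the recursion is
    u_{k+1}(t) = 1 + integral_0^t u_k(s) ds."""
    coeffs = [1.0]                                   # u0(t) = 1
    for _ in range(n_terms - 1):
        integral = [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]
        coeffs = [1.0] + integral[1:]                # 1 + integral of u_k
    return coeffs                                    # Taylor coefficients of e^t
```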

  20. Method for Extracting Tidal and Inertial Motion from ARGOS Ice Buoys Applied to the Barents Sea during CEAREX

    National Research Council Canada - National Science Library

    Turet, P

    1993-01-01

    A harmonic analysis of tidal and inertial action was applied to observations of position of ARGOS buoys deployed on drifting multiyear sea ice in the Eastern Arctic-Barents Sea during CEAREX (1988-89...
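The core of such a harmonic analysis is a least-squares fit of sinusoids at known tidal or inertial frequencies to the buoy position series. A single-constituent sketch (the report's full analysis handles several constituents and the mean drift simultaneously; this simplified version is an assumption):

```python
import math

def fit_harmonic(t, y, omega):
    """Least-squares fit of A*cos(omega*t) + B*sin(omega*t) to a drift
    record, via the 2x2 normal equations."""
    scc = sum(math.cos(omega * ti) ** 2 for ti in t)
    sss = sum(math.sin(omega * ti) ** 2 for ti in t)
    scs = sum(math.cos(omega * ti) * math.sin(omega * ti) for ti in t)
    syc = sum(yi * math.cos(omega * ti) for ti, yi in zip(t, y))
    sys_ = sum(yi * math.sin(omega * ti) for ti, yi in zip(t, y))
    det = scc * sss - scs * scs
    a = (syc * sss - sys_ * scs) / det
    b = (sys_ * scc - syc * scs) / det
    return a, b
```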

  1. Discrimination symbol applying method for sintered nuclear fuel product

    International Nuclear Information System (INIS)

    Ishizaki, Jin

    1998-01-01

    The present invention provides a symbol applying method for applying discrimination information such as an enrichment degree on the end face of a sintered nuclear product. Namely, discrimination symbols of information of powders are applied by a sintering aid to the end face of a molded member formed by molding nuclear fuel powders under pressure. Then, the molded product is sintered. The sintering aid comprises aluminum oxide, a mixture of aluminum oxide and silicon dioxide, aluminum hydride or aluminum stearate alone or in admixture. As an applying means of the sintering aid, discrimination symbols of information of powders are drawn by an isostearic acid on the end face of the molded product, and the sintering aid is sprayed thereto, or the sintering aid is applied directly, or the sintering aid is suspended in isostearic acid, and the suspension is applied with a brush. As a result, visible discrimination information can be applied to the sintered member easily. (N.H.)

  2. Method of applying a mirror reflecting layer to instrument parts

    Science.gov (United States)

    Alkhanov, L. G.; Danilova, I. A.; Delektorskiy, G. V.

    1974-01-01

    A method follows for applying a mirror reflecting layer to the surfaces of parts, instruments, apparatus, and so on. A brief analysis is presented of the existing methods of obtaining the mirror surface and the advantages of the new method of obtaining the mirror surface by polymer casting mold are indicated.

  3. Building "Applied Linguistic Historiography": Rationale, Scope, and Methods

    Science.gov (United States)

    Smith, Richard

    2016-01-01

    In this article I argue for the establishment of "Applied Linguistic Historiography" (ALH), that is, a new domain of enquiry within applied linguistics involving a rigorous, scholarly, and self-reflexive approach to historical research. Considering issues of rationale, scope, and methods in turn, I provide reasons why ALH is needed and…

  4. Application of a Real-time Reverse Transcription Loop Mediated Amplification Method to the Detection of Rabies Virus in Arctic Foxes in Greenland

    DEFF Research Database (Denmark)

    Wakeley, Philip; Johnson, Nicholas; Rasmussen, Thomas Bruun

    Reverse transcription loop mediated amplification (RT-LAMP) offers a rapid, isothermal method for amplification of virus RNA. In this study a panel of positive rabies virus samples originally prepared from arctic fox brain tissue was assessed for the presence of rabies viral RNA using a real time...... RT-LAMP. The method had previously been shown to work with samples from Ghana which clustered with cosmopolitan lineage rabies viruses but the assay had not been assessed using samples from animals infected with rabies from the arctic region. The assay is designed to amplify both cosmopolitan strains...... virus of arctic origin virus can be detected using RT-LAMP and the method reported is more rapid than the real-time RT-PCR. Further arctic fox samples are under analysis in order to confirm these findings....

  5. Limited dietary overlap amongst resident Arctic herbivores in winter: complementary insights from complementary methods.

    Science.gov (United States)

    Schmidt, Niels M; Mosbacher, Jesper B; Vesterinen, Eero J; Roslin, Tomas; Michelsen, Anders

    2018-04-26

    Snow may prevent Arctic herbivores from accessing their forage in winter, forcing them to aggregate in the few patches with limited snow. In High Arctic Greenland, Arctic hare and rock ptarmigan often forage in muskox feeding craters. We therefore hypothesized that due to limited availability of forage, the dietary niches of these resident herbivores overlap considerably, and that the overlap increases as winter progresses. To test this, we analyzed fecal samples collected in early and late winter. We used molecular analysis to identify the plant taxa consumed, and stable isotope ratios of carbon and nitrogen to quantify the dietary niche breadth and dietary overlap. The plant taxa found indicated only limited dietary differentiation between the herbivores. As expected, dietary niches exhibited a strong contraction from early to late winter, especially for rock ptarmigan. This may indicate increasing reliance on particular plant resources as winter progresses. In early winter, the diet of rock ptarmigan overlapped slightly with that of muskox and Arctic hare. Contrary to our expectations, no inter-specific dietary niche overlap was observed in late winter. This overall pattern was specifically revealed by combined analysis of molecular data and stable isotope contents. Hence, despite foraging in the same areas and generally feeding on the same plant taxa, the quantitative dietary overlap between the three herbivores was limited. This may be attributable to species-specific consumption rates of plant taxa. Yet, Arctic hare and rock ptarmigan may benefit from muskox opening up the snow pack, thereby allowing them to access the plants.
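Dietary overlap between two species can be quantified from diet-proportion vectors; a common choice is Pianka's symmetric index (0 = no overlap, 1 = identical diets). This index is offered only as an illustration; the paper itself quantifies overlap from molecular data and stable isotope ratios:

```python
import math

def pianka_overlap(p, q):
    """Pianka's niche-overlap index between two diet-proportion
    vectors p and q over the same list of plant taxa."""
    num = sum(pi * qi for pi, qi in zip(p, q))
    return num / math.sqrt(sum(pi * pi for pi in p)
                           * sum(qi * qi for qi in q))
```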

  6. Quantitative EEG Applying the Statistical Recognition Pattern Method

    DEFF Research Database (Denmark)

    Engedal, Knut; Snaedal, Jon; Hoegh, Peter

    2015-01-01

    BACKGROUND/AIM: The aim of this study was to examine the discriminatory power of quantitative EEG (qEEG) applying the statistical pattern recognition (SPR) method to separate Alzheimer's disease (AD) patients from elderly individuals without dementia and from other dementia patients. METHODS...

  7. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)

    The harmonics detection method based on neural network applied to harmonics compensation. R Dehini, A Bassou, B Ferdi. Abstract. Several different methods have been used to sense load currents and extract its harmonic component in order to produce a reference current in shunt active power filters (SAPF), and to ...

  8. Arctic Climate Systems Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ivey, Mark D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Robinson, David G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Backus, George A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Peterson, Kara J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); van Bloemen Waanders, Bart G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Desilets, Darin Maurice [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reinert, Rhonda Karen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-03-01

    This study began with a challenge from program area managers at Sandia National Laboratories to technical staff in the energy, climate, and infrastructure security areas: apply a systems-level perspective to existing science and technology program areas in order to determine technology gaps, identify new technical capabilities at Sandia that could be applied to these areas, and identify opportunities for innovation. The Arctic was selected as one of these areas for systems level analyses, and this report documents the results. In this study, an emphasis was placed on the arctic atmosphere since Sandia has been active in atmospheric research in the Arctic since 1997. This study begins with a discussion of the challenges and benefits of analyzing the Arctic as a system. It goes on to discuss current and future needs of the defense, scientific, energy, and intelligence communities for more comprehensive data products related to the Arctic; assess the current state of atmospheric measurement resources available for the Arctic; and explain how the capabilities at Sandia National Laboratories can be used to address the identified technological, data, and modeling needs of the defense, scientific, energy, and intelligence communities for Arctic support.

  9. Applying the Taguchi method for optimized fabrication of bovine ...

    African Journals Online (AJOL)

    The objective of the present study was to optimize the fabrication of bovine serum albumin (BSA) nanoparticle by applying the Taguchi method with characterization of the nanoparticle bioproducts. BSA nanoparticles have been extensively studied in our previous works as suitable carrier for drug delivery, since they are ...

  10. Arctic Newcomers

    DEFF Research Database (Denmark)

    Tonami, Aki

    2013-01-01

    Interest in the Arctic region and its economic potential in Japan, South Korea and Singapore was slow to develop but is now rapidly growing. All three countries have in recent years accelerated their engagement with Arctic states, laying the institutional frameworks needed to better understand...... and influence policies relating to the Arctic. But each country’s approach is quite different, writes Aki Tonami....

  11. Arctic (and Antarctic) Observing Experiment - an Assessment of Methods to Measure Temperature over Polar Environments

    Science.gov (United States)

    Rigor, I. G.; Clemente-Colon, P.; Nghiem, S. V.; Hall, D. K.; Woods, J. E.; Henderson, G. R.; Zook, J.; Marshall, C.; Gallage, C.

    2014-12-01

    The Arctic environment has been undergoing profound changes; the most visible is the dramatic decrease in Arctic sea ice extent (SIE). These changes pose a challenge to our ability to measure surface temperature across the Polar Regions. Traditionally, the International Arctic Buoy Programme (IABP) and International Programme for Antarctic Buoys (IPAB) have measured surface air temperature (SAT) at 2-m height, which minimizes the ambiguity of measurements near of the surface. Specifically, is the temperature sensor measuring open water, snow, sea ice, or air? But now, with the dramatic decrease in Arctic SIE, increase in open water during summer, and the frailty of the younger sea ice pack, the IABP has had to deploy and develop new instruments to measure temperature. These instruments include Surface Velocity Program (SVP) buoys, which are commonly deployed on the world's ice-free oceans and typically measure sea surface temperature (SST), and the new robust Airborne eXpendable Ice Beacons (AXIB), which measure both SST and SAT. "Best Practice" requires that these instruments are inter-compared, and early results showing differences in collocated temperature measurements of over 2°C prompted the establishment of the IABP Arctic Observing Experiment (AOX) buoy test site at the US Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) site in Barrow, Alaska. Preliminary results showed that the color of the hull of SVP buoys introduces a bias due to solar heating of the buoy. Since then, we have recommended that buoys should be painted white to reduce biases in temperature measurements due to different colors of the buoys deployed in different regions of the Arctic or the Antarctic. Measurements of SAT are more robust, but some of the temperature shields are susceptible to frosting. During our presentation we will provide an intercomparison of the temperature measurements at the AOX test site (i.e. high quality DOE/ARM observations compared with

  12. Linear algebraic methods applied to intensity modulated radiation therapy.

    Science.gov (United States)

    Crooks, S M; Xing, L

    2001-10-01

    Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as applied to IMRT is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.
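The quadratic-form view of beam-weight selection reduces, in the simplest unconstrained case, to solving the normal equations (D^T D) w = D^T d for weights w minimizing ||Dw - d||^2, where D maps beam weights to voxel doses and d is the prescription. A two-beam sketch (an assumption for illustration; clinical IMRT optimization adds nonnegativity and many more beamlets):

```python
def beam_weights_2(D, d):
    """Least-squares beam weights for a 2-beam problem: solve the
    2x2 normal equations (D^T D) w = D^T d by Cramer's rule."""
    a11 = sum(row[0] * row[0] for row in D)
    a12 = sum(row[0] * row[1] for row in D)
    a22 = sum(row[1] * row[1] for row in D)
    b1 = sum(row[0] * di for row, di in zip(D, d))
    b2 = sum(row[1] * di for row, di in zip(D, d))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det,
            (b2 * a11 - b1 * a12) / det)
```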

  13. Methods of applied mathematics with a software overview

    CERN Document Server

    Davis, Jon H

    2016-01-01

    This textbook, now in its second edition, provides students with a firm grasp of the fundamental notions and techniques of applied mathematics as well as the software skills to implement them. The text emphasizes the computational aspects of problem solving as well as the limitations and implicit assumptions inherent in the formal methods. Readers are also given a sense of the wide variety of problems in which the presented techniques are useful. Broadly organized around the theme of applied Fourier analysis, the treatment covers classical applications in partial differential equations and boundary value problems, and a substantial number of topics associated with Laplace, Fourier, and discrete transform theories. Some advanced topics are explored in the final chapters such as short-time Fourier analysis and geometrically based transforms applicable to boundary value problems. The topics covered are useful in a variety of applied fields such as continuum mechanics, mathematical physics, control theory, and si...

  14. Using VPython to Apply Mathematics to Physics in Mathematical Methods

    Science.gov (United States)

    Demaree, Dedra; Eagan, J.; Finn, P.; Knight, B.; Singleton, J.; Therrien, A.

    2006-12-01

    At the College of the Holy Cross, the sophomore mathematical methods of physics students completed VPython programming projects. This is the first time VPython has been used in a physics course at this college. These projects were aimed at applying some methods learned to actual physical situations. Students first completed worksheets from North Carolina State University to learn the programming environment. They then used VPython to apply the mathematics of vectors and differential equations learned in class to solve physics situations which appear simple but are not easy to solve analytically. For most of these students it was their first programming experience. It was also one of the only chances we had to do actual physics applications during the semester due to the large amount of mathematical content covered. In addition to showcasing the students’ final programs, this poster will share their view of including VPython in this course.

  15. Synthetic data. A proposed method for applied risk management

    OpenAIRE

    Carbajal De Nova, Carolina

    2017-01-01

    The proposed method attempts to contribute towards the econometric and simulation applied risk management literature. It consists of an algorithm to construct synthetic data and risk simulation econometric models, supported by a set of behavioral assumptions. This algorithm has the advantage of replicating natural phenomena and uncertainty events in a short period of time. These features imply low economic cost as well as computational efficiency. An application for wheat farmers is develo...

  16. Newton-Krylov methods applied to nonequilibrium radiation diffusion

    International Nuclear Information System (INIS)

    Knoll, D.A.; Rider, W.J.; Olsen, G.L.

    1998-01-01

    The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus eliminating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, as compared to an operator split approach where nonlinearities are not converged within a time step.
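The "matrix-free" ingredient is the Jacobian-vector product J(u)v, approximated by finite differences so the Jacobian is never formed. A much simplified sketch on a 2-unknown system, with a direct 2x2 solve standing in for the Krylov iteration and residual monitoring as in the paper (the problem and tolerances are invented for illustration):

```python
def jac_vec(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product J(u)*v via the forward
    difference (F(u + eps*v) - F(u)) / eps."""
    fu = F(u)
    fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(fp, fu)]

def newton_2d(F, u, tol=1e-10, max_it=50):
    """Newton iteration monitoring the nonlinear residual; Jacobian
    columns are recovered as J*e_k, and a 2x2 Cramer solve replaces
    the Krylov linear solver of the full method."""
    for _ in range(max_it):
        r = F(u)
        if max(abs(x) for x in r) < tol:
            break
        c1 = jac_vec(F, u, [1.0, 0.0])
        c2 = jac_vec(F, u, [0.0, 1.0])
        det = c1[0] * c2[1] - c2[0] * c1[1]
        du = [(-r[0] * c2[1] + r[1] * c2[0]) / det,
              (-r[1] * c1[0] + r[0] * c1[1]) / det]
        u = [ui + di for ui, di in zip(u, du)]
    return u
```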

  17. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  18. Analysis of concrete beams using applied element method

    Science.gov (United States)

    Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen

    2018-03-01

The Applied Element Method (AEM) is a displacement-based method of structural analysis. Some of its features are similar to those of the Finite Element Method (FEM). In AEM, as in FEM, the structure is analysed by dividing it into several elements. But in AEM, elements are connected by springs instead of nodes, as in the case of FEM. In this paper, the background to AEM is discussed and the necessary equations are derived. To illustrate the application of AEM, it has been used to analyse a plain concrete beam with fixed supports. The analysis is limited to 2-dimensional structures. It was found that the number of springs has little influence on the results. AEM could predict deflections and reactions with a reasonable degree of accuracy.
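The spring idealization described in this abstract can be sketched numerically. In AEM, each pair of adjacent elements is connected by sets of normal and shear springs whose stiffnesses follow from the material constants and the tributary area each spring serves. A minimal sketch, with hypothetical concrete-like values and element layout (not taken from the paper):

```python
def aem_spring_stiffness(E, G, d, t, a):
    """Normal and shear stiffness of one spring pair connecting two AEM elements.

    E, G : Young's and shear moduli (Pa)
    d    : tributary width served by the spring (m)
    t    : element thickness (m)
    a    : distance between the centroids of the connected elements (m)
    """
    kn = E * d * t / a   # normal spring
    ks = G * d * t / a   # shear spring
    return kn, ks

# Example: concrete-like properties, 0.1 m square elements, 10 springs per face
E, nu = 30e9, 0.2
G = E / (2 * (1 + nu))
kn, ks = aem_spring_stiffness(E, G, d=0.1 / 10, t=0.2, a=0.1)
```

Summing the contributions of all springs over all element faces assembles the global stiffness matrix, after which the displacement solution proceeds as in FEM.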

  19. Applying sample survey methods to clinical trials data.

    Science.gov (United States)

    LaVange, L M; Koch, G G; Schwartz, T A

    This paper outlines the utility of statistical methods for sample surveys in analysing clinical trials data. Sample survey statisticians face a variety of complex data analysis issues deriving from the use of multi-stage probability sampling from finite populations. One such issue is that of clustering of observations at the various stages of sampling. Survey data analysis approaches developed to accommodate clustering in the sample design have more general application to clinical studies in which repeated measures structures are encountered. Situations where these methods are of interest include multi-visit studies where responses are observed at two or more time points for each patient, multi-period cross-over studies, and epidemiological studies for repeated occurrences of adverse events or illnesses. We describe statistical procedures for fitting multiple regression models to sample survey data that are more effective for repeated measures studies with complicated data structures than the more traditional approaches of multivariate repeated measures analysis. In this setting, one can specify a primary sampling unit within which repeated measures have intraclass correlation. This intraclass correlation is taken into account by sample survey regression methods through robust estimates of the standard errors of the regression coefficients. Regression estimates are obtained from model fitting estimation equations which ignore the correlation structure of the data (that is, computing procedures which assume that all observational units are independent or are from simple random samples). The analytic approach is straightforward to apply with logistic models for dichotomous data, proportional odds models for ordinal data, and linear models for continuously scaled data, and results are interpretable in terms of population average parameters. Through the features summarized here, the sample survey regression methods have many similarities to the broader family of
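The cluster-robust ("sandwich") variance idea at the heart of these survey regression methods can be illustrated with a small sketch, treating each patient as a primary sampling unit. The data below are synthetic, standing in for a hypothetical multi-visit trial:

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_visits = 50, 4
patient = np.repeat(np.arange(n_patients), n_visits)
u = rng.normal(size=n_patients)            # patient-level random effect
x = rng.normal(size=n_patients * n_visits)
y = 1.0 + 0.5 * x + u[patient] + rng.normal(size=x.size)

X = np.column_stack([np.ones_like(x), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y                   # ordinary least squares fit,
resid = y - X @ beta                       # ignoring the correlation structure

# "Meat" of the sandwich: sum over clusters of X_c' e_c e_c' X_c
meat = np.zeros((2, 2))
for c in range(n_patients):
    m = patient == c
    s = X[m].T @ resid[m]
    meat += np.outer(s, s)
cov_cluster = XtX_inv @ meat @ XtX_inv     # robust covariance of beta
se_cluster = np.sqrt(np.diag(cov_cluster))
```

The point estimates come from an estimating equation that assumes independence; the intraclass correlation enters only through the robust standard errors, exactly as the abstract describes.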

  20. Classification of Specialized Farms Applying Multivariate Statistical Methods

    Directory of Open Access Journals (Sweden)

    Zuzana Hloušková

    2017-01-01

The paper is aimed at the application of advanced multivariate statistical methods to the classification of cattle-breeding farming enterprises by their economic size. An advantage of the model is its ability to use a few selected indicators, compared with the complex methodology of the current classification model, which requires knowledge of the detailed structure of the herd turnover and of the cultivated crops. The output of the paper is intended to be applied within farm-structure research focused on the future development of Czech agriculture. As the data source, the farming-enterprises database for 2014 from the FADN CZ system has been used. The predictive model proposed exploits knowledge of the actual size classes of the farms tested. Outcomes of the linear discriminant analysis, a multifactor classification method, supported the classification of farming enterprises into the group of Small farms (98% classified correctly) and of Large and Very Large enterprises (100% classified correctly). Medium-size farms were classified correctly in only 58.11% of cases. Partial shortcomings of the process presented were found when discriminating between Medium and Small farms.
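The linear discriminant classification used in the paper can be sketched with a minimal pooled-covariance discriminant (equal priors). The three "size classes" and indicator values below are hypothetical stand-ins, not FADN data:

```python
import numpy as np

rng = np.random.default_rng(1)
means = {"small": [1.0, 2.0], "medium": [3.0, 4.0], "large": [6.0, 7.0]}
X, y = [], []
for label, mu in means.items():
    X.append(rng.normal(mu, 0.8, size=(40, 2)))  # 40 farms per class, 2 indicators
    y += [label] * 40
X = np.vstack(X)

classes = sorted(means)
mask = {c: np.array([lab == c for lab in y]) for c in classes}
mu_hat = {c: X[mask[c]].mean(axis=0) for c in classes}
# pooled within-class covariance
pooled = sum(np.cov(X[mask[c]].T) * (40 - 1) for c in classes) / (len(y) - len(classes))
prec = np.linalg.inv(pooled)

def classify(xi):
    # linear discriminant score: x' S^-1 mu - 0.5 mu' S^-1 mu (equal priors)
    scores = {c: xi @ prec @ mu_hat[c] - 0.5 * mu_hat[c] @ prec @ mu_hat[c]
              for c in classes}
    return max(scores, key=scores.get)

pred = [classify(xi) for xi in X]
accuracy = np.mean([p == t for p, t in zip(pred, y)])
```

As in the paper, most misclassifications occur between adjacent classes whose indicator distributions overlap, here "small" versus "medium".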

  1. Enhanced Molecular Dynamics Methods Applied to Drug Design Projects.

    Science.gov (United States)

    Ziada, Sonia; Braka, Abdennour; Diharce, Julien; Aci-Sèche, Samia; Bonnet, Pascal

    2018-01-01

Nobel Laureate Richard P. Feynman stated: "[…] everything that living things do can be understood in terms of jiggling and wiggling of atoms […]." The importance of computer simulations of macromolecules, which use classical mechanics principles to describe atom behavior, is widely acknowledged, and nowadays they are applied in many fields such as materials science and drug discovery. With the increase in computing power, molecular dynamics simulations can be applied to understand biological mechanisms at realistic timescales. In this chapter, we share our computational experience, providing a global view of two widely used enhanced molecular dynamics methods for studying protein structure and dynamics through a description of their characteristics and limits, and we provide some examples of their applications in drug design. We also discuss the appropriate choice of software and hardware. In a detailed practical procedure, we describe how to set up, run, and analyze two main molecular dynamics methods, the umbrella sampling (US) and the accelerated molecular dynamics (aMD) methods.

  2. Metrological evaluation of characterization methods applied to nuclear fuels

    International Nuclear Information System (INIS)

    Faeda, Kelly Cristina Martins; Lameiras, Fernando Soares; Camarano, Denise das Merces; Ferreira, Ricardo Alberto Neto; Migliorini, Fabricio Lima; Carneiro, Luciana Capanema Silva; Silva, Egonn Hendrigo Carvalho

    2010-01-01

In manufacturing nuclear fuel, characterizations are performed in order to assure the minimization of harmful effects. Uranium dioxide is the substance most used as nuclear reactor fuel because of its many advantages, such as high stability even in contact with water at high temperatures, a high melting point, and a high capacity to retain fission products. Several methods are used for the characterization of nuclear fuels, such as thermogravimetric analysis for the O/U ratio; the penetration-immersion method, helium pycnometry and mercury porosimetry for the density and porosity; the BET method for the specific surface; chemical analyses for relevant impurities; and the laser flash method for thermophysical properties. Specific tools are needed to control the diameter and sphericity of the microspheres and the properties of the coating layers (thickness, density, and degree of anisotropy). Other methods can also give information, such as scanning and transmission electron microscopy, X-ray diffraction, microanalysis, and secondary-ion mass spectrometry for chemical analysis. The accuracy of measurement and the level of uncertainty of the resulting data are important. This work describes a general metrological characterization of some techniques applied to the characterization of nuclear fuel. Sources of measurement uncertainty were analyzed. The purpose is to summarize selected properties of UO2 that have been studied by CDTN in a program of fuel development for Pressurized Water Reactors (PWR). The selected properties are crucial inputs to thermal-hydraulic codes used to study design basis accidents. The work focused on the thermal characterization (thermal diffusivity and thermal conductivity) and on the penetration-immersion method (density and open porosity) applied to UO2 samples. The thermal characterization of the UO2 samples was carried out by the laser flash method between room temperature and 448 K. The adaptive Monte Carlo method was used to obtain the endpoints of the
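The Monte Carlo evaluation of measurement uncertainty mentioned above can be sketched in a fixed-sample (non-adaptive) form for a thermal conductivity derived from laser flash diffusivity, k = α·ρ·c_p. All input values and uncertainties below are hypothetical illustrations, not CDTN data:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000
alpha = rng.normal(2.9e-6, 0.1e-6, N)    # thermal diffusivity samples (m^2/s)
rho   = rng.normal(10400.0, 50.0, N)     # density samples (kg/m^3)
cp    = rng.normal(300.0, 6.0, N)        # specific heat samples (J/(kg K))
k = alpha * rho * cp                     # propagated conductivity (W/(m K))

estimate = k.mean()
u = k.std(ddof=1)                        # standard uncertainty of k
lo, hi = np.quantile(k, [0.025, 0.975])  # endpoints of a 95 % coverage interval
```

The adaptive variant of this procedure keeps enlarging N until the estimate, the standard uncertainty, and the coverage-interval endpoints are numerically stable to the required number of digits.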

  3. Nuclear and nuclear related analytical methods applied in environmental research

    International Nuclear Information System (INIS)

    Popescu, Ion V.; Gheboianu, Anca; Bancuta, Iulian; Cimpoca, G. V; Stihi, Claudia; Radulescu, Cristiana; Oros Calin; Frontasyeva, Marina; Petre, Marian; Dulama, Ioana; Vlaicu, G.

    2010-01-01

Nuclear analytical methods can be used in research activities on environmental studies such as water quality assessment, pesticide residues, global climatic change (transboundary), pollution and remediation. Heavy-metal pollution is a problem associated with areas of intensive industrial activity. In this work, the moss biomonitoring technique was employed to study atmospheric deposition in Dambovita County, Romania. Complementary nuclear and atomic analytical methods were also used: Neutron Activation Analysis (NAA), Atomic Absorption Spectrometry (AAS) and Inductively Coupled Plasma Atomic Emission Spectrometry (ICP-AES). These high-sensitivity analytical methods were used to determine the chemical composition of samples of mosses placed in areas with different industrial pollution sources. The concentrations of Cr, Fe, Mn, Ni and Zn were determined. The concentration of Fe in the same samples was determined using all of these methods, and a very good agreement was obtained, within statistical limits, which demonstrates the capability of these analytical methods to be applied to a large spectrum of environmental samples with consistent results. (authors)

  4. Analysis of Brick Masonry Wall using Applied Element Method

    Science.gov (United States)

    Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen

    2018-03-01

The Applied Element Method (AEM) is a versatile tool for structural analysis. Analysis is done by discretising the structure, as in the case of the Finite Element Method (FEM). In AEM, however, elements are connected by a set of normal and shear springs instead of nodes. AEM is extensively used for the analysis of brittle materials. A brick masonry wall can be effectively analyzed in the framework of AEM. The composite nature of a masonry wall can be easily modelled using springs: the brick springs and mortar springs are assumed to be connected in series. The brick masonry wall is analyzed and the failure load is determined for different loading cases. The results were used to find the aspect ratio of brick that best strengthens a brick masonry wall.
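The series connection of brick and mortar springs mentioned above yields an equivalent stiffness governed by the softer (mortar) spring. A one-line sketch, with hypothetical stiffness values:

```python
def series_stiffness(k_brick, k_mortar):
    # springs in series: 1/k_eq = 1/k_brick + 1/k_mortar
    return 1.0 / (1.0 / k_brick + 1.0 / k_mortar)

k_eq = series_stiffness(k_brick=50e6, k_mortar=10e6)  # N/m, hypothetical values
```

Note that the equivalent stiffness is always below that of the softer spring, which is why mortar joints dominate the deformation of the composite wall.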

  5. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  6. Analytical methods applied to diverse types of Brazilian propolis

    Directory of Open Access Journals (Sweden)

    Marcucci Maria

    2011-06-01

Propolis is a bee product, composed mainly of plant resins and beeswax; its chemical composition therefore varies with the geographic and plant origins of these resins, as well as with the species of bee. Brazil is an important supplier of propolis on the world market and, although the green propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extraction procedures employed further affect its composition. Reviewed herein are the methods used for extraction; for analysis of the percentages of resins, wax and insoluble material in crude propolis; and for determination of phenolic, flavonoid, amino acid and heavy-metal contents. Different chromatographic methods applied to the separation, identification and quantification of Brazilian propolis components, and their relative strengths, are discussed, as well as direct-insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, the anti-parasitic, anti-viral/immune-stimulating, healing, anti-tumor, anti-inflammatory, antioxidant and analgesic activities of diverse types of Brazilian propolis have been evaluated. The most common methods employed and overviews of their relative results are presented.

  7. Arctic methane

    NARCIS (Netherlands)

    Dyupina, E.; Amstel, van A.R.

    2013-01-01

What are the risks of a runaway greenhouse effect from methane release from hydrates in the Arctic? In January 2013, a dramatic increase in methane concentration, up to 2000 ppb, was measured over the Arctic north of Norway in the Barents Sea; the global average is 1750 ppb. It has been

  8. Arctic Newcomers

    DEFF Research Database (Denmark)

    Tonami, Aki

    2013-01-01

    Interest in the Arctic region and its economic potential in Japan, South Korea and Singapore was slow to develop but is now rapidly growing. All three countries have in recent years accelerated their engagement with Arctic states, laying the institutional frameworks needed to better understand...

  9. Method of applying a coating onto a steel plate

    International Nuclear Information System (INIS)

    Masuda, Hiromasa; Murakami, Shozo; Chihara, Yoshihi.

    1970-01-01

A method is given for applying a protective coating onto a steel plate to protect it from corrosion, using an irradiation process and a vehicle consisting of a radically polymerizable high-molecular compound, a radically polymerizable less-volatile monomer and/or a functional intermediate agent, and a volatile solvent. The radiation may be electron beams at an energy level ranging from 100 to 1,000 keV. An advantage of this invention is that the ratio of the prepolymer to the monomer can be kept constant without difficulty during the irradiation operation, so that the variation in thickness is very small. Another advantage is that the addition of a monomer is not necessary for viscosity reduction, so that the optimum cross-linking density can be obtained. The molecular weight is so high that application by spraying is possible. The solvent remaining after the irradiation operation has substantially no influence on the polymerization hardening and gel content. In one example, 62 parts of a prepolymer produced by reacting the epoxy resin Epikote No. 1001 with an equal equivalent of acrylic acid were mixed with 17 parts of hydroxyethyl acrylate, 77.5 parts of methyl ethyl ketone and 5.5 parts of isopropyl alcohol to produce a vehicle composition. This composition was applied onto the surface of a glass plate at a thickness of 20 microns. The monomer remaining in the mixture showed very little change over an elapsed period of time. (Iwakiri, K.)

  10. On interval methods applied to robot reliability quantification

    International Nuclear Information System (INIS)

    Carreras, C.; Walker, I.D.

    2000-01-01

    Interval methods have recently been successfully applied to obtain significantly improved robot reliability estimates via fault trees for the case of uncertain and time-varying input reliability data. These initial studies generated output distributions of failure probabilities by extending standard interval arithmetic with new abstractions called interval grids which can be parameterized to control the complexity and accuracy of the estimation process. In this paper different parameterization strategies are evaluated in order to gain a more complete understanding of the potential benefits of the approach. A canonical example of a robot manipulator system is used to show that an appropriate selection of parameters is a key issue for the successful application of such novel interval-based methodologies
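Plain interval arithmetic of the kind underlying these methods can be sketched for a series system, where monotonicity makes endpoint evaluation give exact bounds. The component failure-probability intervals below are hypothetical, and the paper's "interval grids" add a parameterized discretization on top of this basic propagation:

```python
def interval_series_failure(components):
    """components: list of (p_lo, p_hi) failure-probability intervals.
    Returns an interval for P(system fails) = 1 - prod(1 - p_i).
    The function is monotone in each p_i, so endpoint evaluation is exact."""
    surv_lo = surv_hi = 1.0
    for p_lo, p_hi in components:
        surv_lo *= 1.0 - p_hi   # survival is smallest when failures are largest
        surv_hi *= 1.0 - p_lo
    return 1.0 - surv_hi, 1.0 - surv_lo

bounds = interval_series_failure([(0.01, 0.02), (0.005, 0.03)])
```

For non-monotone reliability expressions, naive interval arithmetic overestimates the output range; controlling that overestimation is precisely what the parameterized interval-grid abstractions address.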

  11. Parallel fast multipole boundary element method applied to computational homogenization

    Science.gov (United States)

    Ptaszny, Jacek

    2018-01-01

In the present work, a fast multipole boundary element method (FMBEM) and a parallel computer code for the 3D elasticity problem are developed and applied to the computational homogenization of a solid containing spherical voids. The system of equations is solved by using the GMRES iterative solver. The boundary of the body is discretized by using quadrilateral serendipity elements with adaptive numerical integration. Operations related to a single GMRES iteration, performed by traversing the corresponding tree structure upwards and downwards, are parallelized by using the OpenMP standard. The assignment of tasks to threads is based on the assumption that the tree nodes at which the moment transformations are initialized can be partitioned into disjoint sets of equal or approximately equal size and assigned to the threads. The achieved speedup as a function of the number of threads is examined.

  12. Assessment of the vulnerability to erosion for the Arctic coast

    Science.gov (United States)

    Sulisz, Wojciech; Suszka, Lechoslaw; Veic, Duje; Wolff, Claudia; Paprota, Maciej; Majewski, Dawid

    2017-09-01

    A novel approach to assess vulnerability to erosion for the Arctic coastline is proposed and a concept of a Polar Coastline Risk Index (PCRI) is introduced. This original concept is applied in the present study to the Arctic coastline. The PCRI is calculated by applying data from the DIVA and ACD databases. The investigations are supported by the open water season data obtained from satellite images of the European Organization for the Exploitation of Meteorological Satellites. The derived concept seems to be a better method of determining vulnerability to erosion for polar coastal regions than previous approaches.

  13. The virtual fields method applied to spalling tests on concrete

    Directory of Open Access Journals (Sweden)

    Forquin P.

    2012-08-01

For a decade, spalling techniques based on the use of a metallic Hopkinson bar put in contact with a concrete sample have been widely employed to characterize the dynamic tensile strength of concrete at strain rates ranging from a few tens to two hundred s−1. However, the processing method, mainly based on the use of the velocity profile measured on the rear free surface of the sample (the Novikov formula), remains quite basic, and identification of the whole softening behaviour of the concrete is out of reach. In the present paper a new processing method is proposed, based on the use of the Virtual Fields Method (VFM). First, a digital high-speed camera is used to record pictures of a grid glued on the specimen. Next, full-field measurements are used to obtain the axial displacement field at the surface of the specimen. Finally, a specific virtual field is defined in the VFM equation so as to use the acceleration map as an alternative 'load cell'. This method, applied to three spalling tests, allowed Young's modulus to be identified during the test. It was shown that this modulus is constant during the initial compressive part of the test and decreases in the tensile part when micro-damage exists. It was also shown that in such a simple inertial test it is possible to reconstruct average axial stress profiles using only the acceleration data. It was then possible to construct local stress-strain curves and derive a tensile strength value.
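The "acceleration as load cell" step rests on a simple balance: the average axial stress at a cross-section equals the density times the integral of the axial acceleration over the free specimen portion beyond that section. A sketch with a synthetic one-dimensional acceleration field (all values hypothetical, not test data):

```python
import numpy as np

rho, L, n = 2400.0, 0.14, 141           # density (kg/m^3), specimen length (m)
x = np.linspace(0.0, L, n)
a = 1.0e5 * np.sin(np.pi * x / L)       # hypothetical axial acceleration (m/s^2)

def mean_axial_stress(x0):
    """sigma(x0) = rho * integral_{x0}^{L} a(x) dx, i.e. Newton's second law
    applied to the free specimen portion beyond the section at x0."""
    m = x >= x0
    xs, As = x[m], a[m]
    return rho * np.sum(0.5 * (As[1:] + As[:-1]) * np.diff(xs))  # trapezoid rule

sigma = mean_axial_stress(0.07)         # average stress at mid-specimen (Pa)
```

In the actual tests, the acceleration map comes from double temporal differentiation of the full-field displacement measurements rather than from an assumed profile.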

  14. Potassium fertilizer applied by different methods in the zucchini crop

    Directory of Open Access Journals (Sweden)

    Carlos N. V. Fernandes

Aiming to evaluate the effect of potassium (K) doses applied by the conventional method and by fertigation on zucchini (Cucurbita pepo L.), a field experiment was conducted in Fortaleza, CE, Brazil. The statistical design was a randomized block with four replicates, in a 4 x 2 factorial scheme corresponding to four doses of K (0, 75, 150 and 300 kg K2O ha-1) and two fertilization methods (conventional and fertigation). The analyzed variables were: fruit mass (FM), number of fruits (NF), fruit length (FL), fruit diameter (FD), pulp thickness (PT), soluble solids (SS), yield (Y), water use efficiency (WUE) and potassium use efficiency (KUE), besides an economic analysis using the net present value (NPV), internal rate of return (IRR) and payback period (PP). K doses influenced FM, FD, PT and Y, which increased linearly, with the highest value estimated at 36,828 kg ha-1 for the highest K dose (300 kg K2O ha-1). This dose was also responsible for the largest WUE, 92 kg ha-1 mm-1. KUE showed quadratic behaviour, and a dose of 174 kg K2O ha-1 led to its maximum value of 87.41 kg ha-1 (kg K2O ha-1)-1. All treatments were economically viable, and the most profitable months were May, April, December and November.

  15. A randomized controlled trial of the Arctic Sun Temperature Management System versus conventional methods for preventing hypothermia during off-pump cardiac surgery.

    Science.gov (United States)

    Grocott, Hilary P; Mathew, Joseph P; Carver, Elizabeth H; Phillips-Bute, Barbara; Landolfo, Kevin P; Newman, Mark F

    2004-02-01

In this trial we compared the hypothermia-avoidance abilities of the Arctic Sun Temperature Management System (a servo-regulated system that circulates temperature-controlled water through unique energy-transfer pads adherent to the patient's body) with conventional temperature-control methods. Patients undergoing off-pump coronary artery bypass (OPCAB) surgery were randomized to either the Arctic Sun System alone (AS group), with water temperature servo-regulated to a target of 36.8 degrees C, or to conventional methods (control group; increased room temperature, heated IV fluids, convective forced-air warming system) for the prevention of hypothermia, defined by a temperature below 36 degrees C. Temperature was recorded throughout the operative period, and comparisons were made between groups for both the time and the area under the curve (AUC) for temperatures below 36 degrees C (control group, n = 15). The AS group had significantly less hypothermia than the control group, both in duration of time below 36 degrees C (P = 0.0008) and in the AUC below 36 degrees C (P = 0.002). The Arctic Sun Temperature Management System significantly reduced intraoperative hypothermia during OPCAB surgery. Importantly, this was achieved in the absence of any other temperature-modulating techniques, including IV fluid warming or increases in the ambient operating-room temperature. The Arctic Sun Temperature Management System was more effective than conventional methods in preventing hypothermia during off-pump coronary artery bypass graft surgery.

  16. Flood Hazard Mapping by Applying Fuzzy TOPSIS Method

    Science.gov (United States)

    Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.

    2017-12-01

There are many technical methods for integrating various factors in flood hazard mapping. The purpose of this study is to suggest a methodology for integrated flood hazard mapping using MCDM (Multi-Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, to apply MCDM to assessing flood risk, the maximum flood depth, maximum velocity and maximum travel time are taken as criteria, and the spatial element units as alternatives. A scheme that finds the alternative closest to an ideal value is an appropriate way to assess the flood risk of many element units (alternatives) based on various flood indices. Therefore TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices for flood hazard mapping (maximum flood depth, maximum velocity and maximum travel time) carry uncertainty, since simulation results vary with the flood scenario and topographical conditions. This ambiguity in the indices can cause uncertainty in the flood hazard map. To account for the ambiguity and uncertainty of the criteria, fuzzy logic, which can handle ambiguous expressions, is introduced. In this paper, we produced a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identified the areas with the highest hazard grade on the resulting integrated flood hazard map, and the map was then compared with existing flood risk maps. We also expect that applying the flood hazard mapping methodology suggested in this paper to the production of current flood risk maps will yield maps that also reflect priorities among hazard areas, containing more varied and important information than before. Keywords: flood hazard map; levee-breach analysis; 2D analysis; MCDM; fuzzy TOPSIS
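The crisp TOPSIS core of this approach can be sketched as follows; the fuzzy variant used in the paper replaces crisp ratings with triangular fuzzy numbers, which is omitted here. The map-cell values and weights below are hypothetical:

```python
import numpy as np

def topsis(X, w, larger_is_worse):
    """Closeness of each alternative (map cell) to the most hazardous point.
    X: alternatives x criteria; w: criterion weights; larger_is_worse: True
    where a larger value means more hazard (depth, velocity), False where a
    smaller value does (travel time)."""
    R = X / np.sqrt((X ** 2).sum(axis=0))        # vector normalization
    V = R * w                                    # weighted normalized matrix
    worst = np.where(larger_is_worse, V.max(axis=0), V.min(axis=0))
    best = np.where(larger_is_worse, V.min(axis=0), V.max(axis=0))
    d_worst = np.sqrt(((V - worst) ** 2).sum(axis=1))
    d_best = np.sqrt(((V - best) ** 2).sum(axis=1))
    return d_best / (d_best + d_worst)           # 1 = most hazardous cell

# columns: max depth (m), max velocity (m/s), max travel time (min)
X = np.array([[2.0, 1.5, 30.0],
              [0.5, 0.4, 90.0],
              [1.2, 0.9, 60.0]])
hazard = topsis(X, np.array([0.5, 0.3, 0.2]),
                larger_is_worse=np.array([True, True, False]))
```

Each cell's closeness coefficient then maps directly to a hazard grade, so cells dominated on all three criteria receive scores near 1.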

  17. Applying multi-resolution numerical methods to geodynamics

    Science.gov (United States)

    Davies, David Rhodri

    Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled

  18. Analytic methods in applied probability in memory of Fridrikh Karpelevich

    CERN Document Server

    Suhov, Yu M

    2002-01-01

    This volume is dedicated to F. I. Karpelevich, an outstanding Russian mathematician who made important contributions to applied probability theory. The book contains original papers focusing on several areas of applied probability and its uses in modern industrial processes, telecommunications, computing, mathematical economics, and finance. It opens with a review of Karpelevich's contributions to applied probability theory and includes a bibliography of his works. Other articles discuss queueing network theory, in particular, in heavy traffic approximation (fluid models). The book is suitable

  19. Primary hepatocytes from Arctic char (Salvelinus alpinus) as a relevant Arctic in vitro model for screening contaminants and environmental extracts.

    Science.gov (United States)

    Petersen, Karina; Hultman, Maria T; Tollefsen, Knut Erik

    2017-06-01

    Contaminants find their way to the Arctic through long-range atmospheric transport, transport via ocean currents, and through increased anthropogenic activity. Some of the typical pollutants reaching the Arctic (PAHs, PCBs) are known to induce cytochrome P450 1a (CYP1A) protein expression and ethoxyresorufin-O-deethylase (EROD) activity through the aryl hydrocarbon receptor (AhR). In addition, some endocrine disrupting chemicals (EDCs) such as estrogen mimics (xenoestrogens) have been documented in Arctic areas and they may interfere with natural sexual development and reproduction. In vitro assays that are capable of detecting effects of such pollutants, covering multiple endpoints, are generally based on mammalian or temperate species and there are currently no well-characterized cell-based in vitro assays for effect assessment from Arctic fish species. The present study aimed to develop a high-throughput and multi-endpoint in vitro assay from Arctic char (Salvelinus alpinus) to provide a non-animal (alternative) testing method for an ecologically relevant Arctic species. A method for isolation and exposure of primary hepatocytes from Arctic char for studying the toxic effects and mode of action (MoA) of pollutants was applied and validated. The multi-versatility of the bioassay was assessed by classical biomarker responses such as cell viability (membrane integrity and metabolic activity), phase I detoxification (CYP1A protein expression, EROD activity) and estrogen receptor (ER) mediated vitellogenin (Vtg) protein expression using a selection of model compounds, environmental pollutants and an environmental extract containing a complex mixture of pollutants. Primary hepatocytes from Arctic char were successfully isolated and culture conditions optimized to identify the most optimal assay conditions for covering multiple endpoints. 
The hepatocytes responded with concentration-dependent responses to all of the model compounds, most of the environmental pollutants

  20. Review on finite element method | Erhunmwun | Journal of Applied ...

    African Journals Online (AJOL)

Journal of Applied Sciences and Environmental Management, Vol 21, No 5 (2017).

  1. Acti-Glide: a simple method of applying compression hosiery.

    Science.gov (United States)

    Hampton, Sylvie

    2005-05-01

    Compression hosiery is often worn to help prevent aching legs and swollen ankles, to prevent ulceration, to treat venous ulceration or to treat varicose veins. However, patients and nurses may experience problems applying hosiery and this can lead to non-concordance in patients and possibly reluctance from nurses to use compression hosiery. A simple solution to applying firm hosiery is Acti-Glide from Activa Healthcare.

  2. Dose rate reduction method for NMCA applied BWR plants

    International Nuclear Information System (INIS)

    Nagase, Makoto; Aizawa, Motohiro; Ito, Tsuyoshi; Hosokawa, Hideyuki; Varela, Juan; Caine, Thomas

    2012-09-01

    BRAC (BWR Radiation Assessment and Control) dose rate is used as an indicator of the incorporation of activated corrosion by-products into BWR recirculation piping, which is known to be a significant contributor to the dose rate received by workers during refueling outages. In order to reduce radiation exposure of the workers during the outage, it is desirable to keep BRAC dose rates as low as possible. After HWC was adopted to reduce IGSCC, a BRAC dose rate increase was observed in many plants. As a countermeasure to these rapid dose rate increases under HWC conditions, Zn injection was widely adopted in the United States and Europe, resulting in a reduction of BRAC dose rates. However, BRAC dose rates in several plants remain high, prompting the industry to continue to investigate methods to achieve further reductions. In recent years a large portion of the BWR fleet has adopted NMCA (NobleChem™) to enhance the hydrogen injection effect to suppress SCC. After NMCA, especially OLNC (On-Line NobleChem™), BRAC dose rates were observed to decrease. In some OLNC-applied BWR plants this reduction was observed year after year, reaching a new, reduced equilibrium level. These dose rate reduction trends suggest that further dose reduction might be obtained by the combination of Pt and Zn injection. Laboratory experiments and in-plant tests were therefore carried out to evaluate the effect of Pt and Zn on Co-60 deposition behaviour. Firstly, laboratory experiments were conducted to study the effect of noble metal deposition on Co deposition on stainless steel surfaces. Polished type 316 stainless steel coupons were prepared and some of them were OLNC treated in the test loop before the Co deposition test. Water chemistry conditions to simulate HWC were as follows: dissolved oxygen, hydrogen and hydrogen peroxide were below 5 ppb, 100 ppb and 0 ppb (no addition), respectively. Zn was injected to target a concentration of 5 ppb. The test was conducted up to 1500 hours at 553 K. Test

  3. Applied Research of Decision Tree Method on Football Training

    Directory of Open Access Journals (Sweden)

    Liu Jinhui

    2015-01-01

    Full Text Available This paper first presents an analysis of the decision tree, and then offers a further analysis of CLS based on it. As CLS contains the most fundamental and most primitive decision-making idea, it provides the basis for decision tree construction. Because CLS leaves certain details unspecified, the ID3 decision tree algorithm is introduced to supply them. ID3 applies information gain as its attribute selection metric, providing a reference for seeking the optimal segmentation point. Finally, the ID3 algorithm is applied to football training; verification shows the algorithm to be effective and reasonable.
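The information-gain criterion that ID3 uses for attribute selection can be sketched in a few lines of Python. The toy training table below is invented for illustration (the record does not give its football-training attributes); only the entropy and gain computations follow the standard ID3 definitions:

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy (in bits) of a list of class labels
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    # Entropy reduction achieved by splitting the examples on one attribute
    by_value = {}
    for row, label in zip(rows, labels):
        by_value.setdefault(row[attr_index], []).append(label)
    n = len(labels)
    remainder = sum(len(sub) / n * entropy(sub) for sub in by_value.values())
    return entropy(labels) - remainder

# Invented toy data: (weather, opponent strength) -> hold outdoor training?
rows = [("sunny", "strong"), ("sunny", "weak"),
        ("rainy", "strong"), ("rainy", "weak")]
labels = ["yes", "yes", "no", "no"]

gain_weather = information_gain(rows, labels, 0)   # splits the classes perfectly
gain_opponent = information_gain(rows, labels, 1)  # carries no information here
```

Splitting on the attribute with the larger gain (here the weather attribute) is exactly the choice ID3 makes at each node.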

  4. Muon radiography method for fundamental and applied research

    Science.gov (United States)

    Alexandrov, A. B.; Vladymyrov, M. S.; Galkin, V. I.; Goncharova, L. A.; Grachev, V. M.; Vasina, S. G.; Konovalova, N. S.; Malovichko, A. A.; Managadze, A. K.; Okat'eva, N. M.; Polukhina, N. G.; Roganova, T. M.; Starkov, N. I.; Tioukov, V. E.; Chernyavsky, M. M.; Shchedrina, T. V.

    2017-12-01

    This paper focuses on the basic principles of the muon radiography method, reviews the major muon radiography experiments, and presents the first results in Russia obtained by the authors using this method based on emulsion track detectors.

  5. New Method for Tuning Robust Controllers Applied to Robot Manipulators

    Directory of Open Access Journals (Sweden)

    Gerardo Romero

    2012-11-01

    Full Text Available This paper presents a methodology to select the parameters of a nonlinear controller using Linear Matrix Inequalities (LMI). The controller is applied to a robotic manipulator to improve its robustness. This type of dynamic system lends itself to a robust control law, because such a law largely depends on the mathematical model of the system, which in most cases cannot be completely precise. The discrepancy between the dynamic behaviour of the robot and its mathematical model is taken into account by including a nonlinear term that represents the model's uncertainty. The controller's parameters are selected with two purposes: to guarantee the asymptotic stability of the closed-loop system while taking into account the uncertainty, and to increase its robustness margin. The results are validated with numerical simulations for a particular case study; these are then compared with previously published results to prove a better controller performance.

  6. Waste classification and methods applied to specific disposal sites

    International Nuclear Information System (INIS)

    Rogers, V.C.

    1979-01-01

    An adequate definition of the classes of radioactive wastes is necessary for regulating the disposal of radioactive wastes. A classification system is proposed in which wastes are classified according to characteristics relating to their disposal. Several specific sites are analyzed with the methodology in order to gain insights into the classification of radioactive wastes. Also presented is an analysis of ocean dumping as it applies to waste classification. 5 refs

  7. Review on finite element method | Erhunmwun | Journal of Applied ...

    African Journals Online (AJOL)

    ... finite elements, so that it is possible to systematically construct the approximation functions needed in a variational or weighted-residual approximation of the solution of a problem over each element. Keywords: Weak Formulation, Discretisation, Numerical methods, Finite element method, Global equations, Nodal solution ...

  8. The flow curvature method applied to canard explosion

    Energy Technology Data Exchange (ETDEWEB)

    Ginoux, Jean-Marc [Laboratoire Protee, IUT de Toulon, Universite du Sud, BP 20132, F-83957 La Garde cedex (France); Llibre, Jaume, E-mail: ginoux@univ-tln.fr, E-mail: jllibre@mat.uab.cat [Departament de Matematiques, Universitat Autonoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)

    2011-11-18

    The aim of this work is to establish that the bifurcation parameter value leading to a canard explosion in dimension 2 obtained by the so-called geometric singular perturbation method can be found according to the flow curvature method. This result will be then exemplified with the classical Van der Pol oscillator. (paper)

  9. The flow curvature method applied to canard explosion

    Science.gov (United States)

    Ginoux, Jean-Marc; Llibre, Jaume

    2011-11-01

    The aim of this work is to establish that the bifurcation parameter value leading to a canard explosion in dimension 2 obtained by the so-called geometric singular perturbation method can be found according to the flow curvature method. This result will be then exemplified with the classical Van der Pol oscillator.

  10. Literature Review of Applying Visual Method to Understand Mathematics

    Directory of Open Access Journals (Sweden)

    Yu Xiaojuan

    2015-01-01

    Full Text Available As a new method to understand mathematics, visualization offers a new way of understanding mathematical principles and phenomena via image thinking and geometric explanation. It aims to deepen the understanding of the nature of concepts or phenomena and enhance the cognitive ability of learners. This paper collates and summarizes the application of this visual method in the understanding of mathematics. It also makes a literature review of the existing research, especially with a visual demonstration of Euler’s formula, introduces the application of this method in solving relevant mathematical problems, and points out the differences and similarities between the visualization method and the numerical-graphic combination method, as well as matters needing attention for its application.

  11. Methodical Aspects of Applying Strategy Map in an Organization

    Directory of Open Access Journals (Sweden)

    Piotr Markiewicz

    2013-06-01

    Full Text Available One of the important aspects of strategic management is the instrumental aspect, embodied in a rich set of methods and techniques used at particular stages of the strategic management process. The object of interest in this study is the development of views on, and the implementation of, strategy as an element of strategic management, together with instruments in the form of methods and techniques. A commonly used method in strategy implementation and in measuring progress is the Balanced Scorecard (BSC). The method was created as a result of the project “Measuring performance in the Organization of the future” of 1990, completed by a team under the supervision of David Norton (Kaplan, Norton 2002). The method was used first of all to evaluate performance by decomposing a strategy into four perspectives and identifying measures of achievement. In the mid-1990s the method was improved by enriching it with a strategy map, in which the process of transition of intangible assets into tangible financial effects is reflected (Kaplan, Norton 2001). A strategy map enables the illustration of cause-and-effect relationships between processes in all four perspectives and performance indicators at the level of the organization. The purpose of this study is to present the methodological conditions of using strategy maps in the strategy implementation process in organizations of different natures.

  12. Diagrammatic Monte Carlo method as applied to the polaron problem

    International Nuclear Information System (INIS)

    Mishchenko, A.S.

    2005-01-01

    Exact numerical solution methods for the problem of a few particles interacting with one another and with several bosonic excitation modes are presented. The diagrammatic Monte Carlo method allows the exact calculation of the Green function, and the stochastic optimization technique provides an analytic continuation. Results unobtainable by conventional methods are discussed, including the properties of excited states in the self-trapping phenomenon, the optical spectra of polarons in all coupling regimes, the validity analysis of the exciton models, and the photoemission spectra of a phonon-coupled hole [ru

  13. Applying a life cycle approach to project management methods

    OpenAIRE

    Biggins, David; Trollsund, F.; Høiby, A.L.

    2016-01-01

    Project management is increasingly important to organisations because projects are the method by which organisations respond to their environment. A key element within project management is the standards and methods that are used to control and conduct projects, collectively known as project management methods (PMMs) and exemplified by PRINCE2, the Project Management Institute’s and the Association for Project Management’s Bodies of Knowledge (PMBOK and APMBOK). The purpose of t...

  14. Method for curing alkyd resin compositions by applying ionizing radiation

    International Nuclear Information System (INIS)

    Watanabe, T.; Murata, K.; Maruyama, T.

    1975-01-01

    An alkyd resin composition is prepared by dissolving a polymerizable alkyd resin having an oil length of from 10 to 50 percent in a vinyl monomer. The polymerizable alkyd resin is obtained by a half-esterification reaction of an acid anhydride having a polymerizable unsaturated group and an alkyd resin modified with conjugated unsaturated oil having at least one reactive hydroxyl group per molecule. The alkyd resin composition thus obtained is coated on an article, and ionizing radiation is applied to the article to cure the coated film thereon. (U.S.)

  15. Arctic Security

    DEFF Research Database (Denmark)

    Wang, Nils

    2013-01-01

    The inclusion of China, India, Japan, Singapore and Italy as permanent observers in the Arctic Council has increased the international status of this forum significantly. This chapter aims to explain the background for the increased international interest in the Arctic region through an analysis...... of the general security situation and to identify both the explicit and the implicit agendas of the primary state actors. The region contains all the ingredients for confrontation and conflict but the economical potential for all the parties concerned creates a general interest in dialogue and cooperation...

  16. Spectral methods applied to fluidized bed combustors. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Brown, R.C.; Christofides, N.J.; Junk, K.W.; Raines, T.S.; Thiede, T.D.

    1996-08-01

    The objective of this project was to develop methods for characterizing fuels and sorbents from time-series data obtained during transient operation of fluidized bed boilers. These methods aimed at determining time constants for devolatilization and char burnout from carbon dioxide (CO2) profiles, and time constants for the calcination and sulfation processes from CO2 and sulfur dioxide (SO2) profiles.
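As a hedged sketch of how such a time constant might be extracted (the report does not give its actual fitting procedure), a first-order response can be linearized and fitted by least squares; the CO2 profile below is synthetic, not boiler data:

```python
import math

def fit_time_constant(times, conc, c_inf):
    # Estimate tau for a first-order rise c(t) = c_inf * (1 - exp(-t/tau))
    # via the linearization ln(1 - c/c_inf) = -t/tau, fitted through the origin
    xs, ys = [], []
    for t, c in zip(times, conc):
        if 0.0 < c < c_inf:
            xs.append(t)
            ys.append(math.log(1.0 - c / c_inf))
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return -1.0 / slope

# Synthetic CO2 evolution profile with a true time constant of 12 s
tau_true = 12.0
times = [float(t) for t in range(1, 61)]
conc = [1.0 * (1.0 - math.exp(-t / tau_true)) for t in times]
tau_est = fit_time_constant(times, conc, c_inf=1.0)
```

On noiseless data the log-linear fit recovers the time constant exactly; on real transient data the asymptote c_inf would itself have to be estimated.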

  17. Apply of torque method at rationalization of work

    Directory of Open Access Journals (Sweden)

    Bandurová Miriam

    2001-03-01

    Full Text Available The aim of the study was to analyse the time consumption of the cylinder-grinder profession by the torque method. The torque observation method is used to detect the types and magnitudes of time losses, the share of the individual types of time consumption, and the causes of time losses. In this way it is possible to determine the coefficients of employment and recovery of the workers in an organizational unit. The advantages of a torque survey are the low cost of acquiring the information and its ease for both the worker and the observer, who is easily trained; it is also a mentally acceptable method for the subjects of the survey. The torque surveys detected reserves in the activity of the cylinder grinders: lost time amounts to up to 8% of working time. With a 5-shift service and an average shift staffing of 4.4 grinders (from the statistical information of the service), the losses at cylinder grinding amount to 1.48 workers for the whole centre. On the basis of this information it was recommended to cancel one job position (cylinder grinder) and reduce the staff by one grinder. Further positions cannot be cancelled, because the cylinder grindery must adapt to the grinding line in the number of polished cylinders per shift, and the stock of semi-finished polished cylinders cannot be kept high because of frequent changes in the grinding area and assortment. This contribution confirms the usefulness of the torque method as one of the methods used in job rationalization.
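Reading the "torque method" as a snapshot (work-sampling) survey, which is an assumption on our part, the coefficient of employment it mentions is simply the share of random observations that find the worker working. A minimal sketch with simulated observations:

```python
import random

def utilization_from_snapshots(observations):
    # Work-sampling estimate: fraction of random snapshots in which the
    # worker was actually working (the coefficient of employment)
    return sum(1 for s in observations if s == "working") / len(observations)

rng = random.Random(7)
# Simulated tour of 500 random snapshot observations of one grinder,
# assuming the true idle share is 8% (the loss level reported above)
obs = ["idle" if rng.random() < 0.08 else "working" for _ in range(500)]
coeff = utilization_from_snapshots(obs)
```

The estimate converges on the true utilization as the number of random tours grows, which is why the method is cheap: no continuous time study is needed.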

  18. Thermoluminescence as a dating method applied to the Morocco Neolithic

    International Nuclear Information System (INIS)

    Ousmoi, M.

    1989-09-01

    Thermoluminescence is an absolute dating method well adapted to the study of burnt clays and thus of the prehistoric ceramics belonging to the Neolithic period. The purpose of this study is to establish a first absolute chronology of the northern Morocco Neolithic between 3000 and 7000 years before present, together with some improvements to TL dating. The first part of the thesis contains some hypotheses about the Morocco Neolithic and some problems to solve. We then present the TL dating method along with new procedures to improve the quality of the results, such as the shift of quartz TL peaks or the crushing of samples. The methods employed on 24 samples belonging to various civilisations were the quartz inclusion method and the fine grain technique. For the dosimetry, several methods were used: determination of the K2O content, alpha counting, and site dosimetry using TL dosimeters and a scintillation counter. The results bring some interesting answers to the archaeological questions and improve the chronological scheme of the northern Morocco Neolithic: development of the old cardial Neolithic in the north, and perhaps in the centre of Morocco (the region of Rabat), between 5500 and 7000 before present; development of the recent middle Neolithic around 4000-5000 before present, with a protocampaniform (Skhirat) slightly older than the campaniform recognized in the south of Spain; and development of the Bronze Age around 2000-4000 before present [fr
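The age equation underlying TL dating is a one-liner: accumulated (equivalent) dose divided by the annual dose rate, whose components the dosimetry methods above are designed to measure. The numbers below are hypothetical, not taken from the thesis:

```python
def tl_age_years(equivalent_dose_gy, annual_dose_gy_per_year):
    # Core TL dating relation: age = accumulated (equivalent) dose
    # divided by the annual dose rate absorbed by the sample
    return equivalent_dose_gy / annual_dose_gy_per_year

# Hypothetical sherd: annual dose built from alpha, beta (from K2O),
# gamma and cosmic contributions, all in Gy/year
annual_dose = 1.2e-3 + 1.0e-3 + 0.6e-3 + 0.2e-3
age = tl_age_years(18.0, annual_dose)   # roughly 6000 years
```

The uncertainty of a TL date is dominated by the uncertainty of the annual dose term, which is why the thesis compares several independent dosimetry methods.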

  19. Evaluation of Controller Tuning Methods Applied to Distillation Column Control

    DEFF Research Database (Denmark)

    Nielsen, Kim; W. Andersen, Henrik; Kümmel, Professor Mogens

    A frequency domain approach is used to compare the nominal performance and robustness of dual composition distillation column control tuned according to Ziegler-Nichols (ZN) and Biggest Log Modulus Tuning (BLT) for three binary distillation columns, WOBE, LUVI and TOFA. The scope...... of this is to examine whether ZN and BLT design yield satisfactory control of distillation columns. Further, PI controllers are tuned according to a proposed multivariable frequency domain method. A major conclusion is that the ZN tuned controllers yield undesired overshoot and oscillation and poor stability robustness...... properties. BLT tuning removes the overshoot and oscillation, however, at the expense of a more sluggish response. We conclude that if a simple control design is to be used, the BLT method should be preferred to the ZN method. The frequency domain design approach presented yields a more proper trade...
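The two tuning rules being compared can be sketched directly. The ultimate gain Ku and ultimate period Pu below are hypothetical, and in practice the BLT detuning factor F is iterated until the biggest log modulus meets its target rather than being fixed by hand:

```python
def zn_pi(ku, pu):
    # Ziegler-Nichols ultimate-cycle PI settings:
    # Kp = 0.45 * Ku, Ti = Pu / 1.2
    return 0.45 * ku, pu / 1.2

def blt_detune(kp, ti, f):
    # BLT-style detuning: divide the ZN gain by a common factor F (> 1)
    # and multiply the integral time by F, trading speed for robustness
    return kp / f, ti * f

kp, ti = zn_pi(ku=2.0, pu=10.0)            # hypothetical relay-test results
kp_blt, ti_blt = blt_detune(kp, ti, f=2.5) # hypothetical detuning factor
```

The sketch makes the paper's observed trade-off visible: BLT keeps the ZN structure but deliberately slows the loop down, removing overshoot at the cost of sluggishness.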

  20. Modal method for crack identification applied to reactor recirculation pump

    International Nuclear Information System (INIS)

    Miller, W.H.; Brook, R.

    1991-01-01

    Nuclear reactors have been operating and producing useful electricity for many years. Within the last few years, several plants have found cracks in the reactor coolant pump shaft near the thermal barrier. The modal method and results described herein show the analytical results of using a Modal Analysis test method to determine the presence, size, and location of a shaft crack. The authors have previously demonstrated that the test method can analytically and experimentally identify shaft cracks as small as five percent (5%) of the shaft diameter. Due to small differences in material property distribution, the attempt to identify cracks smaller than 3% of the shaft diameter has been shown to be impractical. The rotor dynamics model includes a detailed motor rotor, external weights and inertias, and realistic total support stiffness. Results of the rotor dynamics model have been verified through a comparison with on-site vibration test data

  1. Boron autoradiography method applied to the study of steels

    International Nuclear Information System (INIS)

    Gugelmeier, R.; Barcelo, G.N.; Boado, J.H.; Fernandez, C.

    1986-01-01

    The boron state contained in the steel microstructure is determined. Neutron autoradiography is used, permitting boron distribution images to be obtained together with additional information that is difficult to acquire by other methods. The application of the method is described, based on the neutron irradiation of a polished steel sample, over which a cellulose nitrate sheet or other appropriate material is fixed to constitute the detector. The particles generated by the neutron-boron interaction affect the detector sheet, which is subsequently revealed with a chemical treatment and can be observed under the optical microscope. In the case of materials used for the construction of nuclear reactors, special attention must be given to the presence of boron, since owing to its exceptionally high capacity for neutron absorption, even the smallest quantities of boron acquire importance. The adaptation of the method to metallurgical problems allows a correlation to be obtained between the boron distribution images and the material's microstructure. (M.E.L.) [es

  2. Diagrammatic Monte Carlo method as applied to the polaron problems

    International Nuclear Information System (INIS)

    Mishchenko, Andrei S

    2005-01-01

    Numerical methods whereby exact solutions to the problem of a few particles interacting with one another and with several bosonic excitation branches are presented. The diagrammatic Monte Carlo method allows the exact calculation of the Matsubara Green function, and the stochastic optimization technique provides an approximation-free analytic continuation. In this review, results unobtainable by conventional methods are discussed, including the properties of excited states in the self-trapping phenomenon, the optical spectra of polarons in all coupling regimes, the validity range analysis of the Frenkel and Wannier approximations relevant to the exciton, and the peculiarities of photoemission spectra of a lattice-coupled hole in a Mott insulator. (reviews of topical problems)

  3. DAKOTA reliability methods applied to RAVEN/RELAP-7.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Mandelli, Diego; Rabiti, Cristian; Alfonsi, Andrea

    2013-09-01

    This report summarizes the results of a NEAMS project focused on the use of reliability methods within the RAVEN and RELAP-7 software framework for assessing failure probabilities as part of probabilistic risk assessment for nuclear power plants. RAVEN is a software tool under development at the Idaho National Laboratory that acts as the control logic driver and post-processing tool for the newly developed thermal-hydraulic code RELAP-7. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. Reliability methods are algorithms which transform the uncertainty problem into an optimization problem to solve for the failure probability, given uncertainty on problem inputs and a failure threshold on an output response. The goal of this work is to demonstrate the use of reliability methods in Dakota with RAVEN/RELAP-7. These capabilities are demonstrated on a Station Blackout analysis of a simplified Pressurized Water Reactor (PWR).
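Reliability methods accelerate what crude Monte Carlo does by brute force: estimating the probability that an output response exceeds a failure threshold under input uncertainty. As a baseline sketch (the response function and uncertainty model here are invented, not RELAP-7 outputs):

```python
import random

def failure_probability(response, sample_input, threshold, n=100_000, rng=None):
    # Crude Monte Carlo estimate of P(response(x) > threshold) under the
    # input uncertainty encoded by sample_input
    rng = rng or random.Random(0)
    failures = sum(1 for _ in range(n)
                   if response(sample_input(rng)) > threshold)
    return failures / n

# Hypothetical response surface: a peak temperature that depends linearly
# on one uncertain, standard-normal multiplier
response = lambda x: 1000.0 + 200.0 * x
sample = lambda rng: rng.gauss(0.0, 1.0)
p_fail = failure_probability(response, sample, threshold=1400.0)
```

A reliability method such as FORM would instead search for the most probable failure point by optimization, which matters when each response evaluation is an expensive thermal-hydraulic run rather than a cheap lambda.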

  4. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    Full Text Available We extend the nonstandard finite difference method of solution to the study of pharmacokinetic–pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
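As an illustration of the NSFD idea on the simplest PK building block (not the authors' actual PK-PD system), the one-compartment elimination model dC/dt = -kC can be discretized with the denominator function phi(h) = (1 - exp(-kh))/k in place of the raw step h; for this linear model the resulting scheme reproduces the exact solution at the grid points for any step size:

```python
import math

def nsfd_decay(c0, k, h, steps):
    # Nonstandard finite difference scheme for dC/dt = -k*C:
    # C_{n+1} = C_n - k * C_n * phi(h), with phi(h) = (1 - exp(-k*h)) / k.
    # Algebraically this gives C_{n+1} = C_n * exp(-k*h), i.e. the exact
    # solution at the grid points, even for coarse steps where forward
    # Euler (phi = h) would be inaccurate or unstable.
    phi = (1.0 - math.exp(-k * h)) / k
    c = c0
    for _ in range(steps):
        c = c - k * c * phi
    return c

c_num = nsfd_decay(c0=100.0, k=0.3, h=2.0, steps=5)   # deliberately coarse h
c_exact = 100.0 * math.exp(-0.3 * 2.0 * 5)
```

The dynamic consistency claimed in the abstract is visible here in miniature: the NSFD iterate stays positive and decaying for every h > 0, which the standard Euler scheme cannot guarantee.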

  5. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course
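The need for variance reduction in deep penetration can be illustrated with the simplest such device, importance sampling. The exponential "path length" model below is a toy stand-in for a transport calculation, not code from the course:

```python
import math
import random

def analog_estimate(a, n, rng):
    # Analog Monte Carlo: score 1 whenever a unit-mean exponential
    # "path length" exceeds the deep-penetration depth a
    return sum(1 for _ in range(n) if rng.expovariate(1.0) > a) / n

def importance_estimate(a, n, rng, biased_rate=0.2):
    # Importance sampling: draw from a stretched exponential that reaches
    # depth a far more often, and weight each score by the likelihood
    # ratio f(x)/g(x) of the true to the biased density
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(biased_rate)
        if x > a:
            total += math.exp(-x) / (biased_rate * math.exp(-biased_rate * x))
    return total / n

rng = random.Random(42)
a = 10.0                                    # true tail probability exp(-10)
p_analog = analog_estimate(a, 20_000, rng)  # almost all histories score zero
p_is = importance_estimate(a, 20_000, rng)  # close to exp(-10), low variance
```

With the same number of histories, the analog estimator sees at most a handful of scoring events while the weighted estimator is already tight, which is the essence of why deep-penetration calculations cannot do without biasing.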

  6. Robustness of Modal Parameter Estimation Methods Applied to Lightweight Structures

    DEFF Research Database (Denmark)

    Dickow, Kristoffer Ahrens; Kirkegaard, Poul Henning; Andersen, Lars Vabbersgaard

    2013-01-01

    of two parameter estimation methods built into the commercial modal testing software B&K Pulse Reflex Advanced Modal Analysis. The investigations are done by means of frequency response functions generated from a finite-element model and subjected to artificial noise before being analyzed with Pulse Reflex.... The ability to handle closely spaced modes and broad frequency ranges is investigated for a numerical model of a lightweight junction under different signal-to-noise ratios. The selection of both excitation points and response points is discussed. It is found that both the Rational Fraction Polynomial-Z method

  7. Efficient electronic structure methods applied to metal nanoparticles

    DEFF Research Database (Denmark)

    Larsen, Ask Hjorth

    of efficient approaches to density functional theory and the application of these methods to metal nanoparticles. We describe the formalism and implementation of localized atom-centered basis sets within the projector augmented wave method. Basis sets allow for a dramatic increase in performance compared...... and jumps in Fermi level near magic numbers can lead to alkali-like or halogen-like behaviour when main-group atoms adsorb onto gold clusters. A non-self-consistent Newns-Anderson model is used to more closely study the chemisorption of main-group atoms on magic-number Au clusters. The behaviour at magic

  8. Applying the Priority Distribution Method for Employee Motivation

    Directory of Open Access Journals (Sweden)

    Jonas Žaptorius

    2013-09-01

    Full Text Available In an age of increasing healthcare expenditure, the efficiency of healthcare services is a burning issue. This paper deals with the creation of a performance-related remuneration system which would meet requirements for efficiency and sustainable quality. In real-world scenarios, it is difficult to create an objective and transparent employee performance evaluation model dealing with both qualitative and quantitative criteria. To achieve these goals, the use of decision support methods is suggested and analysed. A systematic approach to the practical application of the Priority Distribution Method in healthcare provider organisations is created and described.

  9. Non-perturbative methods applied to multiphoton ionization

    International Nuclear Information System (INIS)

    Brandi, H.S.; Davidovich, L.; Zagury, N.

    1982-09-01

    The use of non-perturbative methods in the treatment of atomic ionization is discussed. Particular attention is given to schemes of the type proposed by Keldysh, where multiphoton ionization and tunnel auto-ionization occur for high-intensity fields. These methods are shown to correspond to a certain type of expansion of the T-matrix in the intra-atomic potential; in this manner a criterion concerning the range of application of these non-perturbative schemes is suggested. A brief comparison between the ionization rates of atoms in the presence of linearly and circularly polarized light is presented. (Author) [pt

  10. Tutte’s barycenter method applied to isotopies

    NARCIS (Netherlands)

    Colin de Verdière, Éric; Pocchiola, Michel; Vegter, Gert

    2003-01-01

    This paper is concerned with applications of Tutte’s barycentric embedding theorem. It presents a method for building isotopies of triangulations in the plane, based on Tutte’s theorem and the computation of equilibrium stresses of graphs by Maxwell–Cremona’s theorem; it also provides a
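Tutte's barycenter method itself is short to state: pin the outer face to a convex polygon and move every interior vertex to the barycenter of its neighbors. The wheel-graph example below is ours, not the paper's:

```python
import math

def tutte_embedding(neighbors, boundary, iters=200):
    # Tutte's barycenter method: fix the outer face on a convex polygon,
    # then relax each interior vertex to the average of its neighbors.
    # For a 3-connected planar graph the fixed point is a planar
    # straight-line embedding (Tutte's theorem).
    n = len(neighbors)
    pos = [(0.0, 0.0)] * n
    for i, v in enumerate(boundary):          # outer face -> regular polygon
        a = 2.0 * math.pi * i / len(boundary)
        pos[v] = (math.cos(a), math.sin(a))
    fixed = set(boundary)
    interior = [v for v in range(n) if v not in fixed]
    for _ in range(iters):                    # Gauss-Seidel relaxation
        for v in interior:
            nbrs = neighbors[v]
            pos[v] = (sum(pos[u][0] for u in nbrs) / len(nbrs),
                      sum(pos[u][1] for u in nbrs) / len(nbrs))
    return pos

# Toy wheel graph: square outer face 0-1-2-3 with a hub vertex 4
neighbors = {0: [1, 3, 4], 1: [0, 2, 4], 2: [1, 3, 4], 3: [0, 2, 4],
             4: [0, 1, 2, 3]}
pos = tutte_embedding(neighbors, boundary=[0, 1, 2, 3])   # hub -> centre
```

A production implementation would solve the barycentric equations as one sparse linear system rather than by iteration, but the relaxation form makes the "equilibrium of forces" reading of the theorem explicit.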

  11. Inversion method applied to the rotation curves of galaxies

    Science.gov (United States)

    Márquez-Caicedo, L. A.; Lora-Clavijo, F. D.; Sanabria-Gómez, J. D.

    2017-07-01

    We used simulated annealing, Monte Carlo and genetic algorithm methods for matching both numerical data of density and velocity profiles in some low surface brightness galaxies with theoretical models of Boehmer-Harko, Navarro-Frenk-White and Pseudo Isothermal Profiles for galaxies with dark matter halos. We found that the Navarro-Frenk-White model does not fit at all, in contrast with the other two models, which fit very well. Inversion methods have been widely used in various branches of science including astrophysics (Charbonneau 1995, ApJS, 101, 309). In this work we have used three different parametric inversion methods (Monte Carlo, Genetic Algorithm and Simulated Annealing) in order to determine the best fit of the observed data of the density and velocity profiles of a set of low surface brightness galaxies (De Block et al. 2001, ApJ, 122, 2396) with three models of galaxies containing dark matter. The parameters adjusted by the inversion methods were the central density and a characteristic distance in the Boehmer-Harko BH (Boehmer & Harko 2007, JCAP, 6, 25), Navarro-Frenk-White NFW (Navarro et al. 2007, ApJ, 490, 493) and Pseudo Isothermal Profile PI (Robles & Matos 2012, MNRAS, 422, 282) models. The results obtained showed that the BH and PI profile dark matter galaxies fit very well both the density and the velocity profiles; in contrast, the NFW model did not fit the profiles well in any analyzed galaxy.
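A minimal simulated-annealing fit of the pseudo-isothermal (PI) density profile, one of the three inversion methods compared here, can be sketched as follows; the synthetic "galaxy" data and all numerical settings are invented for illustration:

```python
import math
import random

def pi_density(r, rho0, rc):
    # Pseudo-isothermal dark-halo density profile
    return rho0 / (1.0 + (r / rc) ** 2)

def chi2(params, radii, data):
    # Sum-of-squares misfit between model and observed density profile
    rho0, rc = params
    return sum((pi_density(r, rho0, rc) - d) ** 2 for r, d in zip(radii, data))

def anneal(radii, data, start, steps=30_000, t0=0.3, rng=None):
    # Minimal simulated annealing over (rho0, rc) with linear cooling:
    # accept every improvement, accept worsenings with Metropolis probability
    rng = rng or random.Random(0)
    cur = best = start
    f_cur = f_best = chi2(cur, radii, data)
    for i in range(steps):
        t = t0 * (1.0 - i / steps) + 1e-12
        cand = (abs(cur[0] + rng.gauss(0.0, 0.05)),   # keep parameters positive
                abs(cur[1] + rng.gauss(0.0, 0.05)))
        f_cand = chi2(cand, radii, data)
        if f_cand < f_cur or rng.random() < math.exp(-(f_cand - f_cur) / t):
            cur, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = cur, f_cur
    return best

radii = [0.5 * i for i in range(1, 21)]
data = [pi_density(r, 2.0, 1.5) for r in radii]   # noiseless synthetic halo
rho0_fit, rc_fit = anneal(radii, data, start=(1.0, 1.0))
```

The genetic-algorithm and Monte Carlo variants differ only in how candidate (rho0, rc) pairs are proposed; the chi-square objective being minimized is the same.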

  12. E-LEARNING METHOD APPLIED TO TECHNICAL GRAPHICS SUBJECTS

    Directory of Open Access Journals (Sweden)

    GOANTA Adrian Mihai

    2011-11-01

    Full Text Available The paper presents some of the author’s endeavors in creating video courses for the students from the Faculty of Engineering in Braila related to subjects involving technical graphics . There are also mentioned the steps taken in completing the method and how to achieve a feedback on the rate of access to these types of courses by the students.

  13. Some methods of computational geometry applied to computer graphics

    NARCIS (Netherlands)

    Overmars, M.H.; Edelsbrunner, H.; Seidel, R.

    1984-01-01

    Abstract Windowing a two-dimensional picture means to determine those line segments of the picture that are visible through an axis-parallel window. A study of some algorithmic problems involved in windowing a picture is offered. Some methods from computational geometry are exploited to store the

  14. [Synchrotron-based characterization methods applied to ancient materials (I)].

    Science.gov (United States)

    Anheim, Étienne; Thoury, Mathieu; Bertrand, Loïc

    2015-12-01

    This article aims at presenting the first results of a transdisciplinary research programme in heritage sciences. Based on the growing use and on the potentialities of micro- and nano-characterization synchrotron-based methods to study ancient materials (archaeology, palaeontology, cultural heritage, past environments), this contribution will identify and test conceptual and methodological elements of convergence between physicochemical and historical sciences.

  15. About the Finite Element Method Applied to Thick Plates

    Directory of Open Access Journals (Sweden)

    Mihaela Ibănescu

    2006-01-01

    Full Text Available The present paper approaches the analysis of plates subjected to transverse loads, when the shear force and the actual boundary conditions are considered, by using the Finite Element Method. The isoparametric finite elements create real facilities in formulating the problems and great possibilities in creating adequate computer programs.

  16. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)


    with MATLAB Simulink Power System Toolbox. The simulation study results of this novel technique, compared to other similar methods, are found quite satisfactory, assuring good filtering characteristics and high system stability. Keywords: Artificial Neural Networks (ANN), p-q theory, (SAPF), Harmonics, Total Harmonic ...

  17. Data Mining Methods Applied to Flight Operations Quality Assurance Data: A Comparison to Standard Statistical Methods

    Science.gov (United States)

    Stolzer, Alan J.; Halford, Carl

    2007-01-01

    In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
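
    The correlation coefficients quoted above can be reproduced, for any of the models, by correlating predicted against observed fuel flow. A minimal pure-Python sketch of that metric (function name and data are illustrative, not from the study):

```python
import math

def pearson_r(pred, obs):
    # Pearson correlation between model predictions and observed fuel flow
    n = len(pred)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (sp * so)
```

    Note that an r of .92 corresponds to roughly 85% variance explained (r²), which is how the tree-based results line up with the earlier regression models.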

  18. Theoretical and applied aerodynamics and related numerical methods

    CERN Document Server

    Chattot, J J

    2015-01-01

    This book covers classical and modern aerodynamics, theories and related numerical methods, for senior and first-year graduate engineering students, including: -The classical potential (incompressible) flow theories for low speed aerodynamics of thin airfoils and high and low aspect ratio wings. - The linearized theories for compressible subsonic and supersonic aerodynamics. - The nonlinear transonic small disturbance potential flow theory, including supercritical wing sections, the extended transonic area rule with lift effect, transonic lifting line and swept or oblique wings to minimize wave drag. Unsteady flow is also briefly discussed. Numerical simulations based on relaxation mixed-finite difference methods are presented and explained. - Boundary layer theory for all Mach number regimes and viscous/inviscid interaction procedures used in practical aerodynamics calculations. There are also four chapters covering special topics, including wind turbines and propellers, airplane design, flow analogies and h...

  19. Generic Methods for Formalising Sequent Calculi Applied to Provability Logic

    Science.gov (United States)

    Dawson, Jeremy E.; Goré, Rajeev

    We describe generic methods for reasoning about multiset-based sequent calculi which allow us to combine shallow and deep embeddings as desired. Our methods are modular, permit explicit structural rules, and are widely applicable to many sequent systems, even to other styles of calculi like natural deduction and term rewriting systems. We describe new axiomatic type classes which enable simplification of multiset or sequent expressions using existing algebraic manipulation facilities. We demonstrate the benefits of our combined approach by formalising in Isabelle/HOL a variant of a recent, non-trivial, pen-and-paper proof of cut-admissibility for the provability logic GL, where we abstract a large part of the proof in a way which is immediately applicable to other calculi. Our work also provides a machine-checked proof to settle the controversy surrounding the proof of cut-admissibility for GL.

  20. Applying probabilistic methods for assessments and calculations for accident prevention

    International Nuclear Information System (INIS)

    Anon.

    1984-01-01

    The guidelines for the prevention of accidents require plant design-specific and radioecological calculations to be made in order to show that maximum acceptable exposure values will not be exceeded in case of an accident. For this purpose, the main parameters affecting the accident scenario have to be determined by probabilistic methods. This offers the advantage that parameters can be quantified on the basis of unambiguous and realistic criteria, and final results can be defined in terms of conservativity. (DG) [de

  1. The transfer matrix method applied to steel sheet pile walls

    Science.gov (United States)

    Kort, D. A.

    2003-05-01

    This paper proposes two subgrade reaction models for the analysis of steel sheet pile walls based on the transfer matrix method. In the first model a plastic hinge is generated when the maximum moment in the retaining structure is exceeded. The second model deals with a beam with an asymmetrical cross-section that can bend in two directions. In the first part of this paper the transfer matrix method is explained on the basis of a simple example. Further, the development of two computer models is described: Plaswall and Skewwall. The second part of this paper deals with an application of both models. In the application of Plaswall, the effect of four current earth pressure theories on the subgrade reaction method is compared to a finite element calculation. It is shown that the earth pressure theory has a major influence on the calculated result for a sheet pile wall, both with and without a plastic hinge. In the application of Skewwall, the effectiveness of structural measures to reduce oblique bending is investigated. The results are compared to a 3D finite element calculation. It is shown that with simple structural measures the loss of structural resistance due to oblique bending can be reduced.

  2. Applying Hierarchical Task Analysis Method to Discovery Layer Evaluation

    Directory of Open Access Journals (Sweden)

    Marlen Promann

    2015-03-01

    Full Text Available Libraries are implementing discovery layers to offer better user experiences. While usability tests have been helpful in evaluating the success or failure of implementing discovery layers in the library context, the focus has remained on their relative interface benefits over traditional federated search. The informal, site- and context-specific usability tests have offered little to test the rigor of discovery layers against the user goals, motivations and workflows they have been designed to support. This study proposes hierarchical task analysis (HTA) as an important complementary evaluation method to usability testing of discovery layers. Relevant literature is reviewed for discovery layers and the HTA method. As no previous application of HTA to the evaluation of discovery layers was found, this paper presents the application of HTA as an expert-based and workflow-centered (e.g. retrieving a relevant book or a journal article) method for evaluating discovery layers. Purdue University's Primo by Ex Libris was used to map eleven use cases as HTA charts. Nielsen's Goal Composition theory was used as an analytical framework to evaluate the goal charts from two perspectives: a) users' physical interactions (i.e. clicks), and b) users' cognitive steps (i.e. decision points for what to do next). A brief comparison of HTA and usability test findings is offered by way of conclusion.

  3. State of the Arctic Environment

    International Nuclear Information System (INIS)

    1990-01-01

    The Arctic environment, covering about 21 million km², is in this connection regarded as the area north of the Arctic Circle. General biological and physical features of the terrestrial and freshwater environments of the Arctic are briefly described, but most effort is put into a description of the marine part, which constitutes about two-thirds of the total Arctic environment. General oceanography and morphological characteristics are included; e.g. the continental shelf surrounding the Arctic deep-water basins covers approximately 36% of the surface area of Arctic waters, but contains only 2% of the total water masses. Blowout accidents may release thousands of tons of oil per day and last for months. They occur statistically very seldom, but their magnitude underlines the necessity of an efficient oil spill contingency as well as sound safety and quality assurance procedures. Contingency plans should be coordinated and regularly evaluated through simulated and practical tests of performance. Arctic conditions demand alternative measures compared to those otherwise used for oil spill prevention and clean-up. New concepts or optimization of existing mechanical equipment is necessary. Chemical and thermal methods should be evaluated for efficiency and possible environmental effects. Both due to regular discharges of oil-contaminated drill cuttings and the possibility of a blowout or other spills, drilling operations in biologically sensitive areas may be regulated to take place only during the less sensitive parts of the year. 122 refs., 8 figs., 8 tabs

  4. System and method of applying energetic ions for sterilization

    Science.gov (United States)

    Schmidt, John A.

    2003-12-23

    A method of sterilization of a container is provided whereby a cold plasma is caused to be disposed near a surface to be sterilized, and the cold plasma is then subjected to a pulsed voltage differential for producing energized ions in the plasma. Those energized ions then operate to achieve spore destruction on the surface to be sterilized. Further, a system for sterilization of a container which includes a conductive or non-conductive container, a cold plasma in proximity to the container, and a high voltage source for delivering a pulsed voltage differential between an electrode and the container and across the cold plasma, is provided.

  5. Interesting Developments in Testing Methods Applied to Foundation Piles

    Science.gov (United States)

    Sobala, Dariusz; Tkaczyński, Grzegorz

    2017-10-01

    Both piling technologies and pile testing methods are subjects of ongoing development. New technologies, providing larger diameters or using in-situ materials, are very demanding in terms of the quality of execution of works. That concerns material quality and continuity, which define the integral strength of the pile. On the other side, we have the capacity of the ground around the pile and its ability to carry the loads transferred by the shaft and pile base. The inhomogeneous nature of soils and the relatively small number of tested piles demand a very good understanding of a small set of results. In some special cases the capacity test itself forms an important cost in the piling contract. This work presents a brief description of selected testing methods and the authors' remarks based on cooperation with universities constantly developing new ideas. The paper presents some experience-based remarks on integrity testing by means of low-energy impact (low strain) and introduces selected (Polish) developments in the field of testing closed-end pipe piles based on bi-directional loading, similar to the Osterberg idea, but without a sacrificial hydraulic jack. Such a test is suitable especially when steel piles are used for temporary support in rivers, where constructing a conventional testing appliance with anchor piles or kentledge meets technical problems. According to the authors' experience, such tests have not yet been used on a building site, but they have real potential, especially when displacement control can be provided from the river bank using surveying techniques.

  6. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  7. Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation

    Science.gov (United States)

    Arotaritei, D.; Rotariu, C.

    2015-09-01

    In this paper we present a novel method to develop an atrial fibrillation (AF) detector based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference of the system produces rules of type if-then-else that are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window, which produces a very large number of vectors (a massive dataset) used by the classifier. The length of the window is a crisp descriptor, while the rest of the descriptors are interval-valued. The parameters of the hybrid system are adapted using a Genetic Algorithm (GA) with a single-objective fitness target: the highest values of sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the PhysioNet MIT-BIH Atrial Fibrillation Database and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
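
    Three of the descriptors named in this abstract (TPR, Shannon entropy, RMSSD) are standard statistics over RR-interval series and can be sketched directly. The implementations below are illustrative, not the authors' code; the histogram bin count and window handling are assumptions:

```python
import math
from collections import Counter

def rmssd(rr):
    # Root Mean Square of Successive Differences between RR intervals
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def turning_point_ratio(rr):
    # Fraction of interior samples that are local extrema (turning points)
    turns = sum(1 for a, b, c in zip(rr, rr[1:], rr[2:])
                if (b > a and b > c) or (b < a and b < c))
    return turns / (len(rr) - 2)

def shannon_entropy(rr, bins=8):
    # Base-2 entropy of a histogram of the RR intervals (bin count is an assumption)
    lo, hi = min(rr), max(rr)
    width = (hi - lo) / bins or 1.0
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in rr)
    n = len(rr)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

    In a sliding-window scheme, each window of RR intervals would be mapped to one feature vector (TPR, SE, RMSSD, ...) and passed to the classifier.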

  8. Modern analytic methods applied to the art and archaeology

    International Nuclear Information System (INIS)

    Tenorio C, M. D.; Longoria G, L. C.

    2010-01-01

    The interaction of diverse areas such as analytical chemistry, art history and archaeology has allowed the development of a variety of techniques used in archaeology, conservation and restoration. These methods have been used to date objects, to determine the origin of old materials, to reconstruct their use and to identify the degradation processes that affect the integrity of works of art. The objective of this chapter is to offer a general overview of the research carried out at the Instituto Nacional de Investigaciones Nucleares (ININ) in the field of cultural heritage. A series of studies carried out in collaboration with national and foreign researchers is briefly described, as well as the great support of undergraduate and master's students in archaeology from the National School of Anthropology and History, since one of the goals is to spread knowledge of these techniques among young archaeologists, so that they have a wider vision of what they could use in the immediate future and can test hypotheses with scientific methods. (Author)

  9. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  10. The Movable Type Method Applied to Protein-Ligand Binding.

    Science.gov (United States)

    Zheng, Zheng; Ucisik, Melek N; Merz, Kenneth M

    2013-12-10

    Accurately computing the free energy for biological processes like protein folding or protein-ligand association remains a challenging problem. Both describing the complex intermolecular forces involved and sampling the requisite configuration space make understanding these processes innately difficult. Herein, we address the sampling problem using a novel methodology we term "movable type". Conceptually it can be understood by analogy with the evolution of printing and, hence, the name movable type. For example, a common approach to the study of protein-ligand complexation involves taking a database of intact drug-like molecules and exhaustively docking them into a binding pocket. This is reminiscent of early woodblock printing, where each page had to be laboriously created prior to printing a book. However, printing evolved to an approach where a database of symbols (letters, numerals, etc.) was created and then assembled using a movable type system, which allowed for the creation of all possible combinations of symbols on a given page, thereby revolutionizing the dissemination of knowledge. Our movable type (MT) method involves the identification of all atom pairs seen in protein-ligand complexes and the creation of two databases: one with their associated pairwise distance-dependent energies and another with the probabilities of how these pairs can combine in terms of bonds, angles, dihedrals and non-bonded interactions. Combining these two databases, coupled with the principles of statistical mechanics, allows us to accurately estimate binding free energies as well as the pose of a ligand in a receptor. This method, by its mathematical construction, samples all of the configuration space of a selected region (here, the protein active site) in one shot, without resorting to brute-force sampling schemes involving Monte Carlo, genetic algorithms or molecular dynamics simulations, making the methodology extremely efficient. Importantly, this method explores the free
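
    The statistical-mechanics step can be illustrated with a toy version of the idea: look up pairwise distance-dependent energies from a table, enumerate every combination of distance states ("movable type" assembly), and Boltzmann-average the totals into a free energy. The energy tables and values below are invented for illustration; the actual method uses databases mined from protein-ligand structures:

```python
import math
from itertools import product

KT = 0.593  # kT in kcal/mol at ~298 K

# Hypothetical pairwise distance-dependent energies: atom pair -> {distance bin: energy}
pair_energy = {
    ("C", "O"): {3: -0.4, 4: -0.2, 5: -0.05},
    ("N", "O"): {3: -0.6, 4: -0.3, 5: -0.1},
}

def free_energy(pairs):
    # Enumerate all combinations of distance states for the given atom pairs,
    # then Boltzmann-average the total energies: F = -kT ln(<exp(-E/kT)>)
    states = [list(pair_energy[p].values()) for p in pairs]
    Z, n = 0.0, 0
    for combo in product(*states):
        E = sum(combo)
        Z += math.exp(-E / KT)
        n += 1
    return -KT * math.log(Z / n)  # free energy relative to a uniform reference
```

    Because the enumeration covers every combination at once, no stochastic sampling is needed; the resulting F always lies between the minimum and maximum total energy, weighted toward the low-energy states.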

  11. Applied statistical methods in agriculture, health and life sciences

    CERN Document Server

    Lawal, Bayo

    2014-01-01

    This textbook teaches crucial statistical methods to answer research questions using a unique range of statistical software programs, including MINITAB and R. This textbook is developed for undergraduate students in agriculture, nursing, biology and biomedical research. Graduate students will also find it to be a useful way to refresh their statistics skills and to reference software options. The unique combination of examples is approached using MINITAB and R for their individual strengths. Subjects covered include among others data description, probability distributions, experimental design, regression analysis, randomized design and biological assay. Unlike other biostatistics textbooks, this text also includes outliers, influential observations in regression and an introduction to survival analysis. Material is taken from the author's extensive teaching and research in Africa, USA and the UK. Sample problems, references and electronic supplementary material accompany each chapter.

  12. Applying Simulation Method in Formulation of Gluten-Free Cookies

    Directory of Open Access Journals (Sweden)

    Nikitina Marina

    2017-01-01

    Full Text Available At present, a priority direction in the development of new food products is the development of technologies for special-purpose products. Among these are gluten-free confectionery products, intended for people with celiac disease. Gluten-free products are in demand among consumers; the assortment needs to be expanded and the quality indicators improved. This article presents results of studies on the development of pastry products based on amaranth flour, which does not contain gluten. The study is based on a method of simulating recipes for gluten-free confectionery with a functional orientation, in order to optimize their chemical composition. The resulting products will diversify, and supplement with necessary nutrients, the diet of people with gluten intolerance, as well as of those who follow a gluten-free diet.

  13. Applying Human-Centered Design Methods to Scientific Communication Products

    Science.gov (United States)

    Burkett, E. R.; Jayanty, N. K.; DeGroot, R. M.

    2016-12-01

    Knowing your users is a critical part of developing anything to be used or experienced by a human being. User interviews, journey maps, and personas are all techniques commonly employed in human-centered design practices because they have proven effective for informing the design of products and services that meet the needs of users. Many non-designers are unaware of the usefulness of personas and journey maps. Scientists who are interested in developing more effective products and communication can adopt and employ user-centered design approaches to better reach intended audiences. Journey mapping is a qualitative data-collection method that captures the story of a user's experience over time as related to the situation or product that requires development or improvement. Journey maps help define user expectations, where they are coming from, what they want to achieve, what questions they have, their challenges, and the gaps and opportunities that can be addressed by designing for them. A persona is a tool used to describe the goals and behavioral patterns of a subset of potential users or customers. The persona is a qualitative data model that takes the form of a character profile, built upon data about the behaviors and needs of multiple users. Gathering data directly from users avoids the risk of basing models on assumptions, which are often limited by misconceptions or gaps in understanding. Journey maps and user interviews together provide the data necessary to build the composite character that is the persona. Because a persona models the behaviors and needs of the target audience, it can then be used to make informed product design decisions. We share the methods and advantages of developing and using personas and journey maps to create more effective science communication products.

  14. Simplified Methods Applied to Nonlinear Motion of Spar Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Haslum, Herbjoern Alf

    2000-07-01

    Simplified methods for prediction of motion response of spar platforms are presented. The methods are based on first and second order potential theory. Nonlinear drag loads and the effect of the pumping motion in a moon-pool are also considered. Large amplitude pitch motions coupled to extreme amplitude heave motions may arise when spar platforms are exposed to long period swell. The phenomenon is investigated theoretically and explained as a Mathieu instability. It is caused by nonlinear coupling effects between heave, surge, and pitch. It is shown that for a critical wave period, the envelope of the heave motion makes the pitch motion unstable. For the same wave period, a higher order pitch/heave coupling excites resonant heave response. This mutual interaction largely amplifies both the pitch and the heave response. As a result, the pitch/heave instability revealed in this work is more critical than the previously well known Mathieu's instability in pitch which occurs if the wave period (or the natural heave period) is half the natural pitch period. The Mathieu instability is demonstrated both by numerical simulations with a newly developed calculation tool and in model experiments. In order to learn more about the conditions for this instability to occur and also how it may be controlled, different damping configurations (heave damping disks and pitch/surge damping fins) are evaluated both in model experiments and by numerical simulations. With increased drag damping, larger wave amplitudes and more time are needed to trigger the instability. The pitch/heave instability is a low probability of occurrence phenomenon. Extreme wave periods are needed for the instability to be triggered, about 20 seconds for a typical 200m draft spar. However, it may be important to consider the phenomenon in design since the pitch/heave instability is very critical. It is also seen that when classical spar platforms (constant cylindrical cross section and about 200m draft

  15. Perturbation Method of Analysis Applied to Substitution Measurements of Buckling

    Energy Technology Data Exchange (ETDEWEB)

    Persson, Rolf

    1966-11-15

    Calculations with two-group perturbation theory on substitution experiments with homogenized regions show that a condensation of the results into a one-group formula is possible, provided that a transition region is introduced in a proper way. In heterogeneous cores the transition region comes in as a consequence of a new cell concept. By making use of progressive substitutions the properties of the transition region can be regarded as fitting parameters in the evaluation procedure. The thickness of the region is approximately equal to the sum of 1/(1/τ + 1/L²)^(1/2) for the test and reference regions. Consequently a region where L² >> τ, e.g. D₂O, contributes √τ to the thickness. In cores where τ >> L², e.g. H₂O assemblies, the thickness of the transition region is determined by L. Experiments on rod lattices in D₂O and on test regions of D₂O alone (where B² = -1/L²) are analysed. The lattice measurements, where the pitches differed by a factor of √2, gave excellent results, whereas the determination of the diffusion length in D₂O by this method was not quite successful. Even regions containing only one test element can be used in a meaningful way in the analysis.
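
    The thickness formula quoted in the abstract is simple enough to evaluate directly; the sketch below checks the quoted D₂O limit. The numerical values of τ and L² are illustrative, not from the report:

```python
import math

def transition_term(tau, L2):
    # One region's contribution to the transition-region thickness:
    # 1 / (1/tau + 1/L^2)^(1/2), with tau (Fermi age) and L2 (diffusion area) in cm^2
    return 1.0 / math.sqrt(1.0 / tau + 1.0 / L2)

def transition_thickness(test_region, reference_region):
    # Thickness ~ sum of the contributions from the test and reference regions
    return transition_term(*test_region) + transition_term(*reference_region)

# Limiting case from the abstract: L^2 >> tau (e.g. D2O) contributes ~ sqrt(tau)
print(transition_term(125.0, 1.0e4))  # approaches sqrt(125), i.e. about 11 cm
```

    The opposite limit τ >> L² (e.g. H₂O assemblies) makes the same expression collapse to L, as stated in the abstract.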

  16. Variational methods applied to problems of diffusion and reaction

    CERN Document Server

    Strieder, William

    1973-01-01

    This monograph is an account of some problems involving diffusion or diffusion with simultaneous reaction that can be illuminated by the use of variational principles. It was written during a period that included sabbatical leaves of one of us (W. S.) at the University of Minnesota and the other (R. A.) at the University of Cambridge, and we are grateful to the Petroleum Research Fund for helping to support the former and the Guggenheim Foundation for making possible the latter. We would also like to thank Stephen Prager for getting us together in the first place and for showing how interesting and useful these methods can be. We have also benefitted from correspondence with Dr. A. M. Arthurs of the University of York and from the counsel of Dr. B. D. Coleman, the general editor of this series. Table of Contents: Chapter 1. Introduction and Preliminaries. 1.1 General Survey; 1.2 Phenomenological Descriptions of Diffusion and Reaction; 1.3 Correlation Functions for Random Suspensions; 1.4 Mean Free ...

  17. Nondestructive methods of analysis applied to oriental swords

    Directory of Open Access Journals (Sweden)

    Edge, David

    2015-12-01

    Full Text Available Various neutron techniques were employed at the Budapest Nuclear Centre in an attempt to find the most useful method for analysing the high-carbon steels found in Oriental arms and armour, such as those in the Wallace Collection, London. Neutron diffraction was found to be the most useful in terms of identifying such steels and also indicating the presence of hidden patterns.

  18. Complexity methods applied to turbulence in plasma astrophysics

    Science.gov (United States)

    Vlahos, L.; Isliker, H.

    2016-09-01

    In this review many of the well-known tools for the analysis of complex systems are used to study the global coupling of the turbulent convection zone with the solar atmosphere, where the magnetic energy is dissipated explosively. Several well-documented observations are not easy to interpret with the use of magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the Active Regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organised characteristics of the explosive magnetic energy release and (4) the very efficient acceleration of particles during flaring periods in the solar corona. We briefly review the work published over the last twenty-five years on the above issues and propose solutions using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a Non-Linear Force-Free (NLFF) magnetic extrapolation numerical code we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and Unstable Current Sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the local redistribution of the magnetic field, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. Using a Cellular Automaton and following the simple rules of Self-Organized Criticality (SOC), we were able to reproduce the statistical characteristics of the

  19. Connecting Arctic Research Across Boundaries through the Arctic Research Consortium of the United States (ARCUS)

    Science.gov (United States)

    Rich, R. H.; Myers, B.; Wiggins, H. V.; Zolkos, J.

    2017-12-01

    The complexities inherent in Arctic research demand a unique focus on making connections across the boundaries of discipline, institution, sector, geography, knowledge system, and culture. Since 1988, ARCUS has been working to bridge these gaps through communication, coordination, and collaboration. Recently, we have worked with partners to create a synthesis of the Arctic system, to explore the connectivity across the Arctic research community and how to strengthen it, to enable the community to have an effective voice in research funding policy, to implement a system for Arctic research community knowledge management, to bridge between global Sea Ice Prediction Network researchers and the science needs of coastal Alaska communities through the Sea Ice for Walrus Outlook, to strengthen ties between Polar researchers and educators, and to provide essential intangible infrastructure that enables cost-effective and productive research across boundaries. Employing expertise in managing for collaboration and interdisciplinarity, ARCUS complements and enables the work of its members, who constitute the Arctic research community and its key stakeholders. As a member-driven organization, everything that ARCUS does is achieved through partnership, with strong volunteer leadership of each activity. Key organizational partners in the United States include the U.S. Arctic Research Commission, Interagency Arctic Research Policy Committee, National Academy of Sciences Polar Research Board, and the North Slope Science Initiative. Internationally, ARCUS maintains strong bilateral connections with similarly focused groups in each Arctic country (and those interested in the Arctic), as well as with multinational organizations including the International Arctic Science Committee, the Association of Polar Early Career Educators, the University of the Arctic, and the Arctic Institute of North America. Currently, ARCUS is applying the best practices of the science of team science

  20. Arctic Ocean data in CARINA

    Directory of Open Access Journals (Sweden)

    S. Jutterström

    2010-02-01

    Full Text Available The paper describes the steps taken to quality control selected parameters within the Arctic Ocean data included in the CARINA data set and to check for offsets between the individual cruises. The evaluated parameters are the inorganic carbon parameters (total dissolved inorganic carbon, total alkalinity and pH), oxygen, and the nutrients nitrate, phosphate and silicate. More parameters can be found in the CARINA data product, but these were not subject to a secondary quality control. The main method for determining offsets between cruises was regional multi-linear regression, applied after a first rough basin-wide deep-water estimate of each parameter. Finally, the results of the secondary quality control and the applied adjustments are discussed.

  1. Trend analysis of Arctic sea ice extent

    Science.gov (United States)

    Silva, M. E.; Barbosa, S. M.; Antunes, Luís; Rocha, Conceição

    2009-04-01

    The extent of Arctic sea ice is a fundamental parameter of Arctic climate variability. In the context of climate change, the area covered by ice in the Arctic is a particularly useful indicator of recent changes in the Arctic environment. Climate models are in near universal agreement that Arctic sea ice extent will decline through the 21st century as a consequence of global warming, and many studies predict an ice-free Arctic as soon as 2012. Time series of satellite passive microwave observations allow the temporal changes in the extent of Arctic sea ice to be assessed. Much of the analysis of the ice extent time series, as in most climate studies from observational data, has focused on the computation of deterministic linear trends by ordinary least squares. However, many different processes, including deterministic, unit-root and long-range dependent processes, can engender trend-like features in a time series. Several parametric tests have been developed, mainly in econometrics, to discriminate between stationarity (no trend), deterministic trends and stochastic trends. Here, these tests are applied in the trend analysis of the sea ice extent time series available at the National Snow and Ice Data Center. The parametric stationarity tests, Augmented Dickey-Fuller (ADF), Phillips-Perron (PP) and KPSS, do not support an overall deterministic trend in the time series of Arctic sea ice extent. Therefore, alternative parametrizations such as long-range dependence should be considered for characterising long-term Arctic sea ice variability.
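    Two of the three stationarity tests named above (ADF and KPSS) are available in Python's statsmodels; the Phillips-Perron test is not, so this hedged sketch omits it. The series below is a synthetic random walk standing in for the real NSIDC record:

    ```python
    # Sketch: applying the ADF and KPSS tests from the abstract to a synthetic
    # series (a random walk, i.e. a stochastic trend), NOT the real sea ice data.
    import numpy as np
    from statsmodels.tsa.stattools import adfuller, kpss

    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(size=360))  # 30 "years" of monthly values

    adf_stat, adf_p, *_ = adfuller(series)                               # H0: unit root
    kpss_stat, kpss_p, *_ = kpss(series, regression="ct", nlags="auto")  # H0: trend-stationary

    print(f"ADF p-value:  {adf_p:.3f}  (large p: cannot reject a unit root)")
    print(f"KPSS p-value: {kpss_p:.3f} (small p: reject trend-stationarity)")
    ```

    For a genuine random walk, both tests typically point the same way: ADF fails to reject the unit root while KPSS rejects trend-stationarity, which is the combination the abstract's argument relies on.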

  2. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    Science.gov (United States)

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…

  3. Surface Temperature Trends in the Arctic Atlantic Region Over the Last 2,000 Years

    Science.gov (United States)

    Korhola, A.; Hanhijarvi, S.; Tingley, M.

    2013-12-01

    We introduce a new reconstruction method that uses the ordering of all pairs of proxy observations within each record to arrive at a consensus time series that best agrees with all proxy records. By considering only pairwise comparisons, this method, which we call PaiCo, facilitates the inclusion of records with differing temporal resolutions, and relaxes the assumption of linearity to the more general assumption of a monotonically increasing relationship between each proxy series and the target climate variable. We apply PaiCo to a newly assembled collection of high-quality proxy data to reconstruct the mean temperature of the northernmost Atlantic region, which we call the Arctic Atlantic, over the last 2,000 years. The Arctic Atlantic is a dynamically important region known to feature substantial temperature variability over recent millennia, and PaiCo allows for a more thorough investigation of the Arctic Atlantic regional climate, as we include a diverse array of terrestrial and marine proxies with annual to multidecadal temporal resolutions. Comparisons of the PaiCo reconstruction to recent reconstructions covering larger areas indicate greater climatic variability in the Arctic Atlantic than in the Arctic as a whole. The Arctic Atlantic reconstruction features temperatures during the Roman Warm Period and Medieval Climate Anomaly that are comparable to, or even warmer than, those of the twentieth century, and the coldest temperatures in the middle of the nineteenth century, just prior to the onset of the recent warming trend.

  4. The Circumpolar Arctic vegetation map

    Science.gov (United States)

    Walker, Donald A.; Raynolds, Martha K.; Daniels, F.J.A.; Einarsson, E.; Elvebakk, A.; Gould, W.A.; Katenin, A.E.; Kholod, S.S.; Markon, C.J.; Melnikov, E.S.; Moskalenko, N.G.; Talbot, S. S.; Yurtsev, B.A.; Bliss, L.C.; Edlund, S.A.; Zoltai, S.C.; Wilhelm, M.; Bay, C.; Gudjonsson, G.; Ananjeva, G.V.; Drozdov, D.S.; Konchenko, L.A.; Korostelev, Y.V.; Ponomareva, O.E.; Matveyeva, N.V.; Safranova, I.N.; Shelkunova, R.; Polezhaev, A.N.; Johansen, B.E.; Maier, H.A.; Murray, D.F.; Fleming, Michael D.; Trahan, N.G.; Charron, T.M.; Lauritzen, S.M.; Vairin, B.A.

    2005-01-01

    Question: What are the major vegetation units in the Arctic, what is their composition, and how are they distributed among major bioclimate subzones and countries? Location: The Arctic tundra region, north of the tree line. Methods: A photo-interpretive approach was used to delineate the vegetation onto an Advanced Very High Resolution Radiometer (AVHRR) base image. Mapping experts within nine Arctic regions prepared draft maps of their portions of the Arctic using geographic information technology (ArcInfo), and these were later synthesized to make the final map. Area analysis of the map was done according to bioclimate subzone and country. The integrated mapping procedures resulted in other maps of vegetation, topography, soils, landscapes, lake cover, substrate pH, and above-ground biomass. Results: The final map was published at 1:7,500,000 scale. Within the Arctic (total area = 7.11 × 10⁶ km²), about 5.05 × 10⁶ km² is vegetated. The remainder is ice covered. The map legend generally portrays the zonal vegetation within each map polygon. About 26% of the vegetated area is erect shrublands, 18% peaty graminoid tundras, 13% mountain complexes, 12% barrens, 11% mineral graminoid tundras, 11% prostrate-shrub tundras, and 7% wetlands. Canada has by far the most terrain in the High Arctic, mostly associated with abundant barren types and prostrate dwarf-shrub tundra, whereas Russia has the largest area in the Low Arctic, predominantly low-shrub tundra. Conclusions: The CAVM is the first vegetation map of an entire global biome at a comparable resolution. The consistent treatment of the vegetation across the circumpolar Arctic, the abundant ancillary material, and the digital database should support numerous land-use and climate-change applications and will make updating the map relatively easy. © IAVS; Opulus Press.

  5. Complementary biomarker-based methods for characterising Arctic sea ice conditions: A case study comparison between multivariate analysis and the PIP25 index

    Science.gov (United States)

    Köseoğlu, Denizcan; Belt, Simon T.; Smik, Lukas; Yao, Haoyi; Panieri, Giuliana; Knies, Jochen

    2018-02-01

    The discovery of IP25 as a qualitative biomarker proxy for Arctic sea ice and the subsequent introduction of the so-called PIP25 index for semi-quantitative descriptions of sea ice conditions have significantly advanced our understanding of long-term paleo Arctic sea ice conditions over the past decade. We investigated the potential for classification tree (CT) models to provide a further approach to paleo Arctic sea ice reconstruction through analysis of a suite of highly branched isoprenoid (HBI) biomarkers in ca. 200 surface sediments from the Barents Sea. Four CT models constructed using different HBI assemblages revealed IP25 and an HBI triene as the most appropriate classifiers of sea ice conditions, achieving a >90% cross-validated classification rate. Additionally, lower model performance for locations in the Marginal Ice Zone (MIZ) highlighted difficulties in characterisation of this climatically-sensitive region. CT model classification and semi-quantitative PIP25-derived estimates of spring sea ice concentration (SpSIC) for four downcore records from the region were consistent, although agreement between proxy and satellite/observational records was weaker for a core from the west Svalbard margin, likely due to the highly variable sea ice conditions. The automatic selection of appropriate biomarkers for describing sea ice conditions, quantitative model assessment, and insensitivity to the c-factor used in the calculation of the PIP25 index are key attributes of the CT approach, and we provide an initial comparative assessment of these potentially complementary methods. The CT model should be capable of reconstructing longer-term temporal shifts in sea ice conditions for the climatically sensitive Barents Sea.
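    The CT approach above can be illustrated with a small, hedged sketch: a decision tree cross-validated on synthetic two-biomarker data. The feature names (`ip25`, `triene`) echo the abstract, but the values and the labelling rule are invented, not the published Barents Sea dataset:

    ```python
    # Illustrative only: cross-validated classification tree on fake biomarker data.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(42)
    n = 200                                   # mimics the ~200 surface sediments
    ip25 = rng.uniform(0, 1, n)               # sea-ice biomarker (higher under ice)
    triene = rng.uniform(0, 1, n)             # open-water biomarker
    # Invented rule: label "ice" where IP25 dominates the triene, plus noise.
    labels = (ip25 - triene + rng.normal(0, 0.15, n) > 0).astype(int)

    X = np.column_stack([ip25, triene])
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    scores = cross_val_score(tree, X, labels, cv=10)   # 10-fold CV accuracy
    print(f"cross-validated classification rate: {scores.mean():.2f}")
    ```

    The cross-validated rate is the same kind of figure the abstract quotes (>90% for the real models); here it simply measures how well a shallow tree recovers the invented labelling rule.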

  6. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    OpenAIRE

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commer...

  7. Arctic hurricanes

    Science.gov (United States)

    The devastating winter storms that swoop across the Arctic, endangering offshore oil rigs, shipping, and fishing operations in their paths, are the subject of current study by a team of weather researchers from the National Oceanic and Atmospheric Administration (NOAA). As part of the study, U.S. scientists and those from several other countries also will attempt to estimate how much carbon dioxide is transferred from the atmosphere into the North Atlantic's deep waters during winter storms.A typical polar low, like a hurricane, has a spiral cloud pattern and winds exceeding 120 km per hour, said Melvyn Shapiro, senior meteorologist on the polar-low study. The storms are smaller than most hurricanes, however, and rarely have a diameter greater than 320 km. Some, but not all, develop an “eye,” like a hurricane. Polar lows, only recently documented from polar orbiting satellite imagery, appear to form primarily from October to April, but peak in February.

  8. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    Science.gov (United States)

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  9. An Aural Learning Project: Assimilating Jazz Education Methods for Traditional Applied Pedagogy

    Science.gov (United States)

    Gamso, Nancy M.

    2011-01-01

    The Aural Learning Project (ALP) was developed to incorporate jazz method components into the author's classical practice and her applied woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical applied curriculum. The components of the…

  10. Benthic primary production and mineralization in a High Arctic fjord

    DEFF Research Database (Denmark)

    Attard, Karl; Hancke, Kasper; Sejr, Mikael K.

    2016-01-01

    Coastal and shelf systems likely exert major influence on Arctic Ocean functioning, yet key ecosystem processes remain poorly quantified. We employed the aquatic eddy covariance (AEC) oxygen (O2) flux method to estimate benthic primary production and mineralization in a High Arctic Greenland fjord...... light data, we estimate an annual Arctic Ocean benthic GPP of 11.5 × 107 t C yr−1. On average, this value represents 26% of the Arctic Ocean annual net phytoplankton production estimates. This scarcely considered component is thus potentially important for contemporary and future Arctic ecosystem...

  11. ARCTIC VECTOR OF BRITISH ENERGETIC STRATEGY

    Directory of Open Access Journals (Sweden)

    Natalia Valerievna Eremina

    2017-03-01

    Full Text Available The aim of the article is to reveal the forms, methods and content of the British strategy in the Arctic. The Arctic is becoming an area of international cooperation, first of all among the Arctic states. Britain has ambitions to gain the status of a so-called “subarctic state” in order to prove its international leadership and acquire guarantees of energy security. Britain is currently elaborating two strategies: a military one and a scientific one. The main instrument for Britain in solving these tasks is participation in international structures connected with the Arctic. The article pays attention to aspects that have not previously been analyzed, such as: the reasons for British interests in the Arctic; bilateral and multilateral relationships between Britain and its partners, first of all cooperation between Russia and Britain; British institutions; positive and negative aspects of the British Arctic strategy; and the factors that affect its evolution, mainly the EU and Scottish factors. The research led to the conclusion that Britain does not have enough instruments to secure a strong position in the Arctic, though it plans to accelerate its participation in Arctic organizations. The article is based upon system and structural analysis.

  12. Wielandt method applied to the diffusion equations discretized by finite element nodal methods

    International Nuclear Information System (INIS)

    Mugica R, A.; Valle G, E. del

    2003-01-01

    Numerical methods for solving the diffusion equation require extensive algorithms and computer programs with a great number of routines and calculations, which directly affects execution times: results are obtained only after relatively long runs. This work shows the application of a method that accelerates the convergence of the classic power method, notably reducing the number of iterations needed to obtain reliable results and therefore greatly reducing computing times. This method, known in the literature as the Wielandt method, has been incorporated into a computer program based on the discretization of the neutron diffusion equations in slab geometry and steady state by polynomial nodal methods. In this work the neutron diffusion equations are described for several energy groups, and their discretization by means of the so-called physical nodal methods is illustrated, in particular for the quadratic case. A model problem widely described in the literature is solved with the physical nodal schemes of degrees 1, 2, 3 and 4 in three different ways: a) with the classic power method, b) with the power method with Wielandt acceleration, and c) with the power method with modified Wielandt acceleration. Results are reported for the model problem as well as for two additional problems known as benchmark problems. This acceleration method can also be implemented for problems with geometries other than the one proposed in this work, and its application can be extended to problems in 2 or 3 dimensions. (Author)
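    The idea behind Wielandt acceleration can be shown in a minimal numpy sketch (not the paper's nodal-diffusion code): iterate with the shifted inverse (A − σI)⁻¹ instead of A, which maps the dominant eigenvalue far away from the rest of the spectrum and so cuts the iteration count. The matrix below is an arbitrary symmetric example and the shift σ = 5.0 is chosen by hand near the dominant eigenvalue:

    ```python
    # Sketch: classic power method vs Wielandt (shifted inverse) acceleration.
    import numpy as np

    def power_method(A, tol=1e-10, max_it=10_000):
        x = np.ones(A.shape[0]); lam = 0.0
        for it in range(1, max_it + 1):
            y = A @ x
            x = y / np.linalg.norm(y)
            lam_new = x @ A @ x          # Rayleigh quotient (A symmetric here)
            if abs(lam_new - lam) < tol:
                return lam_new, it
            lam = lam_new
        return lam, max_it

    def wielandt_power(A, shift, tol=1e-10, max_it=10_000):
        # Eigenvalues of (A - shift*I)^-1 are 1/(lam - shift): with the shift
        # near the dominant lam, that mode dominates strongly and converges fast.
        M = np.linalg.inv(A - shift * np.eye(A.shape[0]))
        mu, it = power_method(M, tol, max_it)
        return shift + 1.0 / mu, it

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])       # dominant eigenvalue is 3 + sqrt(3)
    lam_p, it_p = power_method(A)
    lam_w, it_w = wielandt_power(A, shift=5.0)
    print(f"power method:   lambda = {lam_p:.8f} in {it_p} iterations")
    print(f"Wielandt shift: lambda = {lam_w:.8f} in {it_w} iterations")
    ```

    Both converge to 3 + √3 ≈ 4.732, but the shifted iteration needs far fewer steps, which is the whole point of the acceleration described in the abstract. In a real diffusion code the shifted system is solved rather than inverted explicitly.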

  13. What is the method in applying formal methods to PLC applications?

    NARCIS (Netherlands)

    Mader, Angelika H.; Engel, S.; Wupper, Hanno; Kowalewski, S.; Zaytoon, J.

    2000-01-01

    The question we investigate is how to obtain PLC applications with confidence in their proper functioning. Especially, we are interested in the contribution that formal methods can provide for their development. Our maxim is that the place of a particular formal method in the total picture of system

  14. Formal methods applied to industrial complex systems implementation of the B method

    CERN Document Server

    Boulanger, Jean-Louis

    2014-01-01

    This book presents real-world examples of formal techniques in an industrial context. It covers formal methods such as SCADE and/or the B Method, in various fields such as railways, aeronautics, and the automotive industry. The purpose of this book is to present a summary of experience on the use of "formal methods" (based on formal techniques such as proof, abstract interpretation and model-checking) in industrial examples of complex systems, based on the experience of people currently involved in the creation and assessment of safety critical system software. The involvement of people from

  15. The Wigner method applied to the photodissociation of CH3I

    DEFF Research Database (Denmark)

    Henriksen, Niels Engholm

    1985-01-01

    The Wigner method is applied to the Shapiro-Bersohn model of the photodissociation of CH3I. The partial cross sections obtained by this semiclassical method are in very good agreement with results of exact quantum calculations. It is also shown that a harmonic approximation to the vibrational...

  16. A new clamp method for firing bricks | Obeng | Journal of Applied ...

    African Journals Online (AJOL)

    A new clamp method for firing bricks. ... Journal of Applied Science and Technology ... To overcome this operational deficiencies, a new method of firing bricks that uses brick clamp technique that incorporates a clamp wall of 60 cm thickness, a six tier approach of sealing the top of the clamp (by combination of green bricks) ...

  17. Determination methods for plutonium as applied in the field of reprocessing

    International Nuclear Information System (INIS)

    1983-07-01

    The papers presented report on Pu-determination methods that are routinely applied in process control, and also on new developments that could supersede current methods, either because they are more accurate or because they are simpler and faster. (orig./DG) [de

  18. A method to evaluate performance reliability of individual subjects in laboratory research applied to work settings.

    Science.gov (United States)

    1978-10-01

    This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...

  19. Water Permeability of Pervious Concrete Is Dependent on the Applied Pressure and Testing Methods

    Directory of Open Access Journals (Sweden)

    Yinghong Qin

    2015-01-01

    Full Text Available The falling head method (FHM) and the constant head method (CHM) are used to test the water permeability of permeable concrete, applying different water heads to the testing samples. The results indicate that the apparent permeability of pervious concrete decreases with the applied water head. The results also demonstrate that the permeability measured with the FHM is lower than that measured with the CHM. The fundamental difference between the CHM and the FHM is examined from the theory of fluid flow through porous media. The testing results suggest that the water permeability of permeable concrete should be reported together with the applied pressure and the associated testing method.
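    The two test methods compared above rest on two standard Darcy-based formulas, sketched here with invented illustrative numbers (SI units), not the paper's measurements:

    ```python
    # Constant head: k = Q*L / (A*h); falling head: k = (a*L / (A*t)) * ln(h1/h2).
    import math

    def k_constant_head(Q, L, A, h):
        """CHM: steady flow rate Q through a sample of length L and
        cross-section A under a constant head h."""
        return Q * L / (A * h)

    def k_falling_head(a, L, A, t, h1, h2):
        """FHM: standpipe of cross-section a, head falling from h1 to h2
        over time t through the same sample."""
        return (a * L / (A * t)) * math.log(h1 / h2)

    # Made-up numbers for a 100 mm diameter, 150 mm long pervious concrete core:
    k_chm = k_constant_head(Q=1e-5, L=0.15, A=0.00785, h=0.10)
    k_fhm = k_falling_head(a=0.00196, L=0.15, A=0.00785, t=30.0, h1=0.30, h2=0.10)
    print(f"CHM: k = {k_chm * 1000:.2f} mm/s")
    print(f"FHM: k = {k_fhm * 1000:.2f} mm/s")
    ```

    With these invented numbers the FHM value comes out lower than the CHM value, matching the direction the abstract reports; in practice the gap arises because the two tests apply different effective pressures to the sample.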

  20. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    Directory of Open Access Journals (Sweden)

    Javier Cubas

    2015-01-01

    Full Text Available A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers’ datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.

  1. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    Science.gov (United States)

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
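    One of the classic MPPT methods such a panel model would be used to evaluate is perturb-and-observe. As a hedged sketch, the loop below tracks the maximum of a toy P-V curve; the `panel_power` function is a made-up stand-in, not the single-diode model the paper derives from datasheet values:

    ```python
    # Sketch of a perturb-and-observe MPPT loop on an invented P-V curve.
    def panel_power(v):
        """Toy P-V curve with a single maximum near v = 32 V (not a real panel)."""
        return max(0.0, v * (8.0 - 8.0 * (v / 40.0) ** 12))

    def perturb_and_observe(v0=20.0, step=0.2, iters=200):
        """Keep stepping the operating voltage in the direction that last
        increased the power; reverse the direction when the power drops."""
        v, p = v0, panel_power(v0)
        direction = 1.0
        for _ in range(iters):
            v_next = v + direction * step
            p_next = panel_power(v_next)
            if p_next < p:
                direction = -direction   # overshot the maximum: turn back
            v, p = v_next, p_next
        return v, p

    v_mpp, p_mpp = perturb_and_observe()
    print(f"settled near v = {v_mpp:.1f} V, p = {p_mpp:.1f} W")
    ```

    The loop climbs the curve and then oscillates within a step or two of the true maximum power point, which is exactly the steady-state behaviour an MPPT simulation framework like the one described would quantify against ambient-condition changes.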

  2. Efficient Integration of Highly Eccentric Orbits by Scaling Methods Applied to Kustaanheimo-Stiefel Regularization

    Science.gov (United States)

    Fukushima, Toshio

    2004-12-01

    We apply our single scaling method to the numerical integration of perturbed two-body problems regularized by the Kustaanheimo-Stiefel (K-S) transformation. The scaling is done by multiplying a single scaling factor with the four-dimensional position and velocity vectors of an associated harmonic oscillator in order to maintain the Kepler energy relation in terms of the K-S variables. As with the so-called energy rectification of Aarseth, the extra cost for the scaling is negligible, since the integration of the Kepler energy itself is already incorporated in the original K-S formulation. On the other hand, the single scaling method can be applied at every integration step without facing numerical instabilities. For unperturbed cases, the single scaling applied at every step gives a better result than either the original K-S formulation, the energy rectification applied at every apocenter, or the single scaling method applied at every apocenter. For the perturbed cases, however, the single scaling method applied at every apocenter provides the best performance for all perturbation types, whether the main source of error is truncation or round-off.

  3. Active Problem Solving and Applied Research Methods in a Graduate Course on Numerical Methods

    Science.gov (United States)

    Maase, Eric L.; High, Karen A.

    2008-01-01

    "Chemical Engineering Modeling" is a first-semester graduate course traditionally taught in a lecture format at Oklahoma State University. The course as taught by the author for the past seven years focuses on numerical and mathematical methods as necessary skills for incoming graduate students. Recent changes to the course have included Visual…

  4. Viewpoint: An Alternative Teaching Method. The WFTU Applies Active Methods to Educate Workers.

    Science.gov (United States)

    Courbe, Jean-Francois

    1989-01-01

    Develops a set of ideas and practices acquired from experience in organizing trade union education sessions. The method is based on observations that lecturing has not proved highly efficient, although traditional approaches--lecture, reading, discussion--are not totally rejected. (JOW)

  5. Proposal and Evaluation of Management Method for College Mechatronics Education Applying the Project Management

    Science.gov (United States)

    Ando, Yoshinobu; Eguchi, Yuya; Mizukawa, Makoto

    In this research, we proposed and evaluated a management method for college mechatronics education, applying project management techniques. We practiced our management method in the seminar “Microcomputer Seminar” for 3rd-grade students of the Department of Electrical Engineering, Shibaura Institute of Technology. We succeeded in the management of the Microcomputer Seminar in 2006 and obtained a good evaluation of our management method by means of a questionnaire.

  6. Forensic chemistry: perspective of new analytical methods applied to documentoscopy, ballistic and drugs of abuse

    OpenAIRE

    Romão, Wanderson; Schwab, Nicolas V; Bueno, Maria Izabel M. S; Sparrapan, Regina; Eberlin, Marcos N; Martiny, Andrea; Sabino, Bruno D; Maldaner, Adriano O

    2011-01-01

    In this review recent methods developed and applied to solve criminal occurences related to documentoscopy, ballistic and drugs of abuse are discussed. In documentoscopy, aging of ink writings, the sequence of line crossings and counterfeiting of documents are aspects to be solved with reproducible, fast and non-destructive methods. In ballistic, the industries are currently producing ''lead-free'' or ''nontoxic'' handgun ammunitions, so new methods of gunshot residues characterization are be...

  7. Arctic climate tipping points.

    Science.gov (United States)

    Lenton, Timothy M

    2012-02-01

    There is widespread concern that anthropogenic global warming will trigger Arctic climate tipping points. The Arctic has a long history of natural, abrupt climate changes, which together with current observations and model projections, can help us to identify which parts of the Arctic climate system might pass future tipping points. Here the climate tipping points are defined, noting that not all of them involve bifurcations leading to irreversible change. Past abrupt climate changes in the Arctic are briefly reviewed. Then, the current behaviour of a range of Arctic systems is summarised. Looking ahead, a range of potential tipping phenomena are described. This leads to a revised and expanded list of potential Arctic climate tipping elements, whose likelihood is assessed, in terms of how much warming will be required to tip them. Finally, the available responses are considered, especially the prospects for avoiding Arctic climate tipping points.

  8. Apparatus and method for applying an end plug to a fuel rod tube end

    International Nuclear Information System (INIS)

    Rieben, S.L.; Wylie, M.E.

    1987-01-01

    An apparatus is described for applying an end plug to a hollow end of a nuclear fuel rod tube, comprising: support means mounted for reciprocal movement between remote and adjacent positions relative to a nuclear fuel rod tube end to which an end plug is to be applied; guide means supported on the support means for movement; and drive means coupled to the support means and being actuatable for movement between retracted and extended positions for reciprocally moving the support means between its respective remote and adjacent positions. A method for applying an end plug to a hollow end of a nuclear fuel rod tube is also described

  9. Method of levelized discounted costs applied in economic evaluation of nuclear power plant project

    International Nuclear Information System (INIS)

    Tian Li; Wang Yongqing; Liu Jingquan; Guo Jilin; Liu Wei

    2000-01-01

    The main methods of economic evaluation of bids in common use are introduced, and the characteristics of the levelized discounted cost method and its application are presented. The method is then applied to the cost calculation in the economic evaluation of a 200 MW nuclear heating reactor. The results indicate that the levelized discounted cost method is simple and feasible, and it is considered the most suitable for the economic evaluation of a wide variety of cases; its use in national economic evaluation is suggested.
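    The levelized discounted cost idea reduces to one formula: the present value of all costs divided by the present value of all output, discounted at the same rate. A short sketch with invented figures (not the 200 MW reactor data from the paper):

    ```python
    # Levelized discounted cost = PV(costs) / PV(outputs), same discount rate.
    def levelized_cost(costs, outputs, rate):
        """costs[t] and outputs[t] are the cost and output in year t."""
        pv_costs = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
        pv_output = sum(q / (1 + rate) ** t for t, q in enumerate(outputs))
        return pv_costs / pv_output

    # Invented example: year-0 capital outlay, then 20 years of O&M and output.
    costs = [500e6] + [20e6] * 20      # currency units per year
    outputs = [0.0] + [1.2e6] * 20     # MWh of heat per year
    lc = levelized_cost(costs, outputs, rate=0.08)
    print(f"levelized cost = {lc:.1f} per MWh")
    ```

    Because both numerator and denominator are discounted identically, the result is the constant unit price that would exactly recover all costs over the project lifetime, which is what makes the figure comparable across bids with different cost profiles.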

  10. Computational problems in Arctic Research

    International Nuclear Information System (INIS)

    Petrov, I

    2016-01-01

    This article describes the main computational problems in the area of Arctic shelf seismic prospecting and the exploitation of the Northern Sea Route: simulation of the interaction of different ice formations (icebergs, hummocks, and drifting ice floes) with fixed ice-resistant platforms; simulation of the interaction of icebreakers and ice-class vessels with ice formations; modeling of the impact of ice formations on underground pipelines; neutralization of damage to fixed and mobile offshore industrial structures from ice formations; calculation of the strength of ground pipelines; transportation of hydrocarbons by pipeline; the problem of migration of large ice formations; modeling of the formation of ice hummocks on an ice-resistant stationary platform; calculation of the stability of fixed platforms; calculation of dynamic processes in the water and air of the Arctic, with the processing of data and its use to predict the dynamics of ice conditions; simulation of the formation of large icebergs, hummocks, and large ice platforms; calculation of ridging in the dynamics of sea ice; direct and inverse problems of seismic prospecting in the Arctic; and direct and inverse problems of electromagnetic prospecting of the Arctic. All these problems can be solved by up-to-date numerical methods, for example the grid-characteristic method. (paper)

  11. Method to detect substances in a body and device to apply the method

    International Nuclear Information System (INIS)

    Voigt, H.

    1978-01-01

    The method and the measuring arrangement serve to localize pellets doped with Gd₂O₃ lying between UO₂ pellets within a reactor fuel rod. The fuel rod penetrates a homogeneous magnetic field generated between two pole shoes. The magnetic stray field caused by the doping substances is then measured by means of Hall probes (e.g. InAs) for quantitative discrimination from UO₂. The position of the Gd₂O₃-doped pellets is determined by moving the fuel rod through the magnetic field in a direction perpendicular to the homogeneous field. The measuring signal is caused by the different susceptibility of Gd₂O₃ with respect to UO₂. (DG) [de

  12. Approaching a Postcolonial Arctic

    DEFF Research Database (Denmark)

    Jensen, Lars

    2016-01-01

    This article explores different postcolonially configured approaches to the Arctic. It begins by considering the Arctic as a region, an entity, and how the customary political science informed approaches are delimited by their focus on understanding the Arctic as a region at the service of the contemporary neoliberal order. It moves on to explore how different parts of the Arctic are inscribed in a number of sub-Arctic nation-state binds, focusing mainly on Canada and Denmark. The article argues that the postcolonial can be understood as a prism or a methodology that asks pivotal questions of all approaches to the Arctic. Yet the postcolonial itself is characterised by limitations, not least in this context its lack of interest in the Arctic, and its bias towards conventional forms of representation in art. The article points to the need to develop a more integrated critique of colonial and neo...

  13. Several methods applied to measuring residual stress in a known specimen

    International Nuclear Information System (INIS)

    Prime, M.B.; Rangaswamy, P.; Daymond, M.R.; Abelin, T.G.

    1998-01-01

    In this study, a beam with a precisely known residual stress distribution provided a unique experimental opportunity. A plastically bent beam was carefully prepared in order to provide a specimen with a known residual stress profile. 21Cr-6Ni-9Mn austenitic stainless steel was obtained as 43 mm square forged stock. Several methods were used to determine the residual stresses, and the results were compared to the known values. Some subtleties of applying the various methods were exposed.

  14. Method of applying single higher order polynomial basis function over multiple domains

    CSIR Research Space (South Africa)

    Lysko, AA

    2010-03-01

    Full Text Available The approach is based on the method of moments (MoM) with higher-order polynomial basis functions, applied to a surface form of the electric field integral equation under the thin-wire approximation. The main advantage of the proposed method is in permitting a reduction of the required number of unknowns when...

  15. Method of applying single higher order polynomial basis function over multiple domains

    CSIR Research Space (South Africa)

    Lysko, AA

    2010-03-01

    Full Text Available A novel method has been devised whereby one set of higher-order polynomial-based basis functions can be applied over several wire segments, thus permitting the number of unknowns to be decoupled from the number of segments, and so from the geometrical...

  16. 21 CFR 111.320 - What requirements apply to laboratory methods for testing and examination?

    Science.gov (United States)

    2010-04-01

    Title 21, Food and Drugs, Part 111, Section 111.320: What requirements apply to laboratory methods for testing and examination? Food and Drug Administration, Department of Health and Human Services (continued), Food for Human Consumption, Current Good Manufacturing...

  17. Critical path method applied to research project planning: Fire Economics Evaluation System (FEES)

    Science.gov (United States)

    Earl B. Anderson; R. Stanton Hales

    1986-01-01

    The critical path method (CPM) of network analysis (a) depicts precedence among the many activities in a project by a network diagram; (b) identifies critical activities by calculating their starting, finishing, and float times; and (c) displays possible schedules by constructing time charts. CPM was applied to the development of the Forest Service's Fire...
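
    The three CPM steps listed above (forward pass for earliest times, backward pass for latest times, float from their difference) can be sketched directly. The activity network used in the example is invented, not the FEES project network:

```python
# Critical path method sketch: forward and backward passes over a small
# activity network given as {name: (duration, [predecessor names])}.
def topo_order(activities):
    """Return activity names ordered so predecessors come first."""
    order, seen = [], set()
    def visit(n):
        if n not in seen:
            seen.add(n)
            for p in activities[n][1]:
                visit(p)
            order.append(n)
    for n in activities:
        visit(n)
    return order

def cpm(activities):
    """Return {name: (early_start, early_finish, late_start, late_finish, float)}."""
    order = topo_order(activities)
    es, ef = {}, {}
    for n in order:                               # forward pass
        dur, preds = activities[n]
        es[n] = max((ef[p] for p in preds), default=0)
        ef[n] = es[n] + dur
    end = max(ef.values())                        # project duration
    ls, lf = {}, {}
    for n in reversed(order):                     # backward pass
        succs = [s for s, (_, ps) in activities.items() if n in ps]
        lf[n] = min((ls[s] for s in succs), default=end)
        ls[n] = lf[n] - activities[n][0]
    return {n: (es[n], ef[n], ls[n], lf[n], ls[n] - es[n]) for n in activities}
```

    Activities with zero float are the critical ones; e.g. for `{"plan": (3, []), "survey": (2, ["plan"]), "model": (4, ["plan"]), "report": (1, ["survey", "model"])}` the critical path is plan → model → report, and "survey" carries 2 units of float.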

  18. Applying terminological methods and description logic for creating and implementing and ontology on inhibition

    DEFF Research Database (Denmark)

    Zambach, Sine; Madsen, Bodil Nistrup

    2009-01-01

    By applying formal terminological methods to model an ontology within the domain of enzyme inhibition, we aim to clarify concepts and to obtain consistency. Additionally, we propose a procedure for implementing this ontology in OWL with the aim of obtaining a strict structure which can form...

  19. [Alternative medicine methods applied in patients before surgical treatment of lumbar discopathy].

    Science.gov (United States)

    Rutkowska, E; Kamiński, S; Kucharczyk, A

    2001-01-01

    Case records of 200 patients operated on in 1998/99 for herniated lumbar disc in Neurosurgery Dept. showed that 95 patients (47.5%) had been treated previously by 148 alternative medical or non-medical procedures. The authors discuss the problem of non-conventional treatment methods applied for herniated lumbar disc by professionals or non professionals. The procedures are often dangerous.

  20. The Effect Of The Applied Performance Methods On The Objective Of The Managers

    Directory of Open Access Journals (Sweden)

    Derya Kara

    2009-09-01

    Full Text Available Within the changing management concept, employees and employers constantly feel the need to keep up with the changing environment. In this regard, performance evaluation activities are regarded as an indispensable element. Data obtained from performance evaluation activities shed light on the development of employees and enable enterprises to survive in a fiercely competitive environment. This study sets out to determine the effect of the applied performance evaluation method on the objectives of managers. The population of the study comprises 182 five-star hotel enterprises operating in Antalya, İzmir and Muğla, with 2184 managers; the sample comprised 578 managers. The results suggest that the applied performance evaluation method has a significant effect on the objectives of managers: the objective of managers applying the 360-degree performance evaluation method is found to be “finding out the training and development needs”, while the objective of managers applying conventional performance evaluation methods is found to be “enhancing the existing performance”.

  1. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services.

    Science.gov (United States)

    Rajabi, A; Dabiri, A

    2012-01-01

    Activity Based Costing (ABC) is one of the new methods that began appearing as a costing methodology in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, costs of administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of cost objectives from the services of activity centers, the cost price of medical services was calculated. The cost price from the ABC method differs significantly from the tariff method. In addition, the high proportion of indirect costs in the hospital indicates that resource capacities are not used properly. The cost price of remedial services is not properly calculated with the tariff method when compared with the ABC method: ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services.
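
    The two-stage allocation described above (administrative costs to activity centers via cost drivers, then activity-center costs to services via usage) can be sketched as follows. All figures, center names, and driver shares are invented for illustration; they are not from the Shahid Faghihi Hospital study.

```python
# Activity-based costing sketch: administrative-center costs are first
# allocated to remedial activity centers in proportion to a cost driver,
# then unit cost prices follow from each center's service volume.
admin_costs = {"management": 60_000, "billing": 40_000}
# Cost-driver shares: fraction of each administrative center's cost
# consumed by each remedial activity center (hypothetical).
driver_share = {
    "radiology":  {"management": 0.4, "billing": 0.5},
    "laboratory": {"management": 0.6, "billing": 0.5},
}
direct_costs = {"radiology": 200_000, "laboratory": 100_000}
services_performed = {"radiology": 5_000, "laboratory": 20_000}

def activity_cost(center):
    """Direct costs of the center plus its allocated administrative costs."""
    allocated = sum(admin_costs[a] * driver_share[center][a] for a in admin_costs)
    return direct_costs[center] + allocated

unit_cost = {c: activity_cost(c) / services_performed[c] for c in direct_costs}
```

    Contrasting `unit_cost` with a flat tariff per service makes the kind of discrepancy reported in the abstract visible directly.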

  2. Absolute Geostrophic Velocity Inverted from the Polar Science Center Hydrographic Climatology (PHC3.0) of the Arctic Ocean with the P-Vector Method (NCEI Accession 0156425)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dataset (called PHC-V) comprises 3D gridded climatological fields of absolute geostrophic velocity of the Arctic Ocean inverted from the Polar science center...

  3. An applied study using systems engineering methods to prioritize green systems options

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sonya M [Los Alamos National Laboratory; Macdonald, John M [Los Alamos National Laboratory

    2009-01-01

    For many years, there have been questions about the effectiveness of applying different green solutions. If you're building a home and wish to use green technologies, where do you start? While all technologies sound promising, which will perform best over time? All of this has to be considered within the cost and schedule of the project, and the amount of information available on the topic can be overwhelming. We seek to examine whether Systems Engineering methods can be used to help people choose and prioritize technologies that fit within their project and budget. Several methods are used to gain perspective into how to select green technologies, such as the Analytic Hierarchy Process (AHP) and Kepner-Tregoe. In our study, subjects applied these methods to analyze cost, schedule, and trade-offs. Results will document whether the experimental approach is applicable to defining system priorities for green technologies.
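
    One of the methods named above, AHP, reduces prioritization to a reciprocal pairwise-comparison matrix whose principal eigenvector gives the priority weights. A minimal sketch, using the common row geometric-mean approximation to that eigenvector; the three options and the judgment values are invented, not taken from the study:

```python
import math

# AHP sketch: priority weights for three hypothetical green options from a
# reciprocal pairwise-comparison matrix on the Saaty 1-9 scale.
options = ["solar water heating", "extra insulation", "heat-pump HVAC"]
a = [
    [1.0, 3.0, 5.0],        # a[i][j]: judged importance of option i vs j
    [1 / 3, 1.0, 3.0],      # reciprocal entries: a[j][i] = 1 / a[i][j]
    [1 / 5, 1 / 3, 1.0],
]
# Row geometric means approximate the principal eigenvector of `a`.
geo = [math.prod(row) ** (1 / len(row)) for row in a]
weights = [g / sum(geo) for g in geo]
ranked = sorted(zip(options, weights), key=lambda t: -t[1])
```

    The highest-weight option heads `ranked`; a consistency-ratio check (omitted here) would normally validate the judgments before trusting the weights.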

  4. Economic consequences assessment for scenarios and actual accidents do the same methods apply

    International Nuclear Information System (INIS)

    Brenot, J.

    1991-01-01

    Methods for estimating the economic consequences of major technological accidents, and their corresponding computer codes, are briefly presented, with emphasis on the basic choices. When applied to hypothetical scenarios, those methods give results that are of interest to risk managers from a decision-aiding perspective. In parallel, the various costs, and the procedures for their estimation, are reviewed for some actual accidents (Three Mile Island, Chernobyl, ...). These costs are used in a perspective of litigation and compensation. The comparison of the methods used, and of the cost estimates obtained, for scenarios and for actual accidents shows points of convergence and discrepancies that are discussed.

  5. Applying the Support Vector Machine Method to Matching IRAS and SDSS Catalogues

    Directory of Open Access Journals (Sweden)

    Chen Cao

    2007-10-01

    Full Text Available This paper presents results of applying a machine learning technique, the Support Vector Machine (SVM), to the astronomical problem of matching the Infrared Astronomical Satellite (IRAS) and Sloan Digital Sky Survey (SDSS) object catalogues. In this study, the IRAS catalogue has much larger positional uncertainties than the SDSS. A model was constructed by applying the supervised learning algorithm (SVM) to a set of training data. Validation of the model shows good identification performance (∼90% correct), better than that derived from classical cross-matching algorithms, such as the likelihood-ratio method used in previous studies.

  6. A stochastic root finding approach: the homotopy analysis method applied to Dyson-Schwinger equations

    Science.gov (United States)

    Pfeffer, Tobias; Pollet, Lode

    2017-04-01

    We present the construction and stochastic summation of rooted-tree diagrams, based on the expansion of a root finding algorithm applied to the Dyson-Schwinger equations. The mathematical formulation shows superior convergence properties compared to the bold diagrammatic Monte Carlo approach and the developed algorithm allows one to tackle generic high-dimensional integral equations, to avoid the curse of dealing explicitly with high-dimensional objects and to access non-perturbative regimes. The sign problem remains the limiting factor, but it is not found to be worse than in other approaches. We illustrate the method for φ^4 theory but note that it applies in principle to any model.

  7. Arctic wind energy

    International Nuclear Information System (INIS)

    Peltola, E.; Holttinen, H.; Marjaniemi, M.; Tammelin, B.

    1998-01-01

    Arctic wind energy research was aimed at adapting existing wind technologies to suit the arctic climatic conditions in Lapland. Project research work included meteorological measurements, instrument development, development of a blade heating system for wind turbines, load measurements and modelling of ice induced loads on wind turbines, together with the development of operation and maintenance practices in arctic conditions. As a result the basis now exists for technically feasible and economically viable wind energy production in Lapland. New and marketable products, such as blade heating systems for wind turbines and meteorological sensors for arctic conditions, with substantial export potential, have also been developed. (orig.)

  8. Arctic wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Peltola, E. [Kemijoki Oy (Finland); Holttinen, H.; Marjaniemi, M. [VTT Energy, Espoo (Finland); Tammelin, B. [Finnish Meteorological Institute, Helsinki (Finland)

    1998-12-31

    Arctic wind energy research was aimed at adapting existing wind technologies to suit the arctic climatic conditions in Lapland. Project research work included meteorological measurements, instrument development, development of a blade heating system for wind turbines, load measurements and modelling of ice induced loads on wind turbines, together with the development of operation and maintenance practices in arctic conditions. As a result the basis now exists for technically feasible and economically viable wind energy production in Lapland. New and marketable products, such as blade heating systems for wind turbines and meteorological sensors for arctic conditions, with substantial export potential, have also been developed. (orig.)

  9. Control Method for Electromagnetic Unmanned Robot Applied to Automotive Test Based on Improved Smith Predictor Compensator

    Directory of Open Access Journals (Sweden)

    Gang Chen

    2015-07-01

    Full Text Available A new control method, based on an improved Smith predictor compensator and considering a time delay, is proposed for an electromagnetic unmanned robot applied to automotive testing (URAT). The mechanical system structure and the control system structure are presented. The electromagnetic URAT adopts pulse width modulation (PWM) control, with a displacement and current double closed-loop control strategy. The coordinated control method of multiple manipulators for the electromagnetic URAT, emulating a skilled human driver with intelligent decision-making ability, is provided, and the improved Smith predictor compensator controller for the electromagnetic URAT considering a time delay is designed. Experiments are conducted using a Ford FOCUS automobile, and comparisons between the PID control method and the proposed method are made. Experimental results show that the proposed method can achieve accurate tracking of the target vehicle speed and reduce the mileage deviation of autonomous driving, which meets the requirements of national test standards.
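
    The core idea of a Smith predictor, feeding back a delay-free internal model plus the mismatch between plant and delayed model, can be sketched in discrete time. The first-order plant, delay, and PI gains below are illustrative; the paper's improved compensator is not reproduced here:

```python
from collections import deque

# Classic Smith-predictor sketch, unit sample period.
a, b, delay = 0.9, 0.1, 5        # plant: y[k+1] = a*y[k] + b*u[k-delay]
kp, ki = 2.0, 0.15               # PI gains, hand-tuned for this sketch
setpoint = 1.0
y = ym = integ = 0.0             # plant output, delay-free model, integrator
u_hist = deque([0.0] * delay)    # past inputs still inside the dead time
m_hist = deque([0.0] * delay)    # past model outputs (delayed model)

for _ in range(200):
    # Predictor feedback: delay-free model output plus the mismatch
    # between the measured output and the delayed model output.
    feedback = ym + (y - m_hist[0])
    err = setpoint - feedback
    integ += ki * err
    u = kp * err + integ
    # Advance the real plant, which sees the input delayed by `delay`.
    y = a * y + b * u_hist.popleft()
    u_hist.append(u)
    # Advance the internal delay-free model and its delay line.
    m_hist.popleft()
    m_hist.append(ym)
    ym = a * ym + b * u
```

    With a perfect model the mismatch term vanishes and the controller effectively acts on the undelayed plant, which is why the output settles at the setpoint despite the dead time.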

  10. Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma

    International Nuclear Information System (INIS)

    Hadj Salah, S.; Hajji, S.; Ben Hamida, M. B.; Charrada, K.

    2015-01-01

    An optimization method based on a physical analysis of the temperature profile and of the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. This method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in spectral flux density values lower than 5% with respect to the exact solution, while the computation time is about 10 orders of magnitude less. This method is followed by a spectral method based on the rearrangement of the line profiles. Results are shown for a Lorentzian profile; they demonstrate a relative error lower than 10% with respect to the reference method and a gain in computation time of about 20 orders of magnitude.

  11. Multigrid method applied to the solution of an elliptic, generalized eigenvalue problem

    Energy Technology Data Exchange (ETDEWEB)

    Alchalabi, R.M. [BOC Group, Murray Hill, NJ (United States); Turinsky, P.J. [North Carolina State Univ., Raleigh, NC (United States)

    1996-12-31

    The work presented in this paper is concerned with the development of an efficient MG algorithm for the solution of an elliptic, generalized eigenvalue problem. The application is specifically applied to the multigroup neutron diffusion equation which is discretized by utilizing the Nodal Expansion Method (NEM). The underlying relaxation method is the Power Method, also known as the (Outer-Inner Method). The inner iterations are completed using Multi-color Line SOR, and the outer iterations are accelerated using Chebyshev Semi-iterative Method. Furthermore, the MG algorithm utilizes the consistent homogenization concept to construct the restriction operator, and a form function as a prolongation operator. The MG algorithm was integrated into the reactor neutronic analysis code NESTLE, and numerical results were obtained from solving production type benchmark problems.
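
    The outer/inner structure described above, power iteration on the generalized eigenvalue problem with an iterative inner solve, can be sketched compactly. The 2x2 matrices, the plain Gauss-Seidel inner sweep, and the Euclidean-norm eigenvalue estimate are illustrative stand-ins, not the NEM-discretized operators or the multi-color SOR/Chebyshev machinery of NESTLE:

```python
# Power ("outer") iteration for A x = (1/k) B x: each outer step forms the
# fission source B x and solves A y = B x by inner Gauss-Seidel sweeps.
A = [[2.0, -0.5],
     [-0.5, 1.5]]                 # loss/diffusion operator (illustrative)
B = [[0.9, 0.4],
     [0.2, 0.1]]                  # fission-source operator (illustrative)

def gauss_seidel(M, rhs, x, sweeps=50):
    """Inner relaxation: plain Gauss-Seidel sweeps on M x = rhs."""
    n = len(rhs)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(M[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (rhs[i] - s) / M[i][i]
    return x

x, k = [1.0, 1.0], 1.0
for _ in range(100):                              # outer iterations
    src = [sum(B[i][j] * x[j] for j in range(2)) for i in range(2)]
    y = gauss_seidel(A, src, x[:])
    k = sum(v * v for v in y) ** 0.5              # dominant-eigenvalue estimate
    x = [v / k for v in y]                        # renormalize the flux
```

    At convergence `x` is the dominant eigenvector of A⁻¹B and `k` its eigenvalue; production codes accelerate exactly this outer loop (e.g. with Chebyshev semi-iteration).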

  12. Least Square NUFFT Methods Applied to 2D and 3D Radially Encoded MR Image Reconstruction

    Science.gov (United States)

    Song, Jiayu; Liu, Qing H.; Gewalt, Sally L.; Cofer, Gary; Johnson, G. Allan

    2009-01-01

    Radially encoded MR imaging (MRI) has gained increasing attention in applications such as hyperpolarized gas imaging, contrast-enhanced MR angiography, and dynamic imaging, due to its motion insensitivity and improved artifact properties. However, since the technique collects k-space samples nonuniformly, multidimensional (especially 3D) radially sampled MRI image reconstruction is challenging. The balance between reconstruction accuracy and speed becomes critical when a large data set is processed. Kaiser-Bessel gridding reconstruction has been widely used for non-Cartesian reconstruction. The objective of this work is to provide an alternative reconstruction option in high dimensions with on-the-fly kernels calculation. The work develops general multi-dimensional least square nonuniform fast Fourier transform (LS-NUFFT) algorithms and incorporates them into a k-space simulation and image reconstruction framework. The method is then applied to reconstruct the radially encoded k-space, although the method addresses general nonuniformity and is applicable to any non-Cartesian patterns. Performance assessments are made by comparing the LS-NUFFT based method with the conventional Kaiser-Bessel gridding method for 2D and 3D radially encoded computer simulated phantoms and physically scanned phantoms. The results show that the LS-NUFFT reconstruction method has better accuracy-speed efficiency than the Kaiser-Bessel gridding method when the kernel weights are calculated on the fly. The accuracy of the LS-NUFFT method depends on the choice of scaling factor, and it is found that for a particular conventional kernel function, using its corresponding deapodization function as scaling factor and utilizing it into the LS-NUFFT framework has the potential to improve accuracy. When a cosine scaling factor is used, in particular, the LS-NUFFT method is faster than Kaiser-Bessel gridding method because of a quasi closed-form solution. The method is successfully applied to 2D and...

  13. Agglomeration multigrid methods with implicit Runge-Kutta smoothers applied to aerodynamic simulations on unstructured grids

    Science.gov (United States)

    Langer, Stefan

    2014-11-01

    For unstructured finite volume methods an agglomeration multigrid with an implicit multistage Runge-Kutta method as a smoother is developed for solving the compressible Reynolds averaged Navier-Stokes (RANS) equations. The implicit Runge-Kutta method is interpreted as a preconditioned explicit Runge-Kutta method. The construction of the preconditioner is based on an approximate derivative. The linear systems are solved approximately with a symmetric Gauss-Seidel method. To significantly improve this solution method grid anisotropy is treated within the Gauss-Seidel iteration in such a way that the strong couplings in the linear system are resolved by tridiagonal systems constructed along these directions of strong coupling. The agglomeration strategy is adapted to this procedure by taking into account exactly these anisotropies in such a way that a directional coarsening is applied along these directions of strong coupling. Turbulence effects are included by a Spalart-Allmaras model, and the additional transport-type equation is approximately solved in a loosely coupled manner with the same method. For two-dimensional and three-dimensional numerical examples and a variety of differently generated meshes we show the wide range of applicability of the solution method. Finally, we exploit the GMRES method to determine approximate spectral information of the linearized RANS equations. This approximate spectral information is used to discuss and compare characteristics of multistage Runge-Kutta methods.
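
    The directional treatment described above hinges on solving the tridiagonal systems assembled along lines of strong coupling inside each Gauss-Seidel sweep. A minimal sketch of that line solve, the Thomas algorithm (O(n) forward elimination plus back substitution); the sample system is illustrative:

```python
# Thomas algorithm: direct solve of a tridiagonal system, the building
# block for line-implicit Gauss-Seidel along directions of strong coupling.
def thomas(a, b, c, d):
    """Solve a tridiagonal system. a: sub-diagonal (a[0] unused),
    b: diagonal, c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

    For example, `thomas([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 1])` solves the classic 1D Laplacian-like system and returns `[1, 1, 1]`.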

  14. Using an Explicit Emission Tagging Method in Global Modeling of Source-Receptor Relationships for Black Carbon in the Arctic: Variations, Sources and Transport Pathways

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hailong; Rasch, Philip J.; Easter, Richard C.; Singh, Balwinder; Zhang, Rudong; Ma, Po-Lun; Qian, Yun; Ghan, Steven J.; Beagley, Nathaniel

    2014-11-27

    We introduce an explicit emission tagging technique in the Community Atmosphere Model to quantify source-region-resolved characteristics of black carbon (BC), focusing on the Arctic. Explicit tagging of BC source regions without perturbing the emissions makes it straightforward to establish source-receptor relationships and transport pathways, providing a physically consistent and computationally efficient approach to produce a detailed characterization of the destiny of regional BC emissions and the potential for mitigation actions. Our analysis shows that the contributions of major source regions to the global BC burden are not proportional to the respective emissions due to strong region-dependent removal rates and lifetimes, while the contributions to BC direct radiative forcing show a near-linear dependence on their respective contributions to the burden. Distant sources contribute to BC in remote regions mostly in the mid- and upper troposphere, having much less impact on lower-level concentrations (and deposition) than on burden. Arctic BC concentrations, deposition and source contributions all have strong seasonal variations. Eastern Asia contributes the most to the wintertime Arctic burden. Northern Europe emissions are more important to both surface concentration and deposition in winter than in summer. The largest contribution to Arctic BC in the summer is from Northern Asia. Although local emissions contribute less than 10% to the annual mean BC burden and deposition within the Arctic, the per-emission efficiency is much higher than for major non-Arctic sources. The interannual variability (1996-2005) due to meteorology is small in annual mean BC burden and radiative forcing but is significant in yearly seasonal means over the Arctic. When a slow aging treatment of BC is introduced, the increase of BC lifetime and burden is source-dependent. Global BC forcing-per-burden efficiency also increases primarily due to changes in BC vertical distributions. 

  15. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Science.gov (United States)

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  16. The 2D Spectral Intrinsic Decomposition Method Applied to Image Analysis

    Directory of Open Access Journals (Sweden)

    Samba Sidibe

    2017-01-01

    Full Text Available We propose a new method for auto-adaptive image decomposition and recomposition based on the two-dimensional version of the Spectral Intrinsic Decomposition (SID). We introduce a faster diffusivity function for the computation of the mean envelope operator, which provides the components of the SID algorithm for any signal. The 2D version of the SID algorithm is implemented and applied to some well-known test images. We extracted relevant components and obtained promising results in image analysis applications.

  17. Accuracy of the Adomian decomposition method applied to the Lorenz system

    International Nuclear Information System (INIS)

    Hashim, I.; Noorani, M.S.M.; Ahmad, R.; Bakar, S.A.; Ismail, E.S.; Zakaria, A.M.

    2006-01-01

    In this paper, the Adomian decomposition method (ADM) is applied to the famous Lorenz system. The ADM yields an analytical solution in terms of a rapidly convergent infinite power series with easily computable terms. Comparisons between the decomposition solutions and the fourth-order Runge-Kutta (RK4) numerical solutions are made for various time steps. In particular we look at the accuracy of the ADM as the Lorenz system changes from a non-chaotic system to a chaotic one
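
    For the Lorenz system's quadratic nonlinearities the Adomian polynomials reduce to Cauchy products, so the decomposition components are power-series coefficients generated by a simple recursion. A sketch under the standard illustrative parameters and initial conditions (σ = 10, ρ = 28, β = 8/3, (x, y, z)(0) = (1, 1, 1), not necessarily those of the paper):

```python
# Adomian decomposition sketch for the Lorenz system: build the series
# components term by term, then evaluate the truncated series at small t.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def adm_terms(x0, y0, z0, n_terms=20):
    """Return the first n_terms series coefficients of x(t), y(t), z(t)."""
    x, y, z = [x0], [y0], [z0]
    for n in range(n_terms - 1):
        xy = sum(x[k] * y[n - k] for k in range(n + 1))   # Cauchy product
        xz = sum(x[k] * z[n - k] for k in range(n + 1))
        x.append(sigma * (y[n] - x[n]) / (n + 1))         # dx/dt = s(y - x)
        y.append((rho * x[n] - xz - y[n]) / (n + 1))      # dy/dt = rx - xz - y
        z.append((xy - beta * z[n]) / (n + 1))            # dz/dt = xy - bz
    return x, y, z

def adm_eval(coeffs, t):
    """Evaluate the truncated decomposition series at time t."""
    return sum(c * t ** k for k, c in enumerate(coeffs))

xs, ys, zs = adm_terms(1.0, 1.0, 1.0)
x_at_t = adm_eval(xs, 0.02)        # series solution for x at t = 0.02
```

    As in the paper's comparison, the truncated series can be checked against an RK4 solution; it is accurate only for t well inside the series' radius of convergence, which is why such comparisons are made over small time steps.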

  18. Applying the Goal-Question-Indicator-Metric (GQIM) Method to Perform Military Situational Analysis

    Science.gov (United States)

    2016-05-11

    When developing situational awareness in support of military operations, the U.S. armed forces use a mnemonic, or memory aide, to... Applying the Goal-Question-Indicator-Metric (GQIM) Method to Perform Military Situational Analysis, Douglas Gray, May 2016... Acknowledgments: the subject matter covered in this technical note evolved from an excellent question from Capt. Tomomi Ogasawara, Japan Ground Self...

  19. Applied Ecosystem Analysis - - a Primer : EDT the Ecosystem Diagnosis and Treatment Method.

    Energy Technology Data Exchange (ETDEWEB)

    Lestelle, Lawrence C.; Mobrand, Lars E.

    1996-05-01

    The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual.

  20. Applied ecosystem analysis - a primer; the ecosystem diagnosis and treatment method

    International Nuclear Information System (INIS)

    Lestelle, L.C.; Mobrand, L.E.; Lichatowich, J.A.; Vogel, T.S.

    1996-05-01

    The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual

  1. White Arctic vs. Blue Arctic: Making Choices

    Science.gov (United States)

    Pfirman, S. L.; Newton, R.; Schlosser, P.; Pomerance, R.; Tremblay, B.; Murray, M. S.; Gerrard, M.

    2015-12-01

    As the Arctic warms and shifts from icy white to watery blue and resource-rich, tension is arising between the desire to restore and sustain an ice-covered Arctic and stakeholder communities that hope to benefit from an open Arctic Ocean. If emissions of greenhouse gases to the atmosphere continue on their present trend, most of the summer sea ice cover is projected to be gone by mid-century, i.e., by the time that few if any interventions could be in place to restore it. There are many local as well as global reasons for ice restoration, including for example, preserving the Arctic's reflectivity, sustaining critical habitat, and maintaining cultural traditions. However, due to challenges in implementing interventions, it may take decades before summer sea ice would begin to return. This means that future generations would be faced with bringing sea ice back into regions where they have not experienced it before. While there is likely to be interest in taking action to restore ice for the local, regional, and global services it provides, there is also interest in the economic advancement that open access brings. Dealing with these emerging issues and new combinations of stakeholders needs new approaches - yet environmental change in the Arctic is proceeding quickly and will force the issues sooner rather than later. In this contribution we examine challenges, opportunities, and responsibilities related to exploring options for restoring Arctic sea ice and potential pathways for their implementation. Negotiating responses involves international strategic considerations including security and governance, meaning that along with local communities, state decision-makers, and commercial interests, national governments will have to play central roles. While these issues are currently playing out in the Arctic, similar tensions are also emerging in other regions.

  2. A Comparison of Parametric and Non-Parametric Methods Applied to a Likert Scale

    OpenAIRE

    Mircioiu, Constantin; Atkinson, Jeffrey

    2017-01-01

    A trenchant and passionate dispute over the use of parametric versus non-parametric methods for the analysis of Likert scale ordinal data has raged for the past eight decades. The answer is not a simple “yes” or “no” but is related to hypotheses, objectives, risks, and paradigms. In this paper, we took a pragmatic approach. We applied both types of methods to the analysis of actual Likert data on responses from different professional subgroups of European pharmacists regarding competencies fo...
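
    The two sides of the dispute can be made concrete by running both kinds of test on the same ordinal data. A sketch with invented 5-point Likert responses for two subgroups (not the pharmacists' data), using a pooled-variance t statistic for the parametric view and a Mann-Whitney U with a normal approximation (no tie correction) for the non-parametric one:

```python
from statistics import mean, NormalDist

# Hypothetical 5-point Likert responses from two professional subgroups.
group_a = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
group_b = [3, 2, 4, 3, 3, 2, 3, 4, 2, 3]

def t_statistic(a, b):
    """Parametric view: two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    va = sum((x - mean(a)) ** 2 for x in a) / (na - 1)
    vb = sum((x - mean(b)) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def mann_whitney(a, b):
    """Non-parametric view: Mann-Whitney U with mid-ranks for ties and a
    normal approximation for the two-sided p-value."""
    pooled = sorted(a + b)
    ranks, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2      # mid-rank of the tie group
        i = j
    na, nb = len(a), len(b)
    u = sum(ranks[x] for x in a) - na * (na + 1) / 2
    mu = na * nb / 2
    sd = (na * nb * (na + nb + 1) / 12) ** 0.5
    z = (u - mu) / sd
    return u, 2 * (1 - NormalDist().cdf(abs(z)))
```

    For data like these, both approaches typically point the same way; the disputes arise over borderline cases, small samples, and heavy ties, which is the paper's point about hypotheses and risks.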

  3. A method for finding the ridge between saddle points applied to rare event rate estimates

    DEFF Research Database (Denmark)

    Maronsson, Jon Bergmann; Jónsson, Hannes; Vegge, Tejs

    2012-01-01

    A method is presented for finding the ridge between first order saddle points on a multidimensional surface. For atomic scale systems, such saddle points on the energy surface correspond to atomic rearrangement mechanisms. Information about the ridge can be used to test the validity of the harmonic...... to the path. The method is applied to Al adatom diffusion on the Al(100) surface to find the ridge between 2-, 3- and 4-atom concerted displacements and hop mechanisms. A correction to the harmonic approximation of transition state theory was estimated by direct evaluation of the configuration integral along...

  4. Development of a tracking method for augmented reality applied to nuclear plant maintenance work

    International Nuclear Information System (INIS)

    Shimoda, Hiroshi; Maeshima, Masayuki; Nakai, Toshinori; Bian, Zhiqiang; Ishii, Hirotake; Yoshikawa, Hidekazu

    2005-01-01

    In this paper, a plant maintenance support method is described which employs a state-of-the-art information technology, Augmented Reality (AR), in order to improve the efficiency of NPP maintenance work and to prevent human error. Although AR has great potential to support various tasks in the real world, it is difficult to apply it to actual work support because the tracking method is the bottleneck for practical use. In this study, a bar-code marker tracking method is proposed to apply an AR system to maintenance work support in the NPP field. The proposed method calculates the user's position and orientation in real time from two long markers captured by the user-mounted camera. The markers can be easily pasted on the pipes in the plant field, and they can be easily recognized at long distance, which reduces the number of markers that must be pasted in the work field. Experiments were conducted in a laboratory and in the plant field to evaluate the proposed method. The results show that (1) fast and stable tracking can be realized, (2) the position error in the camera view is less than 1%, which is almost perfect given the limitation of camera resolution, and (3) it is relatively difficult to catch both markers in one camera view, especially at short distance.

  5. Lessons learned applying CASE methods/tools to Ada software development projects

    Science.gov (United States)

    Blumberg, Maurice H.; Randall, Richard L.

    1993-01-01

    This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.

  6. Method to integrate clinical guidelines into the electronic health record (EHR) by applying the archetypes approach.

    Science.gov (United States)

    Garcia, Diego; Moro, Claudia Maria Cabral; Cicogna, Paulo Eduardo; Carvalho, Deborah Ribeiro

    2013-01-01

Clinical guidelines are documents that assist healthcare professionals, facilitating and standardizing diagnosis, management, and treatment in specific areas. Computerized guidelines as decision support systems (DSS) attempt to increase the performance of tasks and facilitate the use of guidelines. Most DSS are not integrated into the electronic health record (EHR), requiring some degree of rework, especially related to data collection. This study's objective was to present a method for integrating clinical guidelines into the EHR. The study first developed a way to identify the data and rules contained in the guidelines, and then incorporated the rules into an archetype-based EHR. The proposed method was tested on anemia treatment in the Chronic Kidney Disease Guideline. The phases of the method are: data and rules identification; archetype elaboration; rule definition and inclusion in an inference engine; and DSS-EHR integration and validation. The main feature of the proposed method is that it is generic and can be applied to any type of guideline.
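As a minimal illustration of the "rule definition and inclusion in an inference engine" phase, the sketch below encodes one hypothetical anemia rule evaluated against EHR-like records; the field name and threshold are invented examples, not the actual guideline values:

```python
# Illustrative sketch: encoding one guideline rule so a DSS can evaluate it
# against EHR data. The field name and the 10.0 g/dl threshold are
# hypothetical examples, not the actual CKD guideline values.

def evaluate_anemia_rule(record):
    """Return a recommendation string for a single patient record (dict)."""
    hb = record.get("hemoglobin_g_dl")
    if hb is None:
        return "missing-data: hemoglobin not recorded"
    if hb < 10.0:  # hypothetical threshold
        return "consider-esa-therapy"
    return "no-action"

patients = [
    {"id": 1, "hemoglobin_g_dl": 9.2},
    {"id": 2, "hemoglobin_g_dl": 12.5},
    {"id": 3},
]
for p in patients:
    print(p.get("id"), evaluate_anemia_rule(p))
```

In an archetype-based EHR, the rule's inputs would be bound to archetype paths rather than plain dictionary keys, but the inference step has the same shape.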

  7. Applying the response matrix method for solving coupled neutron diffusion and transport problems

    International Nuclear Information System (INIS)

    Sibiya, G.S.

    1980-01-01

The numerical determination of the flux and power distribution in the design of large power reactors is quite a time-consuming procedure if the space under consideration is to be subdivided into very fine meshes. Many computing methods applied in reactor physics (such as the finite-difference method) require considerable computing time. In this thesis it is shown that the response matrix method can be successfully used as an alternative approach to solving the two-dimensional diffusion equation. Furthermore it is shown that sufficient accuracy of the method is achieved by assuming a linear space dependence of the neutron currents on the boundaries of the geometries defined for the given space. (orig.) [de

  8. Applying some methods to process the data coming from the nuclear reactions

    International Nuclear Information System (INIS)

    Suleymanov, M.K.; Abdinov, O.B.; Belashev, B.Z.

    2010-01-01

Full text : Methods for a posteriori enhancement of spectral-line resolution are proposed for processing data coming from nuclear reactions. The methods have been applied to data from nuclear reactions at high energies, and they make it possible to obtain more detailed information on the structure of the spectra of particles emitted in the reactions. Nuclear reactions are the main source of information on the structure and physics of atomic nuclei. The spectra of the reaction fragments are usually complex, so it is not simple to extract the information needed for an investigation. In this talk we discuss methods for a posteriori enhancement of spectral-line resolution that can be useful for processing complex data coming from nuclear reactions; we consider the Fourier transformation method and the maximum entropy method. Complex structures were identified by these methods, which indicate at least two distinct points. Recently we presented a talk showing the results of analyzing the structure of the pseudorapidity spectra of charged relativistic particles with ≥ 0.7 measured in Au+Em and Pb+Em at AGS and SPS energies using the Fourier transformation method and the maximum entropy method. The dependence of these spectra on the number of fast target protons was studied. The distributions visually show a plateau and a shoulder, that is, at least three distinct points, and the plateaus become wider in Pb+Em reactions. The existence of a plateau is required by parton models, and the maximum entropy method could confirm the existence of the plateau and the shoulder in the distributions. The figure shows the results of applying the maximum entropy method: the method indicates several clearly distinct points, some of which coincide with those observed visually. We would like to note that the Fourier transformation method could not

  9. Arctic circulation regimes.

    Science.gov (United States)

    Proshutinsky, Andrey; Dukhovskoy, Dmitry; Timmermans, Mary-Louise; Krishfield, Richard; Bamber, Jonathan L

    2015-10-13

Between 1948 and 1996, mean annual environmental parameters in the Arctic experienced a well-pronounced decadal variability with two basic circulation patterns: cyclonic and anticyclonic alternating at 5 to 7 year intervals. During cyclonic regimes, low sea-level atmospheric pressure (SLP) dominated over the Arctic Ocean driving sea ice and the upper ocean counterclockwise; the Arctic atmosphere was relatively warm and humid, and freshwater flux from the Arctic Ocean towards the subarctic seas was intensified. By contrast, during anticyclonic circulation regimes, high SLP dominated driving sea ice and the upper ocean clockwise. Meanwhile, the atmosphere was cold and dry and the freshwater flux from the Arctic to the subarctic seas was reduced. Since 1997, however, the Arctic system has been under the influence of an anticyclonic circulation regime (17 years) with a set of environmental parameters that are atypical for this regime. We discuss a hypothesis explaining the causes and mechanisms regulating the intensity and duration of Arctic circulation regimes, and speculate how changes in freshwater fluxes from the Arctic Ocean and Greenland impact environmental conditions and interrupt their decadal variability. © 2015 The Authors.

  10. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Directory of Open Access Journals (Sweden)

    Lorenza Scotti

    2015-12-01

Full Text Available Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event ‘signals’ of marketed drugs. A common limitation of these systems is the wide number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods aimed at adjusting for the multiple testing problem are needed when at least some of the drug-outcome relationships under study are known. To this aim we applied a robust estimation method for the FDR (rFDR) particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated from the conventional disproportionality measures was compared with that obtained after application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, 1 was a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are known, multiple comparison methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.
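For illustration of the multiple-comparisons issue, the sketch below applies the standard Benjamini-Hochberg step-up procedure (not the paper's robust rFDR estimator, which differs) to a list of hypothetical drug-event p-values:

```python
# Sketch of the classical Benjamini-Hochberg FDR procedure. The paper's
# robust FDR (rFDR) estimator is a different method; this only illustrates
# the general idea of screening many drug-event p-values for false positives.
def benjamini_hochberg(pvalues, alpha=0.05):
    """Return a boolean list: True where the hypothesis is rejected."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k_max = 0
    # step-up: largest rank k with p_(k) <= k * alpha / m
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * alpha / m:
            k_max = rank
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            rejected[i] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.9]
rejected = benjamini_hochberg(pvals)
print(rejected)  # only the two smallest p-values survive at alpha = 0.05
```

Note how the raw comparison "p < 0.05" would flag five of these pairs, while the FDR-controlling procedure keeps only two.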

  11. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    Energy Technology Data Exchange (ETDEWEB)

    Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica

    2015-07-01

In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, delayed neutron precursors and temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the next intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness problem of the equations. In this way, one varies the time step size of the Polynomial Approach Method and performs an analysis of the precision and computational time. Moreover, we compare the method with different types of approximations (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
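A minimal sketch of the power-series idea, for a single delayed-neutron group with constant reactivity and no temperature feedback (the paper additionally treats six groups and feedback); the kinetics parameters below are illustrative, not taken from the paper:

```python
# Power-series (polynomial) step for the point kinetics equations with one
# delayed-neutron group and constant reactivity: the Taylor coefficients of
# n(t) and C(t) follow recursively from the ODEs, and analytic continuation
# chains the steps together. Parameter values are illustrative only.

def polynomial_step(n0, c0, rho, beta, lam, Lam, h, order=8):
    """Advance (n, C) over one interval h using a truncated Taylor series."""
    n, c = [n0], [c0]
    for k in range(order):
        dn = ((rho - beta) / Lam) * n[k] + lam * c[k]
        dc = (beta / Lam) * n[k] - lam * c[k]
        n.append(dn / (k + 1))   # n_{k+1} = n'_k / (k+1)
        c.append(dc / (k + 1))
    nh = sum(a * h**k for k, a in enumerate(n))
    ch = sum(a * h**k for k, a in enumerate(c))
    return nh, ch

beta, lam, Lam = 0.0065, 0.08, 1e-4     # illustrative kinetics parameters
n, c = 1.0, beta / (lam * Lam)          # critical steady state
for _ in range(100):                    # analytic continuation: 100 steps of 1 ms
    n, c = polynomial_step(n, c, 0.0, beta, lam, Lam, 1e-3)
print(n)   # zero reactivity: density stays at 1

n2, c2 = 1.0, beta / (lam * Lam)
for _ in range(100):
    n2, c2 = polynomial_step(n2, c2, 0.001, beta, lam, Lam, 1e-3)
print(n2)  # positive reactivity: density rises toward the prompt-jump level
```

Because each step only needs a short series around an ordinary point, the step size can be varied freely, which is the precision/cost trade-off the paper analyzes.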

  12. A methodological framework applied to the choice of the best method in replacement of nuclear systems

    International Nuclear Information System (INIS)

    Vianna Filho, Alfredo Marques

    2009-01-01

The economic equipment replacement problem is a central question in Nuclear Engineering. On the one hand, new equipment is more attractive given its better performance, better reliability, lower maintenance cost, etc.; new equipment, however, requires a higher initial investment. On the other hand, old equipment has lower performance and lower reliability, and especially higher maintenance costs, but in contrast has lower financial and insurance costs. The weighting of all these costs can be made with deterministic and probabilistic methods applied to the study of equipment replacement. Two distinct types of problems are examined: replacement imposed by wear and replacement imposed by failures. To solve the problem of nuclear system replacement imposed by wear, deterministic methods are discussed; to solve the problem of replacement imposed by failures, probabilistic methods are discussed. The aim of this paper is to present a methodological framework for the choice of the most useful method applied to the problem of nuclear system replacement. (author)
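One classical deterministic criterion for replacement under wear can be sketched as follows: choose the service life n that minimizes the average annual cost (purchase price minus salvage value, plus cumulative maintenance, divided by n). Discounting is omitted for brevity and all figures are invented:

```python
# Hedged sketch of a deterministic replacement criterion under wear:
# minimize the (undiscounted) average annual cost over the service life n.
# All monetary figures below are invented for illustration.

def average_annual_cost(price, salvage, maintenance, n):
    """Average cost per year if the equipment is kept for n years."""
    return (price - salvage[n - 1] + sum(maintenance[:n])) / n

price = 100.0
salvage = [60, 40, 25, 15, 10, 5]        # resale value after year n
maintenance = [5, 10, 20, 35, 55, 80]    # maintenance cost in year n (rising with wear)

costs = {n: average_annual_cost(price, salvage, maintenance, n)
         for n in range(1, 7)}
best = min(costs, key=costs.get)
print(best, round(costs[best], 2))  # economic life: replace after year 3
```

With discounting, the same search would be run on the equivalent annual cost instead of the plain average.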

  13. Power secant method applied to natural frequency extraction of Timoshenko beam structures

    Directory of Open Access Journals (Sweden)

    C.A.N. Dias

    Full Text Available This work deals with an improved plane frame formulation whose exact dynamic stiffness matrix (DSM presents, uniquely, null determinant for the natural frequencies. In comparison with the classical DSM, the formulation herein presented has some major advantages: local mode shapes are preserved in the formulation so that, for any positive frequency, the DSM will never be ill-conditioned; in the absence of poles, it is possible to employ the secant method in order to have a more computationally efficient eigenvalue extraction procedure. Applying the procedure to the more general case of Timoshenko beams, we introduce a new technique, named "power deflation", that makes the secant method suitable for the transcendental nonlinear eigenvalue problems based on the improved DSM. In order to avoid overflow occurrences that can hinder the secant method iterations, limiting frequencies are formulated, with scaling also applied to the eigenvalue problem. Comparisons with results available in the literature demonstrate the strength of the proposed method. Computational efficiency is compared with solutions obtained both by FEM and by the Wittrick-Williams algorithm.
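The paper's dynamic stiffness matrix and "power deflation" technique are not reproduced here; the sketch below only illustrates the secant iteration itself, applied to a classical transcendental frequency equation, cos(x)·cosh(x) + 1 = 0 for a cantilever Euler-Bernoulli beam, whose first root is x ≈ 1.8751:

```python
import math

# Minimal secant iteration on a transcendental characteristic equation.
# This stands in for the eigenvalue extraction step; the paper's improved
# DSM formulation and power deflation are not reproduced here.

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Find a root of f near x0, x1 by the secant method."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

f = lambda x: math.cos(x) * math.cosh(x) + 1.0   # cantilever beam equation
root = secant(f, 1.5, 2.0)
print(round(root, 4))  # 1.8751
```

The attraction of the secant method in this setting is that it needs only function values of the (transcendental) determinant, no derivatives.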

  14. Arctic Haze Analysis

    Science.gov (United States)

    Mei, Linlu; Xue, Yong

    2013-04-01

The Arctic atmosphere is perturbed by natural and anthropogenic aerosol sources known as Arctic haze, first observed in 1956 by J. Murray Mitchell in Alaska (Mitchell, 1956). Pacyna and Shaw (1992) summarized that Arctic haze is a mixture of anthropogenic and natural pollutants from a variety of sources in different geographical areas at altitudes from 2 to 4 or 5 km, while the layers of polluted air at altitudes below 2.5 km mainly come from episodic transport of anthropogenic sources situated closer to the Arctic. Arctic haze in the lower troposphere was found to show a very strong seasonal variation, characterized by a summer minimum and a winter maximum, in Alaska (Barrie, 1986; Shaw, 1995) and other Arctic regions (Xie and Hopke, 1999). Its composition includes an anthropogenic factor dominated by metallic species such as Pb, Zn, V, As, Sb, In, etc. (Xie and Hopke, 1999), together with natural sources such as a sea salt factor consisting mainly of Cl, Na, and K (Xie and Hopke, 1999) and dust containing Fe, Al and so on (Rahn et al., 1977). Black carbon and soot can also be included during summer because of mixing with smoke from wildfires. The Arctic air mass is a unique meteorological feature of the troposphere characterized by sub-zero temperatures, little precipitation, stable stratification that prevents strong vertical mixing, and low levels of solar radiation (Barrie, 1986). As a consequence, fewer pollutants are scavenged by precipitation, the major removal pathway for particulates from the atmosphere in the Arctic (Shaw, 1981, 1995; Heintzenberg and Larssen, 1983). Given these meteorological conditions, Eurasia is the main contributor of Arctic pollutants, with strong transport into the Arctic from Eurasia during winter caused by the climatologically persistent Siberian high-pressure region (Barrie, 1986). This paper intends to address the atmospheric characteristics of Arctic haze by comparing clear days and haze days using different datasets

  15. Arctic Sea Level Reconstruction

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde

Reconstruction of historical Arctic sea level is very difficult due to the limited coverage and quality of tide gauge and altimetry data in the area. This thesis addresses many of these issues, and discusses strategies to help achieve a stable and plausible reconstruction of Arctic sea level from...... 1950 to today. The primary record of historical sea level, on the order of several decades to a few centuries, is tide gauges. Tide gauge records from around the world are collected in the Permanent Service for Mean Sea Level (PSMSL) database, which includes data along the Arctic coasts. A reasonable...... amount of data is available along the Norwegian and Russian coasts since 1950, and most published research on Arctic sea level extends cautiously from these areas. Very little tide gauge data is available elsewhere in the Arctic, and records of a length of several decades, as generally recommended for sea-level...

  16. Error diffusion method applied to design combined CSG-BSG element used in ICF driver

    Science.gov (United States)

    Zhang, Yixiao; Yao, Xin; Gao, Fuhua; Guo, Yongkang; Wang, Lei; Hou, Xi

    2006-08-01

In the final optics assembly of an Inertial Confinement Fusion (ICF) driver, Diffractive Optical Elements (DOEs) are applied to achieve some important functions, such as harmonic wave separation, beam sampling, beam smoothing and pulse compression. However, in order to optimize the system structure, decrease the energy loss and avoid damage from laser induction or self-focusing effects, the number of elements used in the ICF system, especially in the final optics assembly, should be minimized. The multiple exposure method has been proposed, for this purpose, to fabricate the BSG and CSG on one surface of a silica plate. But the multiple etch processes utilized in this method are complex and introduce large alignment errors. The error diffusion method, which is based on pulse-density modulation, has been widely used in signal processing and computer generated holography (CGH). In this paper, according to the error diffusion method in CGH and partially coherent imaging theory, we present a new method to design the coding mask of a combined CSG-BSG element with error diffusion. With the designed mask, only one exposure process is needed to fabricate the combined element, which greatly reduces the fabrication difficulty and avoids the alignment error introduced by multiple etch processes. We illustrate the coding mask for a CSG-BSG element designed with this method and compare the intensity distribution of the spatial image in a partially coherent imaging system with the desired relief.
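A one-dimensional sketch of the error-diffusion idea (pulse-density modulation): binarize a continuous transmittance profile while carrying the quantization error to the next sample, so the local pulse density tracks the gray level. The actual CSG-BSG mask design uses two-dimensional diffusion kernels and partial-coherence imaging, which are not shown:

```python
# One-dimensional error-diffusion sketch: each sample is quantized to 0 or 1
# and the residual error is pushed onto the next sample, so that the local
# pulse density follows the continuous profile. Real mask design diffuses
# the error over 2D neighbor weights (e.g. Floyd-Steinberg) instead.
def error_diffusion_1d(profile):
    out, err = [], 0.0
    for value in profile:
        v = value + err
        bit = 1.0 if v >= 0.5 else 0.0
        err = v - bit            # residual carried to the next sample
        out.append(bit)
    return out

profile = [0.25] * 16            # constant 25% transmittance
bits = error_diffusion_1d(profile)
print(bits, sum(bits) / len(bits))  # pulse density 0.25
```

For the constant 0.25 input the output settles into a periodic 0,1,0,0 pattern, i.e. exactly one pulse in four samples.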

  17. Monitoring of the Polar Stratospheric Clouds formation and evolution in Antarctica in August 2007 during IPY with the MATCH method applied to lidar data

    Science.gov (United States)

    Montoux, Nadege; David, Christine; Klekociuk, Andrew; Pitts, Michael; di Liberto, Luca; Snels, Marcel; Jumelet, Julien; Bekki, Slimane; Larsen, Niels

    2010-05-01

The project ORACLE-O3 ("Ozone layer and UV RAdiation in a changing CLimate Evaluated during IPY") is one of the coordinated international proposals selected for the International Polar Year (IPY). As part of this global project, LOLITA-PSC ("Lagrangian Observations with Lidar Investigations and Trajectories in Antarctica and Arctic, of PSC") is devoted to Polar Stratospheric Cloud (PSC) studies. Indeed, understanding the formation and evolution of PSCs is an important issue in quantifying the impact of climate change on their frequency of formation and, further, on chlorine activation and subsequent ozone depletion. In this framework, three lidar stations performed PSC observations in Antarctica during the 2006, 2007, and 2008 winters: Davis (68.58°S, 77.97°E), McMurdo (77.86°S, 166.48°E) and Dumont D'Urville (66.67°S, 140.01°E). The data are complemented with lidar data from CALIOP ("Cloud-Aerosol Lidar with Orthogonal Polarization") onboard the CALIPSO ("Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation") satellite. Lagrangian trajectory calculations are used to identify air masses with PSCs sounded by several ground-based lidar stations, using the same method, called MATCH, that was first applied in the Arctic to study ozone depletion with radiosoundings. The evolution of the optical properties of the PSCs, and thus the type of PSC formed (supercooled ternary solution, nitric acid trihydrate particles or ice particles), could thus be linked to the thermodynamical evolution of the air mass deduced from the trajectories. Modeling with the microphysical model of the Danish Meteorological Institute allows assessing our ability to predict PSCs for various environmental conditions. Indeed, from the pressure and temperature evolution, the model retrieves the types of particles formed as well as their mean radii and concentrations, and can also simulate the lidar signals. In a first step, a case in August 2007 around 17-18 km, involving

  18. The effect of misleading surface temperature estimations on the sensible heat fluxes at a high Arctic site – the Arctic Turbulence Experiment 2006 on Svalbard (ARCTEX-2006

    Directory of Open Access Journals (Sweden)

    J. Lüers

    2010-01-01

Full Text Available The observed rapid climate warming in the Arctic requires improvements in permafrost and carbon cycle monitoring, accomplished by setting up long-term observation sites with high-quality in-situ measurements of turbulent heat, water and carbon fluxes as well as soil physical parameters in Arctic landscapes. But accurate quantification and well-adapted parameterizations of turbulent fluxes in polar environments present fundamental problems in soil-snow-ice-vegetation-atmosphere interaction studies. One of these problems is the accurate estimation of the surface or aerodynamic temperature T(0) required to force most of the bulk aerodynamic formulae currently used. Results from the Arctic Turbulence Experiment (ARCTEX-2006) performed on Svalbard during the winter/spring transition of 2006 helped to better understand the physical exchange and transport processes of energy. The existence of an atypical temperature profile close to the surface in the Arctic spring at Svalbard proved to be one of the major issues hindering estimation of the appropriate surface temperature. Thus, it is essential to adjust the set-up of measurement systems carefully when applying the flux-gradient methods that are commonly used to force atmosphere-ocean/land-ice models. The results of a comparison of different sensible heat-flux parameterizations with direct measurements indicate that the use of a hydrodynamic three-layer temperature-profile model achieves the best fit and reproduces the temporal variability of the surface temperature better than other approaches.
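The sensitivity to the surface temperature T(0) is easy to see from the bulk formula itself, H = ρ·cp·C_H·U·(T(0) − T_air); the sketch below uses typical textbook coefficient values, not those fitted in the ARCTEX-2006 study:

```python
# Minimal bulk-aerodynamic sketch showing why the choice of surface
# temperature T(0) matters: the sensible heat flux is directly proportional
# to the surface-air temperature difference. Coefficient values are typical
# textbook numbers, not those derived in the ARCTEX-2006 study.
def sensible_heat_flux(t_surface, t_air, wind, rho=1.3, cp=1005.0, ch=1.5e-3):
    """Sensible heat flux H in W m-2, positive upward."""
    return rho * cp * ch * wind * (t_surface - t_air)

# A 1 K bias in the estimated surface temperature halves the flux here:
h_true = sensible_heat_flux(-20.0, -22.0, 5.0)
h_biased = sensible_heat_flux(-21.0, -22.0, 5.0)
print(h_true, h_biased)
```

With only a 2 K surface-air contrast, typical of stable Arctic conditions, even a small bias in T(0) produces a relative flux error of tens of percent.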

  19. Solution and study of nodal neutron transport equation applying the LTSN-DiagExp method

    International Nuclear Information System (INIS)

    Hauser, Eliete Biasotto; Pazos, Ruben Panta; Vilhena, Marco Tullio de; Barros, Ricardo Carvalho de

    2003-01-01

In this paper we report advances in the three-dimensional nodal discrete-ordinates approximations of the neutron transport equation for Cartesian geometry. We use the combined collocation method for the angular variables and a nodal approach for the spatial variables. By nodal approach we mean the iterated transverse integration of the S N equations. This procedure leads to a set of one-dimensional averaged angular fluxes in each spatial variable. The resulting system of equations is solved with the LTS N method, first applying the Laplace transform to the set of nodal S N equations and then obtaining the solution by symbolic computation. We include the LTS N method by diagonalization to solve the nodal neutron transport equation and then outline the convergence of these nodal-LTS N approximations with the help of a norm associated with the quadrature formula used to approximate the integral term of the neutron transport equation. (author)

  20. Single trial EEG classification applied to a face recognition experiment using different feature extraction methods.

    Science.gov (United States)

    Li, Yudu; Ma, Sen; Hu, Zhongze; Chen, Jiansheng; Su, Guangda; Dou, Weibei

    2015-01-01

Research on brain machine interfaces (BMI) has developed very fast in recent years. Numerous feature extraction methods have successfully been applied to electroencephalogram (EEG) classification in various experiments. However, little effort has been spent on EEG-based BMI systems regarding the cognition of familiar human faces. In this work, we have implemented and compared the classification performances of four common feature extraction methods, namely common spatial patterns, principal component analysis, wavelet transform and interval features. High resolution EEG signals were collected from fifteen healthy subjects stimulated by an equal number of familiar and novel faces. Principal component analysis outperforms the other methods, with average classification accuracy reaching 94.2%, leading to possible real-life applications. Our findings thereby may contribute to BMI systems for face recognition.
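Of the four feature extraction methods compared, principal component analysis is the simplest to sketch; the snippet below projects synthetic "trials" onto the leading principal axes (the real study used high-resolution EEG recordings, not this toy data):

```python
import numpy as np

# PCA feature extraction sketch: center the trial matrix and project each
# trial onto the leading right singular vectors (principal axes). Synthetic
# data with a class-dependent mean shift stand in for real EEG trials.
rng = np.random.default_rng(0)
trials = rng.normal(size=(40, 64))     # 40 trials, 64 features each
trials[:20] += 2.0                      # two "classes" with a mean shift

mean = trials.mean(axis=0)
centered = trials - mean
_, s, vt = np.linalg.svd(centered, full_matrices=False)
n_components = 5
features = centered @ vt[:n_components].T   # low-dimensional feature vectors
print(features.shape)  # (40, 5)
```

The reduced feature vectors would then be fed to a classifier; by construction the first component captures the largest share of the variance, here dominated by the class separation.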

  1. A reflective lens: applying critical systems thinking and visual methods to ecohealth research.

    Science.gov (United States)

    Cleland, Deborah; Wyborn, Carina

    2010-12-01

    Critical systems methodology has been advocated as an effective and ethical way to engage with the uncertainty and conflicting values common to ecohealth problems. We use two contrasting case studies, coral reef management in the Philippines and national park management in Australia, to illustrate the value of critical systems approaches in exploring how people respond to environmental threats to their physical and spiritual well-being. In both cases, we used visual methods--participatory modeling and rich picturing, respectively. The critical systems methodology, with its emphasis on reflection, guided an appraisal of the research process. A discussion of these two case studies suggests that visual methods can be usefully applied within a critical systems framework to offer new insights into ecohealth issues across a diverse range of socio-political contexts. With this article, we hope to open up a conversation with other practitioners to expand the use of visual methods in integrated research.

  2. A note on the accuracy of spectral method applied to nonlinear conservation laws

    Science.gov (United States)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    Fourier spectral method can achieve exponential accuracy both on the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.
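The note's central observation for the linear case can be reproduced numerically: for a square wave, the truncated Fourier series is poor point-wise (Gibbs oscillations) while its moment against an analytic periodic weight stays extremely accurate. A minimal sketch:

```python
import numpy as np

# Point-wise error vs. moment error for a truncated Fourier series of a
# square wave. The weight exp(sin x) is analytic and periodic, so its
# Fourier coefficients decay super-algebraically and the moment error of
# the truncated series is far below the Gibbs-dominated point-wise error.
N = 2000
x = -np.pi + 2.0 * np.pi * np.arange(N) / N    # periodic grid on [-pi, pi)
f = np.sign(np.sin(x))                          # square wave
w = np.exp(np.sin(x))                           # analytic periodic weight

S = np.zeros_like(x)                            # first 32 odd harmonics
for k in range(1, 64, 2):
    S += (4.0 / (np.pi * k)) * np.sin(k * x)

pointwise_err = np.max(np.abs(S - f))

# moment of S: the periodic rectangle rule is spectrally accurate here
moment_S = np.sum(S * w) * (2.0 * np.pi / N)

# moment of f: integrate the two smooth pieces separately (f = +1 on (0, pi))
def trapezoid(g, a, b, m=20001):
    t = np.linspace(a, b, m)
    y = g(t)
    return (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1]) * (b - a) / (m - 1)

integrand = lambda t: np.exp(np.sin(t))
moment_f = trapezoid(integrand, 0.0, np.pi) - trapezoid(integrand, -np.pi, 0.0)
moment_err = abs(moment_S - moment_f)
print(pointwise_err, moment_err)
```

The point-wise error near the jump remains order one, while the moment error is many orders of magnitude smaller, which is exactly the gap the Gegenbauer post-processing exploits.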

  3. Artificial intelligence methods applied for quantitative analysis of natural radioactive sources

    International Nuclear Information System (INIS)

    Medhat, M.E.

    2012-01-01

Highlights: ► Basic description of artificial neural networks. ► Natural gamma ray sources and the problem of their detection. ► Application of a neural network for peak detection and activity determination. - Abstract: The artificial neural network (ANN) is one of the artificial intelligence methods used for modeling and uncertainty estimation in different applications. The objective of the proposed work was to apply ANNs to identify isotopes and to predict the uncertainties of their activities for some natural radioactive sources. The method was tested by analyzing gamma-ray spectra emitted from natural radionuclides in soil samples, detected by high-resolution gamma-ray spectrometry based on HPGe (high purity germanium). The principle of the suggested method is described, including the definition of the relevant input parameters, input data scaling and network training. It is clear that there is satisfactory agreement between the obtained results and those predicted using the neural network.

  4. Finite volume and finite element methods applied to 3D laminar and turbulent channel flows

    Science.gov (United States)

    Louda, Petr; Sváček, Petr; Kozel, Karel; Příhoda, Jaromír

    2014-12-01

    The work deals with numerical simulations of incompressible flow in channels with rectangular cross section. The rectangular cross section itself leads to development of various secondary flow patterns, where accuracy of simulation is influenced by numerical viscosity of the scheme and by turbulence modeling. In this work some developments of stabilized finite element method are presented. Its results are compared with those of an implicit finite volume method also described, in laminar and turbulent flows. It is shown that numerical viscosity can cause errors of same magnitude as different turbulence models. The finite volume method is also applied to 3D turbulent flow around backward facing step and good agreement with 3D experimental results is obtained.

  5. Finite volume and finite element methods applied to 3D laminar and turbulent channel flows

    Energy Technology Data Exchange (ETDEWEB)

    Louda, Petr; Příhoda, Jaromír [Institute of Thermomechanics, Czech Academy of Sciences, Prague (Czech Republic); Sváček, Petr; Kozel, Karel [Czech Technical University in Prague, Fac. of Mechanical Engineering (Czech Republic)

    2014-12-10

    The work deals with numerical simulations of incompressible flow in channels with rectangular cross section. The rectangular cross section itself leads to development of various secondary flow patterns, where accuracy of simulation is influenced by numerical viscosity of the scheme and by turbulence modeling. In this work some developments of stabilized finite element method are presented. Its results are compared with those of an implicit finite volume method also described, in laminar and turbulent flows. It is shown that numerical viscosity can cause errors of same magnitude as different turbulence models. The finite volume method is also applied to 3D turbulent flow around backward facing step and good agreement with 3D experimental results is obtained.
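As a toy analogue of the channel computations above, the sketch below solves fully developed laminar flow between parallel plates with a vertex-centered finite-volume (central-difference) scheme; since the exact profile is parabolic, the second-order scheme reproduces it to round-off. All values are illustrative and the scheme is far simpler than the stabilized FEM and implicit FVM of the paper:

```python
import numpy as np

# 1D analogue of the laminar channel computations: fully developed flow
# between parallel plates, -mu * u'' = G with no-slip walls, discretized
# with a vertex-centered finite-volume (central-difference) scheme.
n = 41                        # grid points across the channel, y in [0, 1]
h = 1.0 / (n - 1)
mu, G = 1.0, 2.0              # viscosity and pressure gradient (illustrative)

A = np.zeros((n, n))
b = np.full(n, -G * h**2 / mu)
for i in range(1, n - 1):     # interior balance: u_{i-1} - 2 u_i + u_{i+1}
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
A[0, 0] = A[-1, -1] = 1.0     # no-slip walls: u = 0
b[0] = b[-1] = 0.0

u = np.linalg.solve(A, b)
y = np.linspace(0.0, 1.0, n)
u_exact = G / (2.0 * mu) * y * (1.0 - y)   # parabolic Poiseuille profile
err = np.max(np.abs(u - u_exact))
print(err)  # round-off level: the scheme is exact for quadratics
```

The secondary-flow and turbulence-modeling effects discussed in the abstract only appear in the full 3D problem; this 1D case isolates the discretization itself.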

  6. The reduction method of statistic scale applied to study of climatic change

    International Nuclear Information System (INIS)

    Bernal Suarez, Nestor Ricardo; Molina Lizcano, Alicia; Martinez Collantes, Jorge; Pabon Jose Daniel

    2000-01-01

In climate change studies, global circulation models of the atmosphere (GCMAs) enable one to simulate the global climate, with the field variables represented on grid points about 300 km apart. One particular interest concerns the simulation of possible changes in rainfall and surface air temperature due to an assumed increase of greenhouse gases. However, the models yield climatic projections on grid points that in most cases do not correspond to the sites of major interest. To achieve local estimates of the climatological variables, methods like the one known as statistical downscaling are applied. In this article we show a case in point by applying canonical correlation analysis (CCA) to the Guajira Region in the northeast of Colombia
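A hedged sketch of the CCA computation itself (via QR factorizations and an SVD), on synthetic data in which one shared latent signal stands in for the large-scale GCM field and the local station records:

```python
import numpy as np

# CCA sketch for statistical downscaling: find linear combinations of the
# large-scale field X and the local variables Y with maximal correlation.
# Synthetic data with one shared latent signal stand in for GCM output and
# station records; loadings and noise levels are arbitrary.
rng = np.random.default_rng(1)
t = rng.normal(size=500)                                  # shared latent signal
X = np.outer(t, [1.0, 0.5, -0.3]) + 0.1 * rng.normal(size=(500, 3))
Y = np.outer(t, [0.8, -0.6]) + 0.1 * rng.normal(size=(500, 2))

def first_canonical_correlation(X, Y):
    """Largest canonical correlation via QR + SVD (Bjorck-Golub approach)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return s[0]

r = first_canonical_correlation(X, Y)
print(r)  # close to 1: the two data sets share one strong mode
```

In a downscaling application, the canonical patterns (not shown here) are what map a GCM projection onto local rainfall or temperature estimates.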

  7. An Effective Method on Applying Feedback Error Learning Scheme to Functional Electrical Stimulation Controller

    Science.gov (United States)

    Watanabe, Takashi; Kurosawa, Kenji; Yoshizawa, Makoto

A Feedback Error Learning (FEL) scheme was found to be applicable to joint angle control by Functional Electrical Stimulation (FES) in our previous study. However, the FEL-FES controller had a problem in learning the inverse dynamics model (IDM) in some cases. In this paper, methods of applying FEL to FES control were examined in controlling 1-DOF movement of the wrist joint by stimulating 2 muscles, through computer simulation under several control conditions with several subject models. The problems in applying FEL to the FES controller were suggested to lie in restricting the stimulation intensity to positive values between the minimum and maximum intensities, and in cases of very small output values of the IDM. Learning of the IDM was greatly improved by considering the IDM output range and setting a minimum ANN output value when calculating the ANN connection weight changes.

  8. Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation

    Science.gov (United States)

    Hatten, Noble; Russell, Ryan P.

    2017-12-01

    A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
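A 2-stage Gauss-Legendre IRK step (order 4), with the implicit stage equations solved by simple fixed-point iteration rather than a production Newton solver, can be sketched as:

```python
import numpy as np

# 2-stage Gauss-Legendre IRK (order 4), scalar test problem y' = -y.
s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6.0],
              [0.25 + s3 / 6.0, 0.25]])
b = np.array([0.5, 0.5])
c = np.array([0.5 - s3 / 6.0, 0.5 + s3 / 6.0])

def f(t, y):
    return -y  # exact solution: e^{-t}

def glirk_step(t, y, h):
    # Solve the implicit stage equations K_i = f(t + c_i h, y + h sum_j A_ij K_j)
    # by fixed-point iteration (a Newton solver would be used in practice;
    # parallelism enters by evaluating the stage dynamics concurrently).
    K = np.zeros(2)
    for _ in range(50):
        K = np.array([f(t + c[i] * h, y + h * (A[i] @ K)) for i in range(2)])
    return y + h * (b @ K)

y, t, h = 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    y = glirk_step(t, y, h)
    t += h
print(abs(y - np.exp(-1.0)))  # small global error for this step size
```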

  9. ADVANTAGES AND DISADVANTAGES OF APPLYING EVOLVED METHODS IN MANAGEMENT ACCOUNTING PRACTICE

    Directory of Open Access Journals (Sweden)

    SABOU FELICIA

    2014-05-01

Full Text Available The evolved methods of management accounting have been developed with the purpose of removing the disadvantages of the classical methods; they are methods adapted to the new market conditions, which provide much more useful cost-related information so that the management of the company is able to take certain strategic decisions. Out of the category of evolved methods, the most used is the standard-cost method, due to the advantages that it presents, being widely used in calculating production costs in some developed countries. The main advantages of the standard-cost method are: in-advance knowledge of the production costs and of the measures that ensure compliance with these; a systematic control over the costs, managed with the help of the deviations calculated from the standard costs, thus allowing decisions to be made in due time as far as the elimination of the deviations and the improvement of the activity are concerned; and its nature as a method of analysis, control and cost forecasting. Although the advantages of using standards are significant, there are a few disadvantages to the employment of the standard-cost method: sometimes difficulties can appear in establishing the deviations from the standard costs, and the method does not allow an accurate calculation of the fixed costs. As a result of the study, we can observe that the evolved methods of management accounting, as compared to the classical ones, present a series of advantages linked to a better analysis, control and forecasting of costs, whereas the main disadvantage is related to the large amount of work necessary for these methods to be applied.
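The deviation (variance) analysis that the abstract credits to the standard-cost method can be illustrated with the classic direct-materials split; the figures below are invented:

```python
# Standard-cost deviation (variance) analysis for direct materials.
# All figures are illustrative, not from the article.
standard_price = 4.00      # standard cost per kg
standard_qty = 2.0         # standard kg per unit produced
actual_price = 4.20        # actual cost per kg
actual_qty = 2100.0        # actual kg used
units_produced = 1000

# Price deviation: did we pay more per kg than the standard allows?
price_variance = round((actual_price - standard_price) * actual_qty, 2)
# Quantity deviation: did we use more kg than the standard allows?
quantity_variance = round(
    (actual_qty - standard_qty * units_produced) * standard_price, 2)

print(price_variance, quantity_variance)  # 420.0 and 400.0, both unfavourable
```

Tracking such deviations as they arise is what enables the "decisions in due time" the abstract refers to.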

  11. Arctic Basemaps In Google Maps

    DEFF Research Database (Denmark)

    Muggah, J.; Mioc, Darka

    2010-01-01

the advantages of the use of Google Maps, to display the OMG's Arctic data. The map should load the large Arctic dataset in a reasonable time. The bathymetric images were created using software in Linux written by the OMG, and a step-by-step process was used to create images from the multibeam data...... collected by the OMG in the Arctic. The website was also created using the Linux operating system. The projection needed to be changed from Lambert Conformal Conic (useful at higher latitudes) to Mercator (used by Google Maps) and the data needed to have a common colour scheme. After creating and testing...... a prototype website using Google Ground overlay and Tile overlay, it was determined that the high resolution images (10m) were loading very slowly and the ground overlay method would not be useful for displaying the entire dataset. Therefore the Tile overlays were selected to be used within Google Maps. Tile...
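Tile overlays in Google Maps address tiles in the Web Mercator ("slippy map") scheme; a standard lat/lon-to-tile-index conversion (not the OMG's actual code) looks like:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Web Mercator (slippy map) tile indices used by Google Maps tile overlays."""
    lat = math.radians(lat_deg)
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return x, y

# A point in the Canadian Arctic (near Resolute) at a moderate zoom level:
print(latlon_to_tile(74.7, -94.8, 6))  # → (15, 11)
```

Note that Web Mercator only reaches about ±85.05° latitude, one reason a Lambert Conformal Conic source projection must be reprojected for very high latitudes.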

  12. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence.

    Science.gov (United States)

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-02-18

Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on the equation solution but on the geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.
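For reference, the "widely applied" point-cloud approach the paper argues against, solving for the rigid transform between two coordinate systems from matched points, is commonly implemented with an SVD (Kabsch) solution; a minimal sketch:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ≈ R @ P + t (Kabsch/SVD).
    P, Q are 3xN arrays of matched points."""
    p_mean, q_mean = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

rng = np.random.default_rng(1)
P = rng.normal(size=(3, 10))                   # points in frame A
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
Q = R_true @ P + t_true                        # same points in frame B
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

The characteristic-line method of the paper replaces this least-squares solve with an explicit sequence of rotations and translations, avoiding the ill-conditioning issue noted in the abstract.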

  15. Projector methods applied to numerical integration of the SN transport equation

    International Nuclear Information System (INIS)

    Hristea, V.; Covaci, St.

    2003-01-01

We are developing two methods of integration for the SN transport equation in x-y geometry, both based on the projector technique. By cellularization of the phase space and by choosing a finite basis of orthogonal functions which characterize the angular flux, the non-selfadjoint transport equation is reduced to a cellular automaton. This automaton is completely described by the transition matrix T. Within this paper two distinct methods of projection are described. One of them uses the transversal integration technique. As an alternative, we applied the projector method to the integral SN transport equation. We show that the constant spatial approximation of the integral SN transport equation does not lead to negative fluxes. One of the problems with the projector method, namely the appearance of numerical instability for small intervals, is solved by the Padé representation of the elements of matrix T. Numerical tests presented here compare the numerical performance of the algorithms obtained by the two projection methods. The Padé representation was also taken into account for both algorithm types. (authors)

  16. Efficient combination of acceleration techniques applied to high frequency methods for solving radiation and scattering problems

    Science.gov (United States)

    Lozano, Lorena; Algar, Ma Jesús; García, Eliseo; González, Iván; Cátedra, Felipe

    2017-12-01

    An improved ray-tracing method applied to high-frequency techniques such as the Uniform Theory of Diffraction (UTD) is presented. The main goal is to increase the speed of the analysis of complex structures while considering a vast number of observation directions and taking into account multiple bounces. The method is based on a combination of the Angular Z-Buffer (AZB), the Space Volumetric Partitioning (SVP) algorithm and the A∗ heuristic search method to treat multiple bounces. In addition, a Master Point strategy was developed to analyze efficiently a large number of Near-Field points or Far-Field directions. This technique can be applied to electromagnetic radiation problems, scattering analysis, propagation at urban or indoor environments and to the mutual coupling between antennas. Due to its efficiency, its application is suitable to study large antennas radiation patterns and even its interactions with complex environments, including satellites, ships, aircrafts, cities or another complex electrically large bodies. The new technique appears to be extremely efficient at these applications even when considering multiple bounces.
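The A* heuristic search used here to trace multiple bounces can be sketched in generic grid form (the AZB/SVP acceleration structures of the paper are omitted):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; 0 = free cell, 1 = blocked.
    Returns the length of the shortest path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            return g
        if g > best_g.get(cur, float("inf")):
            continue                         # stale heap entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],   # a wall forces a detour, like an occluding facet
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # → 8
```

The heuristic prunes candidate paths the same way the AZB prunes candidate facets: work is spent only on directions that can still improve the best known solution.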

  17. Methodical basis of training of cadets for the military applied heptathlon competitions

    Directory of Open Access Journals (Sweden)

    R.V. Anatskyi

    2017-12-01

Full Text Available The purpose of the research is to develop methodical bases for training cadets for military applied heptathlon competitions. Material and methods: Cadets of the 2nd-3rd courses, aged 19-20 years (n=20), participated in the research. Cadets were selected by the best results in performing the exercises included in the program of military applied heptathlon competitions (100 m run, 50 m freestyle swimming, Kalashnikov rifle shooting, pull-ups, obstacle course, grenade throwing, 3000 m run). Preparation took place on the basis of a training center. All trainings were organized and carried out according to the methodical basics: in a weekly preparation microcycle, on five days cadets had two trainings a day (on Saturday one training, on Sunday rest). The selected exercises were performed with individual loads. Results: Sport scores demonstrated top results in the 100 m run, 3000 m run and pull-ups. The indices for the exercise "obstacle course" were much lower than expected. Rather low results were demonstrated in swimming and shooting. Conclusions: The results of the research indicate the necessity of quality improvement in cadets' weapons proficiency and in physical readiness to perform the exercises requiring complex demonstration of all physical qualities.

  18. Symanzik's method applied to fractional quantum Hall edge states

    Energy Technology Data Exchange (ETDEWEB)

    Blasi, A.; Ferraro, D.; Maggiore, N.; Magnoli, N. [Dipartimento di Fisica, Universita di Genova (Italy); LAMIA-INFM-CNR, Genova (Italy); Sassetti, M.

    2008-11-15

    The method of separability, introduced by Symanzik, is applied in order to describe the effect of a boundary for a fractional quantum Hall liquid in the Laughlin series. An Abelian Chern-Simons theory with plane boundary is considered and the Green functions both in the bulk and on the edge are constructed, following a rigorous, perturbative, quantum field theory treatment. We show that the conserved boundary currents find an explicit interpretation in terms of the continuity equation with the electron density satisfying the Tomonaga-Luttinger commutation relation. (Abstract Copyright [2008], Wiley Periodicals, Inc.)

  19. Method for applying a photoresist layer to a substrate having a preexisting topology

    Science.gov (United States)

    Morales, Alfredo M.; Gonzales, Marcela

    2004-01-20

The present invention describes a method for preventing a photoresist layer from delaminating, or peeling away from, the surface of a substrate that already contains an etched three-dimensional structure such as a hole or a trench. The process comprises establishing a saturated vapor phase of the solvent media used to formulate the photoresist layer above the surface of the coated substrate, as the applied photoresist is heated in order to "cure" or drive off the retained solvent constituent within the layer. By controlling the rate and manner in which solvent is removed from the photoresist layer, the layer is stabilized and kept from differentially shrinking and peeling away from the substrate.

  20. The Trojan Horse Method Applied to the Astrophysically Relevant Proton Capture Reactions on Li Isotopes

    Science.gov (United States)

    Tumino, A.; Spitaleri, C.; Musumarra, A.; Pellegriti, M. G.; Pizzone, R. G.; Rinollo, A.; Romano, S.; Pappalardo, L.; Bonomo, C.; Del Zoppo, A.; Di Pietro, A.; Figuera, P.; La Cognata, M.; Lamia, L.; Cherubini, S.; Rolfs, C.; Typel, S.

    2005-12-01

The 7Li(p,α)4He, 6Li(d,α)4He and 6Li(p,α)3He reactions were studied in the framework of the Trojan Horse Method applied to the d(7Li,αα)n, 6Li(6Li,αα)4He and d(6Li,α3He)n three-body reactions, respectively. Their bare astrophysical S-factors were extracted and, from the comparison with the behavior of the screened direct data, an independent estimate of the screening potential was obtained.

  1. Making Design Decisions Visible: Applying the Case-Based Method in Designing Online Instruction

    Directory of Open Access Journals (Sweden)

Heng Luo

    2011-01-01

    Full Text Available The instructional intervention in this design case is a self-directed online tutorial that applies the case-based method to teach educators how to design and conduct entrepreneurship programs for elementary school students. In this article, the authors describe the major decisions made in each phase of the design and development process, explicate the rationales behind them, and demonstrate their effect on the production of the tutorial. Based on such analysis, the guidelines for designing case-based online instruction are summarized for the design case.

  2. The nature of spatial transitions in the Arctic.

    Science.gov (United States)

    H. E. Epstein; J. Beringer; W. A. Gould; A. H. Lloyd; C. D. Thompson; F. S. Chapin III; G. J. Michaelson; C. L. Ping; T. S. Rupp; D. A. Walker

    2004-01-01

Aim: Describe the spatial and temporal properties of transitions in the Arctic and develop a conceptual understanding of the nature of these spatial transitions in the face of directional environmental change. Location: Arctic tundra ecosystems of the North Slope of Alaska and the tundra-forest region of the Seward Peninsula, Alaska. Methods: We synthesize information from...

  3. An energy efficient building for the Arctic climate

    DEFF Research Database (Denmark)

    Vladyková, Petra

    the fundamental definition of a passive house in the Arctic and therefore to save the cost of traditional heating, but that would incur high costs for the building materials and the provision of technical solutions of extremely high standards which would take too many years to pay back in the life time...... usage of an extreme energy efficient building in the Arctic. The purpose of this Ph.D. study is to determine the optimal use of an energy efficient house in the Arctic derived from the fundamental definition of a passive house, investigations of building parameters including the building envelope...... of a building. The fundamental definition which applies to all climates can be realized in the Arctic regions at very high costs using fundamental design values and the building technologies available in the Arctic. Based on the investigations, the optimal energy performing building is derived from a passive...

  4. A Review of Auditing Methods Applied to the Content of Controlled Biomedical Terminologies

    Science.gov (United States)

    Zhu, Xinxin; Fan, Jung-Wei; Baorto, David M.; Weng, Chunhua; Cimino, James J.

    2012-01-01

    Although controlled biomedical terminologies have been with us for centuries, it is only in the last couple of decades that close attention has been paid to the quality of these terminologies. The result of this attention has been the development of auditing methods that apply formal methods to assessing whether terminologies are complete and accurate. We have performed an extensive literature review to identify published descriptions of these methods and have created a framework for characterizing them. The framework considers manual, systematic and heuristic methods that use knowledge (within or external to the terminology) to measure quality factors of different aspects of the terminology content (terms, semantic classification, and semantic relationships). The quality factors examined included concept orientation, consistency, non-redundancy, soundness and comprehensive coverage. We reviewed 130 studies that were retrieved based on keyword search on publications in PubMed, and present our assessment of how they fit into our framework. We also identify which terminologies have been audited with the methods and provide examples to illustrate each part of the framework. PMID:19285571

  5. A Comparison of Parametric and Non-Parametric Methods Applied to a Likert Scale.

    Science.gov (United States)

    Mircioiu, Constantin; Atkinson, Jeffrey

    2017-05-10

    A trenchant and passionate dispute over the use of parametric versus non-parametric methods for the analysis of Likert scale ordinal data has raged for the past eight decades. The answer is not a simple "yes" or "no" but is related to hypotheses, objectives, risks, and paradigms. In this paper, we took a pragmatic approach. We applied both types of methods to the analysis of actual Likert data on responses from different professional subgroups of European pharmacists regarding competencies for practice. Results obtained show that with "large" (>15) numbers of responses and similar (but clearly not normal) distributions from different subgroups, parametric and non-parametric analyses give in almost all cases the same significant or non-significant results for inter-subgroup comparisons. Parametric methods were more discriminant in the cases of non-similar conclusions. Considering that the largest differences in opinions occurred in the upper part of the 4-point Likert scale (ranks 3 "very important" and 4 "essential"), a "score analysis" based on this part of the data was undertaken. This transformation of the ordinal Likert data into binary scores produced a graphical representation that was visually easier to understand as differences were accentuated. In conclusion, in this case of Likert ordinal data with high response rates, restraining the analysis to non-parametric methods leads to a loss of information. The addition of parametric methods, graphical analysis, analysis of subsets, and transformation of data leads to more in-depth analyses.
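The "score analysis" described, collapsing the 4-point scale to binary by pooling ranks 3 and 4, can be sketched with synthetic responses (not the survey data):

```python
# "Score analysis": transform 4-point Likert responses (1..4) into binary scores
# by pooling ranks 3 ("very important") and 4 ("essential"), then compare the
# proportion of high scores between two subgroups. Responses are synthetic.
group_a = [4, 4, 3, 3, 3, 2, 4, 3, 2, 4, 3, 4, 3, 3, 2, 4]
group_b = [2, 3, 2, 1, 2, 3, 2, 2, 1, 3, 2, 2, 3, 1, 2, 2]

def high_score_rate(responses):
    """Fraction of responses in the top two ranks of the 4-point scale."""
    return sum(1 for r in responses if r >= 3) / len(responses)

rate_a, rate_b = high_score_rate(group_a), high_score_rate(group_b)
print(round(rate_a, 2), round(rate_b, 2))  # → 0.81 0.25
```

As the abstract notes, this transformation accentuates differences concentrated in the upper part of the scale, at the cost of discarding the ordinal detail that parametric methods exploit.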

  6. Geometric methods for estimating representative sidewalk widths applied to Vienna's streetscape surfaces database

    Science.gov (United States)

    Brezina, Tadej; Graser, Anita; Leth, Ulrich

    2017-04-01

    Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between methods as well as to previous research, existing databases and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.
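The maximum-inscribing-circle idea behind the first two width estimates can be sketched with a brute-force grid search over a polygon (pure Python; the Viennese pipeline itself is GIS-based and the geometry below is invented):

```python
import math

def seg_dist(p, a, b):
    """Distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def inside(p, poly):
    """Ray-casting point-in-polygon test."""
    x, y, n, hit = p[0], p[1], len(poly), False
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            hit = not hit
    return hit

def max_inscribed_radius(poly, step=0.05):
    """Grid search for the largest circle that fits inside the polygon."""
    xs, ys = [p[0] for p in poly], [p[1] for p in poly]
    best, x = 0.0, min(xs)
    while x <= max(xs):
        y = min(ys)
        while y <= max(ys):
            if inside((x, y), poly):
                r = min(seg_dist((x, y), poly[i], poly[(i + 1) % len(poly)])
                        for i in range(len(poly)))
                best = max(best, r)
            y += step
        x += step
    return best

# A 10 m x 2 m sidewalk polygon: representative width ≈ 2 * inscribed radius.
sidewalk = [(0, 0), (10, 0), (10, 2), (0, 2)]
print(round(2 * max_inscribed_radius(sidewalk), 2))
```

The minimum circumscribing circle of the same polygon would overestimate the usable width of such an elongated surface, which is why the two measures are combined in the paper.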

  7. Complex Method Mixed with PSO Applying to Optimization Design of Bridge Crane Girder

    Directory of Open Access Journals (Sweden)

    He Yan

    2017-01-01

Full Text Available In engineering design, the basic complex method does not have enough global search ability for nonlinear optimization problems, so a version mixed with particle swarm optimization (PSO) is presented in this paper: the optimal particle, evaluated from the fitness function of the particle swarm, displaces the complex vertex in order to realize the optimality principle of the largest distance from the complex centre. This method is applied to the optimization design of the box girder of a bridge crane with constraint conditions. First, a mathematical model of the girder optimization is set up, in which the box girder cross-section area of the bridge crane is taken as the objective function, its four size parameters as design variables, and girder mechanical performance, manufacturing process, border sizes and other requirements as constraint conditions. Then the complex method mixed with PSO is used to solve the optimization design problem of the crane box girder as a constrained optimization problem, and its optimal results achieve the goal of lightweight design and reduced crane manufacturing cost. The method is shown to be reliable, practical and efficient by practical engineering calculation and comparative analysis with the basic complex method.
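A minimal PSO with a penalty-handled constraint, standing in for the girder formulation (objective, constraint and bounds below are all invented, and the complex-method hybridization is omitted), can be sketched as:

```python
import random

random.seed(7)

# Toy stand-in for the girder problem: minimize a "cross-section area"
# f(h, b) = h * b subject to a "strength" constraint h * b**2 >= 8,
# handled with a penalty term. The analytic optimum is (0.5, 4) with f = 2.
def fitness(x):
    h, b = x
    penalty = max(0.0, 8.0 - h * b * b) * 100.0   # constraint violation penalty
    return h * b + penalty

LO, HI = 0.5, 10.0                                 # design-variable bounds
n, dims, iters = 30, 2, 200
pos = [[random.uniform(LO, HI) for _ in range(dims)] for _ in range(n)]
vel = [[0.0] * dims for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=fitness)

for _ in range(iters):
    for i in range(n):
        for d in range(dims):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] = min(max(pos[i][d] + vel[i][d], LO), HI)  # clamp to bounds
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=fitness)

print([round(v, 2) for v in gbest], round(fitness(gbest), 3))
```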

  8. Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme

    Directory of Open Access Journals (Sweden)

    Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin

    2012-08-01

Full Text Available The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct (motor) torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation methods of acquiring the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory track error of the AFC scheme. The knowledge is developed from the trajectory track error characteristic based on the previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is forced. An experimental study was performed using simulation work on the AFC scheme with KBTEPM applied to a two-planar manipulator, in which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory track error significantly, even in the presence of the introduced disturbances. Key words: active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.

  9. Efficient alpha particle detection by CR-39 applying 50 Hz-HV electrochemical etching method

    International Nuclear Information System (INIS)

    Sohrabi, M.; Soltani, Z.

    2016-01-01

Alpha particles can be detected by CR-39 by applying either chemical etching (CE), electrochemical etching (ECE), or combined pre-etching and ECE, usually through a multi-step HF-HV ECE process at temperatures much higher than room temperature. By applying pre-etching, characteristic responses of fast-neutron-induced recoil tracks in CR-39 by HF-HV ECE versus KOH normality (N) have shown two high-sensitivity peaks around 5-6 and 15-16 N and a large-diameter peak with a minimum sensitivity around 10-11 N at 25°C. On the other hand, the 50 Hz-HV ECE method recently advanced in our laboratory detects alpha particles with high efficiency and a broad registration energy range with small ECE tracks in polycarbonate (PC) detectors. By taking advantage of the CR-39 sensitivity to alpha particles, the efficacy of the 50 Hz-HV ECE method and the exotic responses of CR-39 under different KOH normalities, detection characteristics of 0.8 MeV alpha particle tracks were studied in 500 μm CR-39 for different fluences, ECE durations and KOH normalities. Alpha registration efficiency increased as ECE duration increased, to 90 ± 2% after 6-8 h, beyond which plateaus are reached. Alpha track density versus fluence is linear up to 10⁶ tracks cm⁻². The efficiency and mean track diameter versus alpha fluence up to 10⁶ alphas cm⁻² decrease as the fluence increases. Background track density and minimum detection limit are linear functions of ECE duration and increase as normality increases. The CR-39 processed for the first time in this study by the 50 Hz-HV ECE method proved to provide a simple, efficient and practical alpha detection method at room temperature. - Highlights: • Alpha particles of 0.8 MeV were detected in CR-39 by the 50 Hz-HV ECE method. • Efficiency/track diameter was studied vs fluence and time for 3 KOH normalities. • Background track density and minimum detection limit vs duration were studied. • A new simple, efficient and low-cost alpha detection method

  10. Arctic ice management

    Science.gov (United States)

    Desch, Steven J.; Smith, Nathan; Groppi, Christopher; Vargas, Perry; Jackson, Rebecca; Kalyaan, Anusha; Nguyen, Peter; Probst, Luke; Rubin, Mark E.; Singleton, Heather; Spacek, Alexander; Truitt, Amanda; Zaw, Pye Pye; Hartnett, Hilairy E.

    2017-01-01

    As the Earth's climate has changed, Arctic sea ice extent has decreased drastically. It is likely that the late-summer Arctic will be ice-free as soon as the 2030s. This loss of sea ice represents one of the most severe positive feedbacks in the climate system, as sunlight that would otherwise be reflected by sea ice is absorbed by open ocean. It is unlikely that CO2 levels and mean temperatures can be decreased in time to prevent this loss, so restoring sea ice artificially is an imperative. Here we investigate a means for enhancing Arctic sea ice production by using wind power during the Arctic winter to pump water to the surface, where it will freeze more rapidly. We show that where appropriate devices are employed, it is possible to increase ice thickness above natural levels, by about 1 m over the course of the winter. We examine the effects this has in the Arctic climate, concluding that deployment over 10% of the Arctic, especially where ice survival is marginal, could more than reverse current trends of ice loss in the Arctic, using existing industrial capacity. We propose that winter ice thickening by wind-powered pumps be considered and assessed as part of a multipronged strategy for restoring sea ice and arresting the strongest feedbacks in the climate system.
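The scale of the proposed ~1 m wintertime thickening can be checked with a latent-heat estimate using standard physical constants (a back-of-envelope check, not the authors' climate model):

```python
# Back-of-envelope check on thickening Arctic sea ice by ~1 m over a winter.
# Standard physical constants; this is a rough estimate, not the paper's model.
rho_ice = 917.0        # kg/m^3, density of sea ice
L_fusion = 334e3       # J/kg, latent heat of fusion of water
dh = 1.0               # m of extra ice grown over the winter
winter = 180 * 86400   # s, ~6-month freezing season

energy_per_m2 = rho_ice * L_fusion * dh   # J released per m^2 of surface
mean_flux = energy_per_m2 / winter        # W/m^2 that must escape to the air
print(round(energy_per_m2 / 1e6), round(mean_flux, 1))  # → 306 19.7
```

The required mean heat loss of roughly 20 W/m² is modest next to typical wintertime Arctic surface fluxes, which is consistent with the paper's point that pumping water onto the surface, where it loses heat directly to the cold air rather than through the insulating ice, can accelerate growth.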

  11. A METHOD FOR PREPARING A SUBSTRATE BY APPLYING A SAMPLE TO BE ANALYSED

    DEFF Research Database (Denmark)

    2017-01-01

    The invention relates to a method for preparing a substrate (105a) comprising a sample reception area (110) and a sensing area (111). The method comprises the steps of: 1) applying a sample on the sample reception area; 2) rotating the substrate around a predetermined axis; 3) during rotation......, at least part of the liquid travels from the sample reception area to the sensing area due to capillary forces acting between the liquid and the substrate; and 4) removing the wave of particles and liquid formed at one end of the substrate. The sensing area is closer to the predetermined axis than...... the sample reception area. The sample comprises a liquid part and particles suspended therein....

  12. Super-convergence of Discontinuous Galerkin Method Applied to the Navier-Stokes Equations

    Science.gov (United States)

    Atkins, Harold L.

    2009-01-01

    The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin method are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features that are dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin method are applied to the simulation of unsteady viscous flow over a two-dimensional cylinder. Convergence of the period of the naturally occurring oscillation is examined and shown to converge at order 2p+1, where p is the polynomial degree of the discontinuous Galerkin basis. Comparisons are made between the different discretizations and with theoretical analysis.
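    A reported rate such as 2p+1 is checked numerically in the usual way, by comparing errors on two successively refined grids. A stdlib sketch with hypothetical error values (not the paper's data):

```python
import math

def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
    """Observed order of accuracy from errors on two successively refined grids:
    p_obs = log(e_coarse / e_fine) / log(r)."""
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

# For DG(p) with p = 2, the period should converge at order 2p + 1 = 5;
# hypothetical errors with ratio 32 on a factor-2 refinement recover that order:
print(observed_order(1.0e-3, 3.125e-5))
```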

  13. SANS contrast variation method applied in experiments on ferrofluids at MURN instrument of IBR-2 reactor

    Science.gov (United States)

    Balasoiu, Maria; Kuklin, Alexander

    2012-03-01

    The separate determination of the nuclear and magnetic contributions to the scattering intensity by means of a contrast variation method, applied in small angle neutron scattering experiments with nonpolarized neutrons on ferrofluids in the early 1990s at the MURN instrument, is reviewed. The nuclear scattering contribution gives the features of the colloidal particle dimensions, the surfactant shell structure and the degree of solvent penetration into the macromolecular layer. The magnetic scattering part is compatible with models in which the particle surface is assumed to have a nonmagnetic layer. Details on the experimental "Grabcev method" for obtaining separate nuclear and magnetic contributions to the small angle neutron scattering intensity of unpolarized neutrons are emphasized for the case of a high quality ultrastable benzene-based ferrofluid with magnetite nanoparticles.

  14. SANS contrast variation method applied in experiments on ferrofluids at MURN instrument of IBR-2 reactor

    International Nuclear Information System (INIS)

    Balasoiu, Maria; Kuklin, Alexander

    2012-01-01

    The separate determination of the nuclear and magnetic contributions to the scattering intensity by means of a contrast variation method, applied in small angle neutron scattering experiments with nonpolarized neutrons on ferrofluids in the early 1990s at the MURN instrument, is reviewed. The nuclear scattering contribution gives the features of the colloidal particle dimensions, the surfactant shell structure and the degree of solvent penetration into the macromolecular layer. The magnetic scattering part is compatible with models in which the particle surface is assumed to have a nonmagnetic layer. Details on the experimental 'Grabcev method' for obtaining separate nuclear and magnetic contributions to the small angle neutron scattering intensity of unpolarized neutrons are emphasized for the case of a high quality ultrastable benzene-based ferrofluid with magnetite nanoparticles.

  15. Infrared thermography inspection methods applied to the target elements of W7-X divertor

    Energy Technology Data Exchange (ETDEWEB)

    Missirlian, M. [Association Euratom-CEA, CEA/DSM/DRFC, CEA/Cadarache, F-13108 Saint Paul Lez Durance (France)], E-mail: marc.missirlian@cea.fr; Traxler, H. [PLANSEE SE, Technology Center, A-6600 Reutte (Austria); Boscary, J. [Max-Planck-Institut fuer Plasmaphysik, Euratom Association, Boltzmannstr. 2, D-85748 Garching (Germany); Durocher, A.; Escourbiac, F.; Schlosser, J. [Association Euratom-CEA, CEA/DSM/DRFC, CEA/Cadarache, F-13108 Saint Paul Lez Durance (France); Schedler, B.; Schuler, P. [PLANSEE SE, Technology Center, A-6600 Reutte (Austria)

    2007-10-15

    Non-destructive examination (NDE) is one of the key issues in developing highly loaded plasma-facing components (PFCs) for next-generation fusion devices such as W7-X and ITER. The most critical step is certainly the fabrication and the examination of the bond between the armour and the heat sink. Two inspection systems based on infrared thermography methods, namely transient thermography (SATIR-CEA) and pulsed thermography (ARGUS-PLANSEE), are being developed and have been applied to the pre-series of target elements of the W7-X divertor. Results obtained from qualification experiments performed on target elements with artificial calibrated defects demonstrated the capability of the two techniques and raised the efficiency of inspection to a level appropriate for industrial application.

  16. Infrared thermography inspection methods applied to the target elements of W7-X divertor

    International Nuclear Information System (INIS)

    Missirlian, M.; Traxler, H.; Boscary, J.; Durocher, A.; Escourbiac, F.; Schlosser, J.; Schedler, B.; Schuler, P.

    2007-01-01

    Non-destructive examination (NDE) is one of the key issues in developing highly loaded plasma-facing components (PFCs) for next-generation fusion devices such as W7-X and ITER. The most critical step is certainly the fabrication and the examination of the bond between the armour and the heat sink. Two inspection systems based on infrared thermography methods, namely transient thermography (SATIR-CEA) and pulsed thermography (ARGUS-PLANSEE), are being developed and have been applied to the pre-series of target elements of the W7-X divertor. Results obtained from qualification experiments performed on target elements with artificial calibrated defects demonstrated the capability of the two techniques and raised the efficiency of inspection to a level appropriate for industrial application.

  17. Data Analytics of Mobile Serious Games: Applying Bayesian Data Analysis Methods

    Directory of Open Access Journals (Sweden)

    Heide Lukosch

    2018-03-01

    Full Text Available Traditional teaching methods in the field of resuscitation training show some limitations, while teaching the right actions in critical situations could increase the number of people saved after a cardiac arrest. For our study, we developed a mobile game to support the transfer of theoretical knowledge on resuscitation. The game has been tested at three schools of further education. Data were collected from 171 players. To analyze this large data set, which came from different sources and was of varying quality, different types of data modeling and analysis had to be applied. This approach showed its usefulness in analyzing the large set of data from different sources. It revealed some interesting findings, such as that female players outperformed the male ones, and that the game, which fosters informal, self-directed learning, is as effective as the traditional formal learning method.
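    The record does not give the underlying statistics, but the simplest Bayesian comparison of two groups (e.g. female vs. male success rates) can be sketched with a conjugate Beta-Binomial model using only the standard library. All counts below are hypothetical, not the study's data:

```python
import random

def beta_posterior_samples(successes, trials, n=100_000, seed=1):
    """Posterior draws for a success rate under a uniform Beta(1, 1) prior:
    the posterior is Beta(1 + successes, 1 + failures)."""
    rng = random.Random(seed)
    a, b = 1 + successes, 1 + (trials - successes)
    return [rng.betavariate(a, b) for _ in range(n)]

# Hypothetical counts: 52/80 female vs. 55/91 male task successes.
female = beta_posterior_samples(52, 80, seed=1)
male = beta_posterior_samples(55, 91, seed=2)
p_female_better = sum(f > m for f, m in zip(female, male)) / len(female)
print(round(p_female_better, 2))  # posterior P(female rate > male rate)
```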

  18. Performance comparison of two efficient genomic selection methods (gsbay & MixP) applied in aquacultural organisms

    Science.gov (United States)

    Su, Hailin; Li, Hengde; Wang, Shi; Wang, Yangfan; Bao, Zhenmin

    2017-02-01

    Genomic selection is more and more popular in animal and plant breeding industries all around the world, as it can be applied early in life without impacting selection candidates. The objective of this study was to bring the advantages of genomic selection to scallop breeding. Two different genomic selection tools, MixP and gsbay, were applied to the genomic evaluation of simulated data and Zhikong scallop (Chlamys farreri) field data. The results were compared with the genomic best linear unbiased prediction (GBLUP) method, which has been applied widely. Our results showed that both MixP and gsbay could accurately estimate single-nucleotide polymorphism (SNP) marker effects, and thereby could be applied for the analysis of genomic estimated breeding values (GEBV). In simulated data from different scenarios, the accuracy of the acquired GEBV ranged from 0.20 to 0.78 with MixP, from 0.21 to 0.67 with gsbay, and from 0.21 to 0.61 with GBLUP. Estimations made by MixP and gsbay were expected to be more reliable than those estimated by GBLUP. Predictions made by gsbay were more robust, while with MixP the computation is much faster, especially when dealing with large-scale data. These results suggested that both algorithms, implemented by MixP and gsbay, are feasible for carrying out genomic selection in scallop breeding, and more genotype data will be necessary to produce genomic estimated breeding values with a higher accuracy for the industry.
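    Neither tool's algorithm is reproduced here; as a minimal illustration of SNP marker-effect estimation, the sketch below solves a ridge-regression (SNP-BLUP-style) system for two markers by Cramer's rule. Genotypes, phenotypes, and the shrinkage parameter are all toy values:

```python
def snp_blup_2markers(X, y, lam):
    """Ridge (SNP-BLUP-style) estimate of two marker effects: solve
    (X'X + lam*I) b = X'y by Cramer's rule. lam is the shrinkage parameter."""
    s11 = sum(x[0] * x[0] for x in X) + lam
    s22 = sum(x[1] * x[1] for x in X) + lam
    s12 = sum(x[0] * x[1] for x in X)
    r1 = sum(x[0] * yi for x, yi in zip(X, y))
    r2 = sum(x[1] * yi for x, yi in zip(X, y))
    det = s11 * s22 - s12 * s12
    return (r1 * s22 - r2 * s12) / det, (s11 * r2 - s12 * r1) / det

# toy genotypes coded 0/1/2 and hypothetical phenotypes
X = [(0, 2), (1, 1), (2, 0), (2, 1), (0, 0)]
y = [0.5, 1.0, 1.6, 2.0, 0.1]
b1, b2 = snp_blup_2markers(X, y, lam=1.0)
gebv = [x[0] * b1 + x[1] * b2 for x in X]  # genomic estimated breeding values
print(round(b1, 3), round(b2, 3))
```

    Larger `lam` shrinks the estimated effects toward zero, which is the Bayesian-prior behaviour the benchmarked tools encode in more sophisticated forms.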

  19. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation

    Science.gov (United States)

    Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on

  20. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation.

    Science.gov (United States)

    Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). 
This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies
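    The essence of SCM, choosing convex donor weights that reproduce the treated unit's pre-intervention trajectory and then reading the post-intervention gap as the effect estimate, can be sketched in a few lines. The data below are illustrative, not Paragominas deforestation figures:

```python
# Toy synthetic control: one treated unit, two donor units, 4 pre-years.
treated_pre = [10.0, 11.0, 12.0, 13.0]
donor1_pre  = [ 9.0, 10.0, 11.0, 12.0]
donor2_pre  = [12.0, 13.0, 14.0, 15.0]

def pre_period_sse(w):
    """Squared pre-period mismatch for donor weights (w, 1 - w)."""
    return sum((t - (w * a + (1 - w) * b)) ** 2
               for t, a, b in zip(treated_pre, donor1_pre, donor2_pre))

# grid search over the weight simplex (two donors -> one free weight)
best_w = min((i / 1000 for i in range(1001)), key=pre_period_sse)

# post-intervention year: synthetic counterfactual vs. observed treated outcome
treated_post, donor1_post, donor2_post = 12.5, 13.0, 16.0
synthetic_post = best_w * donor1_post + (1 - best_w) * donor2_post
gap = treated_post - synthetic_post  # negative gap: outcome below counterfactual
print(round(best_w, 3), round(gap, 2))
```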

  1. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Directory of Open Access Journals (Sweden)

    Nadia Said

    Full Text Available Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
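    The paper's objectives come from ACT-R simulations of the Sugar Factory task; as a hedged stand-in, the sketch below applies a classic derivative-free optimizer (golden-section search) to a smooth one-dimensional toy objective:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):        # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                  # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# toy model-fit criterion with optimum at parameter value 0.3 (hypothetical)
x_star = golden_section_min(lambda x: (x - 0.3) ** 2, 0.0, 1.0)
print(round(x_star, 3))  # → 0.3
```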

  2. Labile soil phosphorus as influenced by methods of applying radioactive phosphorus

    International Nuclear Information System (INIS)

    Selvaratnam, V.V.; Andersen, A.J.; Thomsen, J.D.; Gissel-Nielsen, G.

    1980-03-01

    The influence of different methods of applying radioactive phosphorus on the E- and L-values was studied in four soil types, using barley, buckwheat, and rye grass for the L-value determination. The four soils differed greatly in their E- and L-values. The experiment was carried out both with and without carrier-P. The presence of carrier-P had no influence on the E-values, while carrier-P in some cases gave a lower L-value. Both E- and L-values depended on the method of application. When the 32P was applied to a small soil or sand sample and dried before being mixed with the total amount of soil, the E-values were higher than with direct application, most likely because of stronger fixation to the soil/sand particles. This was not the case for the L-values, which are based on a much longer equilibration time. On the contrary, direct application of the 32P solution to the whole amount of soil gave higher L-values because of a non-homogeneous distribution of the 32P in the soil. (author)
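    The E-value itself comes from isotopic dilution: the labile (exchangeable) P pool equals the added 32P activity divided by the specific activity measured in the soil solution. A minimal sketch of that arithmetic, with hypothetical numbers:

```python
def e_value_mg_P(added_activity_Bq, solution_P_mg, solution_activity_Bq):
    """Isotopically exchangeable (labile) soil P via isotopic dilution:
    E = added activity / specific activity of P in the soil solution."""
    specific_activity = solution_activity_Bq / solution_P_mg  # Bq per mg P
    return added_activity_Bq / specific_activity

# hypothetical: 1 MBq of 32P added; solution holds 2 mg P carrying 40 kBq
print(e_value_mg_P(1.0e6, 2.0, 4.0e4))  # → 50.0 mg labile P
```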

  3. Analysis of coupled neutron-gamma radiations, applied to shieldings in multigroup albedo method

    International Nuclear Information System (INIS)

    Dunley, Leonardo Souza

    2002-01-01

    The principal mathematical tools frequently available for calculations in Nuclear Engineering, including coupled neutron-gamma radiation shielding problems, involve the full Transport Theory or Monte Carlo techniques. The Multigroup Albedo Method applied to shieldings is characterized by following the radiations through distinct layers of materials, allowing the determination of the neutron and gamma fractions reflected from, transmitted through and absorbed in the irradiated media when a neutron stream hits the first layer of material, independently of flux calculations. The method is therefore a complementary tool of great didactic value, due to its clarity and simplicity in solving neutron and/or gamma shielding problems. The outstanding results achieved in previous works motivated the elaboration and development of the study presented in this dissertation. The radiation balance resulting from the incidence of a neutron stream on a shielding composed of 'm' non-multiplying slab layers was determined by the Albedo method, considering 'n' energy groups for neutrons and 'g' energy groups for gammas. It was assumed that there is no upscattering of neutrons or gammas, but that neutrons of any energy group are able to produce gammas of all energy groups. The ANISN code, with an angular quadrature of order S2, was used as a standard for comparison of the results obtained by the Albedo method, so it was necessary to choose an identical system configuration for both the ANISN and Albedo methods: six neutron energy groups, eight gamma energy groups, and three slab layers (iron - aluminum - manganese). The excellent results, expressed in comparative tables, show great agreement between the values determined by the deterministic code adopted as standard and the values determined by the computational program created using the Albedo method and the algorithm developed for coupled neutron
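    The dissertation's multigroup formulation tracks reflection and transmission matrices layer by layer; a hedged one-group (scalar) sketch of the same bookkeeping combines two layers by summing the geometric series of bounces between them (the multigroup version replaces these scalars with matrices):

```python
def combine_layers(R1, T1, R2, T2):
    """Combine two slab layers with reflection/transmission probabilities
    (R1, T1) and (R2, T2) into an equivalent single layer."""
    denom = 1.0 - R1 * R2           # geometric series of inter-layer reflections
    R = R1 + T1 * R2 * T1 / denom   # fraction reflected back out of layer 1
    T = T1 * T2 / denom             # fraction transmitted through both layers
    return R, T

# hypothetical per-layer albedos (each layer absorbs the remaining 20%)
R, T = combine_layers(0.3, 0.5, 0.4, 0.4)
A = 1.0 - R - T                     # whatever is neither reflected nor transmitted
print(round(R, 4), round(T, 4), round(A, 4))
```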

  4. In silico toxicology: comprehensive benchmarking of multi-label classification methods applied to chemical toxicity data

    KAUST Repository

    Raies, Arwa B.

    2017-12-05

    One goal of toxicity testing, among others, is identifying harmful effects of chemicals. Given the high demand for toxicity tests, it is necessary to conduct these tests for multiple toxicity endpoints for the same compound. Current computational toxicology methods aim at developing models mainly to predict a single toxicity endpoint. When chemicals cause several toxicity effects, one model is generated to predict toxicity for each endpoint, which can be labor and computationally intensive when the number of toxicity endpoints is large. Additionally, this approach does not take into consideration possible correlation between the endpoints. Therefore, there has been a recent shift in computational toxicity studies toward generating predictive models able to predict several toxicity endpoints by utilizing correlations between these endpoints. Applying such correlations jointly with compounds' features may improve the models' performance and reduce the number of required models. This can be achieved through multi-label classification methods. These methods have not undergone comprehensive benchmarking in the domain of predictive toxicology. Therefore, we performed extensive benchmarking and analysis of over 19,000 multi-label classification models generated using combinations of the state-of-the-art methods. The methods have been evaluated from different perspectives using various metrics to assess their effectiveness. We were able to illustrate variability in the performance of the methods under several conditions. This review will help researchers to select the most suitable method for the problem at hand and provide a baseline for evaluating new approaches. Based on this analysis, we provided recommendations for potential future directions in this area.
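    The review's specific metrics are not reproduced here; as a self-contained illustration, one widely used multi-label evaluation metric is Hamming loss, the fraction of label slots predicted incorrectly across all endpoints (labels below are hypothetical):

```python
def hamming_loss(y_true, y_pred):
    """Fraction of wrong (item, label) slots across the whole label matrix."""
    n_items = len(y_true)
    n_labels = len(y_true[0])
    wrong = sum(t != p
                for row_t, row_p in zip(y_true, y_pred)
                for t, p in zip(row_t, row_p))
    return wrong / (n_items * n_labels)

# 3 compounds x 4 toxicity endpoints (1 = toxic), hypothetical predictions
y_true = [[1, 0, 0, 1], [0, 0, 1, 0], [1, 1, 0, 0]]
y_pred = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 0, 0, 0]]
print(hamming_loss(y_true, y_pred))  # 2 wrong slots out of 12
```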

  5. The Cn method applied to problems with an anisotropic diffusion law

    International Nuclear Information System (INIS)

    Grandjean, P.M.

    A two-dimensional Cn calculation has been applied to homogeneous media subjected to the Rayleigh impact law. Results obtained with collision probability and Chandrasekhar calculations are compared to those from the Cn method. Introducing into the expression of the transport equation an expansion truncated on a polynomial basis for the outgoing angular flux (or possibly the entrance flux) gives two Cn systems of algebraic linear equations for the expansion coefficients. The matrix elements of these equations are the moments of the Green function in an infinite medium. The Green function is obtained through the Fourier transformation of the integro-differential equation, and its moments are derived from their Fourier transforms through a numerical integration in the complex plane. The method has been used for calculating the albedo of semi-infinite media, the extrapolation length of the Milne problem, and the albedo and transmission factor of a slab (a concise study of convergence is presented). For the collision probability method, a system of integro-differential equations bearing on the moments of the angular flux inside the medium has been derived; it is solved numerically by approximating the bulk flux by step functions. The albedo of a semi-infinite medium has also been computed with the semi-analytical Chandrasekhar method, in which the outgoing flux is expressed as a function of the entrance flux by means of an integral whose kernel is derived numerically. [fr]

  6. Study on Feasibility of Applying Function Approximation Moment Method to Achieve Reliability-Based Design Optimization

    International Nuclear Information System (INIS)

    Huh, Jae Sung; Kwak, Byung Man

    2011-01-01

    Robust optimization and reliability-based design optimization are methodologies employed to take into account the uncertainties of a system at the design stage. To apply such methodologies to industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required; further, the results of the sensitivity analysis, which is needed for determining the search direction during the optimization process, should also be accurate. The aim of this study is to employ the function approximation moment method in the sensitivity analysis formulation, which is expressed in integral form, to verify the accuracy of the sensitivity results, and to solve a typical reliability-based design optimization problem. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The sensitivity analysis formula in integral form is an efficient formulation for evaluating sensitivity, because no additional function calculations are needed once the failure probability or statistical moments have been calculated.
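    The function approximation moment method itself is not reproduced here; as a baseline illustration of the quantities it estimates, statistical moments of a limit-state function g(X) and the failure probability P[g(X) < 0], a brute-force Monte Carlo sketch on a hypothetical limit state:

```python
import math
import random

def moments_and_pf(g, mean, std, n=200_000, seed=42):
    """Monte Carlo estimates of the mean, standard deviation and failure
    probability P[g(X) < 0] for a scalar Gaussian input X ~ N(mean, std)."""
    rng = random.Random(seed)
    vals = [g(rng.gauss(mean, std)) for _ in range(n)]
    mu = sum(vals) / n
    var = sum((v - mu) ** 2 for v in vals) / (n - 1)
    pf = sum(v < 0 for v in vals) / n
    return mu, math.sqrt(var), pf

# toy limit state: capacity 3.0 minus a N(0, 1) demand, so pf = P[X > 3] ~ 0.00135
mu, sigma, pf = moments_and_pf(lambda x: 3.0 - x, 0.0, 1.0)
```

    Moment methods exist precisely to avoid the cost of such sampling: they recover mu, sigma (and higher moments) from a handful of well-chosen function evaluations.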

  7. LOGICAL CONDITIONS ANALYSIS METHOD FOR DIAGNOSTIC TEST RESULTS DECODING APPLIED TO COMPETENCE ELEMENTS PROFICIENCY

    Directory of Open Access Journals (Sweden)

    V. I. Freyman

    2015-11-01

    Full Text Available Subject of Research. Representation features of education results for competence-based educational programs are analyzed. The importance of decoding and estimating the proficiency of elements and components of discipline parts of competences is shown. The purpose and objectives of the research are formulated. Methods. The paper deals with methods of mathematical logic, Boolean algebra, and parametric analysis of the results of complex diagnostic tests that control the proficiency of some discipline competence elements. Results. A method of logical conditions analysis is created. It makes it possible to formulate logical conditions for determining the proficiency of each discipline competence element controlled by a complex diagnostic test. The normalized test result is divided into non-overlapping zones, and a logical condition about the proficiency of the controlled elements is formulated for each of them. Summary characteristics for the test result zones are given. An example of forming logical conditions for a diagnostic test with preset features is provided. Practical Relevance. The proposed method of logical conditions analysis is applied in the decoding algorithm of proficiency test diagnosis for discipline competence elements. It makes it possible to automate the search for elements with insufficient proficiency, and is also usable for estimating the education results of a discipline or a component of a competence-based educational program.

  8. An IMU-to-Body Alignment Method Applied to Human Gait Analysis

    Directory of Open Access Journals (Sweden)

    Laura Susana Vargas-Valencia

    2016-12-01

    Full Text Available This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.

  9. An IMU-to-Body Alignment Method Applied to Human Gait Analysis.

    Science.gov (United States)

    Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo

    2016-12-10

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
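    The full calibration procedure is in the paper; one standard ingredient of IMU-to-segment alignment, sketched here under the assumption of a static pose (so the accelerometer measures only gravity) and a common sign convention, is recovering the sensor's tilt from one accelerometer reading:

```python
import math

def tilt_from_gravity(ax, ay, az):
    """Pitch and roll (radians) of the sensor frame from a static
    accelerometer reading, using one common axis/sign convention."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

# sensor pitched 30 degrees: gravity appears as (-g*sin30, 0, g*cos30)
g = 9.81
pitch, roll = tilt_from_gravity(-g * 0.5, 0.0, g * math.sqrt(3) / 2)
print(round(math.degrees(pitch), 1), round(math.degrees(roll), 1))  # recovers the 30-degree pitch
```

    Aligning with the segment's functional axes (the part this sketch omits) additionally requires prescribed movements, which is what the paper's goniometer and gait experiments validate.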

  10. Pairwise comparisons to reconstruct mean temperature in the Arctic Atlantic Region over the last 2,000 years

    Science.gov (United States)

    Hanhijärvi, Sami; Tingley, Martin P.; Korhola, Atte

    2013-10-01

    Existing multi-proxy climate reconstruction methods assume the suitably transformed proxy time series are linearly related to the target climate variable, which is likely a simplifying assumption for many proxy records. Furthermore, with a single exception, these methods face problems with varying temporal resolutions of the proxy data. Here we introduce a new reconstruction method that uses the ordering of all pairs of proxy observations within each record to arrive at a consensus time series that best agrees with all proxy records. The resulting unitless composite time series is subsequently calibrated to the instrumental record to provide an estimate of past climate. By considering only pairwise comparisons, this method, which we call PaiCo, facilitates the inclusion of records with differing temporal resolutions, and relaxes the assumption of linearity to the more general assumption of a monotonically increasing relationship between each proxy series and the target climate variable. We apply PaiCo to a newly assembled collection of high-quality proxy data to reconstruct the mean temperature of the Northernmost Atlantic region, which we call Arctic Atlantic, over the last 2,000 years. The Arctic Atlantic is a dynamically important region known to feature substantial temperature variability over recent millennia, and PaiCo allows for a more thorough investigation of the Arctic Atlantic regional climate as we include a diverse array of terrestrial and marine proxies with annual to multidecadal temporal resolutions. Comparisons of the PaiCo reconstruction to recent reconstructions covering larger areas indicate greater climatic variability in the Arctic Atlantic than for the Arctic as a whole. 
The Arctic Atlantic reconstruction features temperatures during the Roman Warm Period and Medieval Climate Anomaly that are comparable to, or even warmer than, those of the twentieth century, and the coldest temperatures in the middle of the nineteenth century, just prior to the onset
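    PaiCo's actual estimation is more involved, but the core pairwise idea can be illustrated as a net-vote (Borda-like) consensus over all within-record pairs, which naturally accommodates records with different temporal coverage and only assumes a monotone proxy-climate relationship:

```python
from collections import defaultdict

def pairwise_consensus(records):
    """records: list of dicts {time: proxy_value}; resolutions may differ.
    Each record votes on every pair of times it covers; the consensus score
    per time is the net number of 'warmer than' votes."""
    score = defaultdict(int)
    for rec in records:
        times = sorted(rec)
        for i, t1 in enumerate(times):
            for t2 in times[i + 1:]:
                if rec[t1] > rec[t2]:
                    score[t1] += 1; score[t2] -= 1
                elif rec[t1] < rec[t2]:
                    score[t2] += 1; score[t1] -= 1
    return dict(score)

# two toy records (different units and coverage) agreeing year 3 was warmest
recs = [{1: 0.1, 2: 0.5, 3: 0.9}, {1: 1.0, 3: 3.0}]
print(pairwise_consensus(recs))  # year 3 accumulates the most net votes
```

    The unitless consensus series produced this way would then, as in PaiCo, be calibrated against the instrumental record to obtain temperatures.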

  11. Applying of whole-tree harvesting method; Kokopuujuontomenetelmaen soveltaminen aines- ja energiapuun hankintaan

    Energy Technology Data Exchange (ETDEWEB)

    Vesisenaho, T. [VTT Energy, Jyvaeskylae (Finland); Liukkonen, S. [VTT Manufacturing Technology, Espoo (Finland)

    1997-12-01

    The objective of this project is to apply the whole-tree harvesting method to Finnish timber harvesting conditions in order to lower the harvesting costs of energy wood and timber in spruce-dominant final cuttings. In Finnish conditions timber harvesting is normally based on the log-length method. Because of small landings and the high share of thinning cuttings, whole-tree skidding methods cannot be utilised extensively. The share of stands which could be harvested with the whole-tree skidding method turned out to be about 10 % of the total harvesting amount of 50 mill. m{sup 3}. The corresponding harvesting potential of energy wood is 0.25 Mtoe. The aim of the structural measurements made in this project was to obtain information about the effect of different hauling methods on the structural response of the tractor, and thus reveal the possible special requirements that the new whole-tree skidding method places on forest tractor design. Altogether 7 strain-gauge-based sensors were mounted on the rear frame structures and drive shafts of the forest tractor. Five strain gauges measured local strains in critical details, and two sensors measured the torque moments of the front and rear bogie drive shafts. The revolution speed of the rear drive shaft was also recorded. Signal time histories, maximum peaks, time-at-level distributions and Rainflow distributions were gathered in different hauling modes. From these, maximum values, average stress levels and fatigue life estimates were calculated for each mode, and a comparison of the different methods from the structural point of view was performed
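    A hedged sketch of the kind of fatigue-life estimate mentioned above: Palmgren-Miner damage summation over counted stress cycles (such as a Rainflow distribution), with a Basquin-type S-N curve N(S) = C / S^m. The cycle counts and S-N constants below are hypothetical, not the project's measurements:

```python
def miner_damage(cycle_counts, C=1.0e12, m=3.0):
    """Palmgren-Miner cumulative damage: D = sum n_i / N(S_i), where the
    S-N curve is N(S) = C / S**m. cycle_counts: (stress_MPa, n_cycles) pairs."""
    return sum(n / (C / s ** m) for s, n in cycle_counts)

# hypothetical Rainflow output for one hauling mode
counts = [(50.0, 20000), (80.0, 5000), (120.0, 500)]
D = miner_damage(counts)
print(round(D, 4))  # D >= 1 would indicate predicted fatigue failure
```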

  12. Brucellosis Prevention Program: Applying “Child to Family Health Education” Method

    Directory of Open Access Journals (Sweden)

    H. Allahverdipour

    2010-04-01

    Full Text Available Introduction & Objective: Pupils have considerable potential to raise community awareness and promote community health by participating in health education programs. The child-to-family health education program is one of the communicative strategies applied in this field trial study. Because of the high prevalence of Brucellosis in Hamadan province, Iran, the aim of this study was to promote families' knowledge and preventive behaviors concerning Brucellosis in rural areas by using the child-to-family health education method. Materials & Methods: In this nonequivalent control group design study, three rural schools were chosen (one as intervention and the two others as controls). First, the knowledge and behavior of families about Brucellosis were determined using a designed questionnaire. Then the families were educated through the child-to-family procedure: the students were given the information and instructed to teach their parents what they had learned. Three months after the last education session, the level of knowledge and the behavior changes of the families regarding Brucellosis were determined and analyzed by paired t-test. Results: The results showed significant improvement in the knowledge of the mothers. The mothers' knowledge about the signs of Brucellosis in humans increased from 1.81 to 3.79 (t = -21.64, sig. 0.000), and their knowledge of the signs of Brucellosis in animals increased from 1.48 to 2.82 (t = -10.60, sig. 0.000). Conclusion: Child-to-family health education is one of the effective and available methods, which would be useful and effective in most communities; the students' potential can be applied in health promotion programs.
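    The analysis above relies on paired t-tests; a minimal stdlib version of the test statistic, applied to illustrative before/after scores rather than the study's raw data:

```python
import math

def paired_t(before, after):
    """Paired-sample t statistic (n - 1 degrees of freedom):
    t = mean(d) / (sd(d) / sqrt(n)) for within-pair differences d."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n))

# hypothetical pre/post knowledge scores for 8 mothers
before = [1, 2, 2, 1, 2, 1, 2, 2]
after = [4, 4, 3, 4, 4, 3, 4, 4]
print(round(paired_t(before, after), 2))
```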

  13. Study on vibration characteristics and fault diagnosis method of oil-immersed flat wave reactor in Arctic area converter station

    Science.gov (United States)

    Lai, Wenqing; Wang, Yuandong; Li, Wenpeng; Sun, Guang; Qu, Guomin; Cui, Shigang; Li, Mengke; Wang, Yongqiang

    2017-10-01

    Based on long-term vibration monitoring of the No. 2 oil-immersed flat wave reactor in the ±500 kV converter station in East Mongolia, vibration signals in the normal state and in the loose-core fault state were recorded. Through time-frequency analysis of the signals, the vibration characteristics of the loose-core fault were obtained, and a fault diagnosis method based on the dual-tree complex wavelet transform (DT-CWT) and a support vector machine (SVM) was proposed. The vibration signals were analyzed by DT-CWT, and the energy entropy of the vibration signals was taken as the feature vector; the support vector machine was used to train and test the feature vector, and accurate identification of the loose-core fault of the flat wave reactor was realized. Through the identification of many groups of normal and loose-core fault vibration signals, the diagnostic accuracy reached 97.36%. The effectiveness and accuracy of the method for fault diagnosis of the flat wave reactor core were thus verified.
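    The energy-entropy feature described above can be sketched as follows; this is a hedged illustration, not the authors' code. The DT-CWT decomposition itself (available in Python, e.g., via the `dtcwt` package) and the SVM stage are omitted: sub-bands are given as plain coefficient lists, and only the entropy feature is computed.

```python
import math

def band_energies(subbands):
    """Energy of each wavelet sub-band (here: plain lists of coefficients)."""
    return [sum(c * c for c in band) for band in subbands]

def energy_entropy(subbands):
    """Shannon entropy of the relative sub-band energy distribution.
    This is the kind of scalar feature fed to a classifier such as an SVM."""
    energies = band_energies(subbands)
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)

# A signal whose energy is spread across bands has higher entropy than one
# concentrated in a single band -- the kind of spectral shift a loosened
# core produces in the reactor's vibration signature.
concentrated = [[1.0, 1.0], [0.01], [0.01]]
spread = [[1.0], [1.0], [1.0]]
assert energy_entropy(spread) > energy_entropy(concentrated)
```

    In the full method, one such entropy value per decomposition level would form the feature vector for SVM training.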

  14. Evaluation of cleaning methods applied in home environments after renovation and remodeling activities

    International Nuclear Information System (INIS)

    Yiin, L.-M.; Lu, S.-E.; Sannoh, Sulaiman; Lim, B.S.; Rhoads, G.G.

    2004-01-01

    We conducted a cleaning trial in 40 northern New Jersey homes where home renovation and remodeling (R and R) activities were undertaken. Two cleaning protocols were used in the study: a specific method recommended by the US Department of Housing and Urban Development (HUD) in the 1995 'Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing', using a high-efficiency particulate air (HEPA)-filtered vacuum cleaner and a tri-sodium phosphate solution (TSP); and an alternative method using a household vacuum cleaner and a household detergent. Eligible homes were built before the 1970s with potential lead-based paint and had recent R and R activities without thorough cleaning. The two cleaning protocols were randomly assigned to the participants' homes and followed the HUD-recommended three-step procedure: vacuuming, wet washing, and repeat vacuuming. Wipe sampling was conducted on floor surfaces or windowsills before and after cleaning to evaluate the efficacy. All floor and windowsill data indicated that both methods (TSP/HEPA and non-TSP/non-HEPA) were effective in reducing lead loading on the surfaces (P<0.001). When cleaning was applied to surfaces with initial lead loading above the clearance standards, the reductions were even greater, above 95% for either cleaning method. The mixed-effect model analysis showed no significant difference between the two methods. Baseline lead loading was found to be associated with lead loading reduction significantly on floors (P<0.001) and marginally on windowsills (P=0.077). These relations differed between the two cleaning methods, significantly on floors (P<0.001) and marginally on windowsills (P=0.066), with the TSP/HEPA method being favored at higher baseline levels and the non-TSP/non-HEPA method at lower baseline levels. For the 10 homes with lead abatement, almost all post-cleaning lead loadings were below the standards using either cleaning method. Based on our results, we recommend that

  15. Method for pulse to pulse dose reproducibility applied to electron linear accelerators

    International Nuclear Information System (INIS)

    Ighigeanu, D.; Martin, D.; Oproiu, C.; Cirstea, E.; Craciun, G.

    2002-01-01

    An original method for obtaining programmed beam single shots and pulse trains with programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level and of the irradiation process, as well as in pulse radiolysis studies, single-pulse dose measurement, or research experiments where pulse-to-pulse dose reproducibility is required. The method is applied to the electron linear accelerators ALIN-10 (6.23 MeV, 82 W) and ALID-7 (5.5 MeV, 670 W), built at NILPRP. To implement this method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A, 4 ms) and the magnetron (45 kV, 100 A, 4 ms). The accelerated electron beam exists only while the electron gun and magnetron pulses overlap. The method consists of controlling the overlapping of these pulses in order to deliver the beam in the desired sequence; this control is implemented by a discrete pulse-position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator's 'beam start' command, the ATS brings the electron gun and magnetron pulses into overlap and the linac beam is generated. The pulse-to-pulse absorbed dose variation is thus considerably reduced. Programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in the electron linear accelerators' development for obtaining the pulse to pulse dose reproducibility: the method

  16. Live from the Arctic

    Science.gov (United States)

    Warnick, W. K.; Haines-Stiles, G.; Warburton, J.; Sunwood, K.

    2003-12-01

    For reasons of geography and geophysics, the poles of our planet, the Arctic and Antarctica, are places where climate change appears first: they are global canaries in the mine shaft. But while Antarctica (its penguins and ozone hole, for example) has been relatively well documented in recent books, TV programs and journalism, the far North has received somewhat less attention. This project builds on and advances what has been done to date to share the people, places, and stories of the North with all Americans through multiple media, over several years. In a collaborative project between the Arctic Research Consortium of the United States (ARCUS) and PASSPORT TO KNOWLEDGE, Live from the Arctic will bring the Arctic environment to the public through a series of primetime broadcasts, live and taped programming, interactive virtual field trips, and webcasts. The five-year project will culminate during the 2007-2008 International Polar Year (IPY). Live from the Arctic will: A. Promote global understanding about the value and worldwide significance of the Arctic; B. Bring cutting-edge research to both non-formal and formal education communities; C. Provide opportunities for collaboration between arctic scientists, arctic communities, and the general public. Content will focus on the following four themes. 1. Pan-Arctic Changes and Impacts on Land (e.g., snow cover; permafrost; glaciers; hydrology; species composition, distribution, and abundance; subsistence harvesting) 2. Pan-Arctic Changes and Impacts in the Sea (e.g., salinity, temperature, currents, nutrients, sea ice, marine ecosystems, including people, marine mammals and fisheries) 3. Pan-Arctic Changes and Impacts in the Atmosphere (e.g., precipitation and evaporation; effects on humans and their communities) 4. Global Perspectives (e.g., effects on humans and communities, impacts on the rest of the world). In The Earth is Faster Now, a recent collection of comments by members of indigenous arctic peoples, arctic

  17. The Arctic Turn

    DEFF Research Database (Denmark)

    Rahbek-Clemmensen, Jon

    2018-01-01

    In October 2006, representatives of the Arctic governments met in Salekhard in northern Siberia for the biennial Arctic Council ministerial meeting to discuss how the council could combat regional climate change, among other issues. While most capitals were represented by their foreign minister, a few states – Canada, Denmark, and the United States – sent other representatives. There was nothing unusual about the absence of Per Stig Møller, the Danish foreign minister – a Danish foreign minister had only once attended an Arctic Council ministerial meeting (Arctic Council 2016). Møller … and Greenlandic affairs had mainly been about managing fishing quotas. Though crucial for Danish-Greenlandic relations, such issues were hardly top priorities for Her Majesty’s Foreign Service….

  18. Postgraduate Education in Quality Improvement Methods: Initial Results of the Fellows' Applied Quality Training (FAQT) Curriculum.

    Science.gov (United States)

    Winchester, David E; Burkart, Thomas A; Choi, Calvin Y; McKillop, Matthew S; Beyth, Rebecca J; Dahm, Phillipp

    2016-06-01

    Training in quality improvement (QI) is a pillar of the Next Accreditation System of the Accreditation Council for Graduate Medical Education and a growing expectation of physicians for maintenance of certification. Despite this, many postgraduate medical trainees are not receiving training in QI methods. We created the Fellows' Applied Quality Training (FAQT) curriculum for cardiology fellows, using both didactic and applied components, with the goal of increasing confidence to participate in future QI projects. Fellows completed didactic training from the Institute for Healthcare Improvement's Open School and then designed and completed a project to improve quality of care or patient safety. Self-assessments were completed by the fellows before, during, and after the first year of the curriculum. The primary outcome for our curriculum was the median score reported by the fellows regarding their self-confidence to complete QI activities. Self-assessments were completed by 23 fellows. The majority of fellows (15 of 23, 65.2%) reported no prior formal QI training. The median score on the baseline self-assessment was 3.0 (range, 1.85-4), which increased significantly to 3.27 (range, 2.23-4; P = 0.004) on the final assessment. The distribution of scores reported by the fellows indicates that 30% were only slightly confident at conducting QI activities on their own, which was reduced to 5% after completing the FAQT curriculum. An interim assessment conducted after the fellows had completed only the didactic training showed median scores no different from baseline (3.0; P = 0.51). After completion of the FAQT, cardiology fellows reported higher self-confidence to complete QI activities; the increase seemed to be limited to the applied component of the curriculum, with no significant change after the didactic component.

  19. High-Resolution Seismic Methods Applied to Till-Covered Hard Rock Environments

    International Nuclear Information System (INIS)

    Bergman, Bjoern

    2005-01-01

    Reflection seismic and seismic tomography methods can be used to image the upper kilometer of hard bedrock and the loose unconsolidated sediments covering it. Development of these two methods and their application, as well as identification of issues concerning their usage, are the main focus of this thesis. The data used for this development were acquired at three different sites in Sweden: in Forsmark, 140 km north of Stockholm; in the Oskarshamn area in southern Sweden; and in the northern part of the Siljan Ring impact crater area. The reflection seismic data were acquired with long source-receiver offsets relative to some of the targeted depths to be imaged. In the initial processing, standard steps were applied, but the uppermost parts of the sections were not always clear. The long offsets imply that pre-stack migration is necessary in order to image the uppermost bedrock as clearly as possible. Careful choice of filters and velocity functions improves the pre-stack migrated image, allowing better correlation with near-surface geological information. The seismic tomography method has been enhanced to calculate, simultaneously with the velocity inversion, optimal corrections to the picked first-break travel times in order to compensate for the delays due to the seismic waves passing through the loose sediments covering the bedrock. The reflection seismic processing used in this thesis has produced high-quality images of the upper kilometers, and in one example from the Forsmark site the image of the uppermost 250 meters of bedrock has been improved. The three-dimensional orientation of reflections has been determined at the Oskarshamn site; correlation with borehole data shows that many of these reflections originate from fracture zones. The developed seismic tomography method produces highly detailed velocity models for the site in the Siljan impact area and for the Forsmark site. In Forsmark, detailed estimates of the bedrock topography were calculated with the use of

  20. Applying system engineering methods to site characterization research for nuclear waste repositories

    International Nuclear Information System (INIS)

    Woods, T.W.

    1985-01-01

    Nuclear research and engineering projects can benefit from the use of system engineering methods. This paper is a brief overview illustrating how system engineering methods could be applied in structuring a site characterization effort for a candidate nuclear waste repository. System engineering is simply an orderly process that has been widely used to transform a recognized need into a fully defined system. Such a system may be physical or abstract, natural or man-made, hardware or procedural, as is appropriate to the system's need or objective. It is a way of mentally visualizing all the constituent elements and their relationships necessary to fulfill a need, and doing so in compliance with all the constraining requirements attendant to that need. Such a system approach provides completeness, order, clarity, and direction. Admittedly, system engineering can be burdensome and inappropriate for project objectives with simple and familiar solutions that are easily held and controlled mentally. However, some type of documented and structured approach is needed for objectives that dictate extensive, unique, or complex programs, and/or the creation of state-of-the-art machines and facilities; system engineering methods have been used extensively and successfully in these cases. The scientific method has served well in ordering countless technical undertakings that address a specific question. Similarly, conventional construction and engineering job methods will continue to be quite adequate for organizing routine building projects. Nuclear waste repository site characterization projects, however, involve multiple complex research questions and regulatory requirements that interface with each other and with advanced engineering and subsurface construction techniques. There is little doubt that system engineering is an appropriate orchestrating process to structure such diverse elements into a cohesive, well-defined project.

  1. Resampling method for applying density-dependent habitat selection theory to wildlife surveys.

    Science.gov (United States)

    Tardy, Olivia; Massé, Ariane; Pelletier, Fanie; Fortin, Daniel

    2015-01-01

    Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists of randomly placing blocks over the survey area and dividing those blocks into two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is repeated 100 times. Different functional forms of isodars can be investigated by relating animal abundance to differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of the rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection over large
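    The block-resampling step described above can be sketched as follows, under assumed toy data; the function name and parameters are illustrative, not taken from the paper.

```python
import random

def resample_isodar_pairs(observations, extent, block, n_samples=100, seed=42):
    """Randomly place blocks over the survey area, split each into two
    adjacent equal-size sub-blocks (along the x-axis), and return
    (count_left, count_right) abundance pairs.

    observations: list of (x, y) animal locations
    extent: (width, height) of the survey area
    block: (block_width, block_height)
    """
    rng = random.Random(seed)
    w, h = extent
    bw, bh = block
    pairs = []
    for _ in range(n_samples):
        x0 = rng.uniform(0, w - bw)
        y0 = rng.uniform(0, h - bh)
        left = sum(1 for x, y in observations
                   if x0 <= x < x0 + bw / 2 and y0 <= y < y0 + bh)
        right = sum(1 for x, y in observations
                    if x0 + bw / 2 <= x < x0 + bw and y0 <= y < y0 + bh)
        pairs.append((left, right))
    return pairs

# Toy survey: 200 animals scattered over a 100 x 100 km area.
rng = random.Random(0)
animals = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(200)]
pairs = resample_isodar_pairs(animals, extent=(100, 100), block=(20, 20))
assert len(pairs) == 100
```

    The resulting abundance pairs, together with the habitat differences between sub-blocks, are what an isodar regression would then be fitted to.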

  2. Photonic simulation method applied to the study of structural color in Myxomycetes.

    Science.gov (United States)

    Dolinko, Andrés; Skigin, Diana; Inchaussandague, Marina; Carmaran, Cecilia

    2012-07-02

    We present a novel simulation method to investigate the multicolored effect of Diachea leucopoda (Physarales order, Myxomycetes class), a microorganism that has a characteristic pointillistic iridescent appearance. It was shown that this appearance is of structural origin and is produced within the peridium – the protective layer that encloses the mass of spores – which is basically a corrugated sheet of a transparent material. The main characteristics of the observed color were explained in terms of interference effects using a simple model of a homogeneous planar slab. In this paper we apply the simulation method to investigate the electromagnetic response of such a structure in more detail, i.e., taking into account the inhomogeneities of the biological material within the peridium and its curvature. We show that both features, which could not be considered within the simplified model, affect the observed color. The proposed method has great potential for the study of biological structures, which present a high degree of complexity in their geometrical shapes as well as in the materials involved.
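    The homogeneous planar slab model mentioned above can be sketched with the standard thin-film interference formula; the slab thickness and refractive index below are illustrative assumptions, not measured peridium values.

```python
import cmath
import math

def slab_reflectance(wavelength_nm, d_nm, n_film, n_in=1.0, n_out=1.0):
    """Normal-incidence reflectance of a homogeneous planar slab:
    r = (r12 + r23 * e^{2i*beta}) / (1 + r12 * r23 * e^{2i*beta}),
    with beta the phase thickness of the film."""
    r12 = (n_in - n_film) / (n_in + n_film)
    r23 = (n_film - n_out) / (n_film + n_out)
    beta = 2 * math.pi * n_film * d_nm / wavelength_nm  # phase thickness
    e = cmath.exp(2j * beta)
    r = (r12 + r23 * e) / (1 + r12 * r23 * e)
    return abs(r) ** 2

# Sweep visible wavelengths for an assumed 500 nm slab of refractive index
# 1.45 in air: the reflectance oscillates with wavelength, so some colours
# are reinforced and others suppressed -- the pointillistic iridescence.
spectrum = {wl: slab_reflectance(wl, 500, 1.45) for wl in range(400, 701, 10)}
assert max(spectrum.values()) > 10 * min(spectrum.values())
```

    The full simulation method replaces this single flat slab with an inhomogeneous, curved structure, which is why the predicted colours shift relative to this simple model.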

  3. Impact of gene patents on diagnostic testing: a new patent landscaping method applied to spinocerebellar ataxia.

    Science.gov (United States)

    Berthels, Nele; Matthijs, Gert; Van Overwalle, Geertrui

    2011-11-01

    Recent reports in Europe and the United States raise concern about the potential negative impact of gene patents on the freedom to operate of diagnosticians and on the access of patients to genetic diagnostic services. Patents, historically seen as legal instruments to trigger innovation, could cause undesired side effects in the public health domain. Clear empirical evidence on the alleged hindering effect of gene patents is still scarce. We therefore developed a patent categorization method to determine which gene patents could indeed be problematic. The method is applied to patents relevant for genetic testing of spinocerebellar ataxia (SCA). The SCA test is probably the most widely used DNA test in (adult) neurology, as well as one of the most challenging due to the heterogeneity of the disease. Typically tested as a gene panel covering the five common SCA subtypes, we show that the patenting of SCA genes and testing methods and the associated licensing conditions could have far-reaching consequences on legitimate access to this gene panel. Moreover, with genetic testing being increasingly standardized, simply ignoring patents is unlikely to hold out indefinitely. This paper aims to differentiate among so-called 'gene patents' by lifting out the truly problematic ones. In doing so, awareness is raised among all stakeholders in the genetic diagnostics field who are not necessarily familiar with the ins and outs of patenting and licensing.

  4. IAEA-ASSET's root cause analysis method applied to sodium leakage incident at Monju

    International Nuclear Information System (INIS)

    Watanabe, Norio; Hirano, Masashi

    1997-08-01

    The present study applied the ASSET (Analysis and Screening of Safety Events Team) methodology (this method identifies occurrences such as component failures and operator errors, identifies their respective direct/root causes and determines corrective actions) to the analysis of the sodium leakage incident at Monju, based mainly on the reports published by the Science and Technology Agency, aiming at systematic identification of direct/root causes and corrective actions, and discussed the effectiveness and problems of the ASSET methodology. The results revealed the following seven occurrences and showed the direct/root causes and contributing factors for each: failure of the thermometer well tube, delayed reactor manual trip, inadequate continuous monitoring of leakage, misjudgment of the leak rate, a non-required operator action (turbine trip), retarded emergency sodium drainage, and retarded securing of the ventilation system. Most of the occurrences stemmed from deficiencies in emergency operating procedures (EOPs), which were mainly caused by defects in the EOP preparation process and operator training programs. The corrective actions already proposed in the published reports were reviewed, issues to be further studied were identified, and possible corrective actions were discussed for these issues. The present study also demonstrated the effectiveness of the ASSET methodology and pointed out some problems, for example in delineating causal relations among occurrences, in applying it to the detailed and systematic analysis of event direct/root causes, and in determining concrete measures. (J.P.N.)

  5. The Application of Intensive Longitudinal Methods to Investigate Change: Stimulating the Field of Applied Family Research.

    Science.gov (United States)

    Bamberger, Katharine T

    2016-03-01

    The use of intensive longitudinal methods (ILM) – rapid in situ assessment at micro timescales – can be overlaid on RCTs and other study designs in applied family research. Particularly when done as part of a multiple-timescale design – in bursts over macro timescales – ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM allows opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members, beginning with the first intervention session. This paper discusses the need and rationale for applying ILM to family intervention evaluation, new research questions that can be addressed with ILM, and example research using ILM in the related fields of basic family research and the evaluation of individual-based interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for its application.

  6. A method of applying two-pump system in automatic transmissions for energy conservation

    Directory of Open Access Journals (Sweden)

    Peng Dong

    2015-06-01

    Full Text Available In order to improve hydraulic efficiency, modern automatic transmissions tend to apply an electric oil pump in their hydraulic system. The electric oil pump can support the mechanical oil pump for cooling, lubrication, and maintaining the line pressure at low engine speeds. In addition, the start–stop function can be realized by means of the electric oil pump; thus, fuel consumption can be further reduced. This article proposes a method of applying a two-pump system (one electric oil pump and one mechanical oil pump) in automatic transmissions based on forward driving simulation. A mathematical model for calculating the transmission power loss is developed. The power loss transfers to heat, which requires oil flow for cooling and lubrication. A leakage model is developed to calculate the leakage of the hydraulic system. In order to satisfy the flow requirement, a flow-based control strategy for the electric oil pump is developed. Simulation results for different driving cycles show that there is a best combination of the sizes of the electric and mechanical oil pumps with respect to optimal energy conservation. Besides, the two-pump system can also satisfy the requirements of the start–stop function. This research is extremely valuable for the forward design of a two-pump system in automatic transmissions with respect to energy conservation and the start–stop function.
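    The flow-based control idea can be sketched as follows; the pump displacement and volumetric efficiency figures are illustrative assumptions, not values from the article.

```python
def electric_pump_flow(engine_speed_rpm, flow_demand_lpm,
                       mech_disp_l_per_rev=0.016, mech_vol_eff=0.9):
    """Flow-based control sketch: the electric oil pump supplies whatever
    part of the cooling/lubrication/leakage flow demand the mechanical
    pump (driven by the engine) cannot deliver.
    Displacement and efficiency values are illustrative assumptions."""
    mech_flow_lpm = engine_speed_rpm * mech_disp_l_per_rev * mech_vol_eff
    return max(0.0, flow_demand_lpm - mech_flow_lpm)

# At idle-stop (engine off) the electric pump carries the full demand,
# which is what enables the start-stop function; at cruise speed the
# mechanical pump covers the demand and the electric pump can switch off.
assert electric_pump_flow(0, 12.0) == 12.0
assert electric_pump_flow(2000, 12.0) == 0.0
```

    Sizing the two pumps then amounts to minimizing combined drive energy over a driving cycle while keeping this residual flow non-negative at every operating point.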

  7. Improvement in the DTVG detection method as applied to cast austeno-ferritic steels

    International Nuclear Information System (INIS)

    Francois, D.

    1996-05-01

    Initially, the so-called DTVG method was developed to improve detection and (lengthwise) dimensioning of cracks in austenitic steel assembly welds. The results obtained during that study, and the structural similarity between austenitic and austeno-ferritic steels, led us to carry out research into adapting the method on a sample whose material is representative of the cast steels used in PWR primary circuit bends. The method was first adapted for use on thick-wall cast austeno-ferritic steel structures and was validated for zero ultrasonic beam incidence and for a flat sample with machine-finished reflectors. A second study was carried out, notably, to allow for non-zero ultrasonic beam incidence and to examine the method's validity when applied to a non-flat geometry. The research had three principal goals: adapting the process to take into account the special case of oblique ultrasonic beam incidence (B-image handling), examining the effect of non-flat geometry on the detection method, and evaluating the performance of the method on real defects (shrinkage cavities). We began by focusing on the problem of oblique incidence. Having decided on automatic refracted-angle determination, the problem could only be solved by locking the algorithm onto a representative image of the suspect material containing an indicator. We then used a simple geometric model to quantify the deformation of the indicators on a B-scan image due to a non-flat transducer/part interface. Finally, tests were carried out on measurements acquired from flat samples containing artificial and real defects so that the overall performance of the method after development could be assessed. This work has allowed the DTVG detection method to be adapted for use with B-scan images acquired with a non-zero ultrasonic beam incidence angle. Moreover, we have been able to show that for geometries similar to those of the cast bends and for deep defects the deformation of the indicators due

  8. "We are the Arctic"

    DEFF Research Database (Denmark)

    Thomsen, Robert Chr.; Ren, Carina Bregnholm; Mahadevan, Renuka

    2018-01-01

    In this article, we explore the 2016 Arctic Winter Games as a site for Arctic, Indigenous and national identity-building, drawing on fieldwork from the planning and execution of AWG 2016 and surveys conducted with participant and stakeholder groups. We show that although the AWG 2016 event is seen … positions also. In practice, competition at this sporting event extends to identity discourses competing for hegemony, but the games also create spaces for identity negotiation and willful identity entanglement.

  9. Applying RP-FDM Technology to Produce Prototype Castings Using the Investment Casting Method

    Directory of Open Access Journals (Sweden)

    M. Macků

    2012-09-01

    Full Text Available The research focused on the production of prototype castings, which is mapped out starting from the drawing documentation up to the production of the casting itself. The FDM method was applied for the production of the 3D pattern. Its main objective was to find out what dimensional changes happened during individual production stages, starting from the 3D pattern printing through a silicon mould production, wax patterns casting, making shells, melting out wax from shells and drying, up to the production of the final casting itself. Five measurements of determined dimensions were made during the production, which were processed and evaluated mathematically. A determination of shrinkage and a proposal of measures to maintain the dimensional stability of the final casting so as to meet requirements specified by a customer were the results.

  10. Comparison of gradient methods for gain tuning of a PD controller applied on a quadrotor system

    Science.gov (United States)

    Kim, Jinho; Wilkerson, Stephen A.; Gadsden, S. Andrew

    2016-05-01

    Many mechanical and electrical systems utilize the proportional-integral-derivative (PID) control strategy. The concept of PID control is a classical approach, but it is easy to implement and yields very good tracking performance. Unmanned aerial vehicles (UAVs) are currently experiencing a significant growth in popularity. Due to the advantages of PID controllers, UAVs implement them for improved stability and performance. An important consideration for the system is the selection of PID gain values in order to achieve a safe flight and a successful mission. There are a number of different algorithms that can be used for real-time tuning of gains. This paper presents two algorithms for gain tuning, based on the method of steepest descent and on Newton's minimization of an objective function, and compares the results of applying these two gain-tuning algorithms in conjunction with a PD controller on a quadrotor system.
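    A minimal sketch of the steepest-descent variant on a toy quadratic surrogate of the tracking objective (Newton's method would additionally scale the step by the inverse Hessian); all numbers are illustrative assumptions, not the paper's quadrotor objective.

```python
def tracking_cost(kp, kd):
    """Toy quadratic surrogate for the tracking-error objective J(Kp, Kd);
    in the paper the objective would come from quadrotor flight data."""
    return (kp - 4.0) ** 2 + 2.0 * (kd - 1.5) ** 2

def grad(kp, kd, h=1e-6):
    """Central-difference gradient of the objective."""
    gx = (tracking_cost(kp + h, kd) - tracking_cost(kp - h, kd)) / (2 * h)
    gy = (tracking_cost(kp, kd + h) - tracking_cost(kp, kd - h)) / (2 * h)
    return gx, gy

def steepest_descent(kp, kd, lr=0.1, steps=200):
    """Update the PD gains against the gradient of the cost."""
    for _ in range(steps):
        gx, gy = grad(kp, kd)
        kp, kd = kp - lr * gx, kd - lr * gy
    return kp, kd

kp, kd = steepest_descent(0.0, 0.0)
assert abs(kp - 4.0) < 1e-3 and abs(kd - 1.5) < 1e-3
```

    For this quadratic surrogate a single Newton step would land on the minimizer exactly, which is the trade-off the paper's comparison examines: faster convergence versus the cost of Hessian information.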

  11. Study of different ultrasonic focusing methods applied to non destructive testing

    International Nuclear Information System (INIS)

    El Amrani, M.

    1995-01-01

    The work presented in this thesis concerns the study of different ultrasonic focusing techniques applied to Nondestructive Testing (mechanical focusing and electronic focusing) and compares their capabilities. We have developed a model to predict the ultrasonic field radiated into a solid by water-coupled transducers. The model is based upon the Rayleigh integral formulation, modified to take account the refraction at the liquid-solid interface. The model has been validated by numerous experiments in various configurations. Running this model and the associated software, we have developed new methods to optimize focused transducers and studied the characteristics of the beam generated by transducers using various focusing techniques. (author). 120 refs., 95 figs., 4 appends

  12. Adding randomness controlling parameters in GRASP method applied in school timetabling problem

    Directory of Open Access Journals (Sweden)

    Renato Santos Pereira

    2017-09-01

    Full Text Available This paper studies the influence of randomness-controlling parameters (RCP) in the first-stage GRASP method applied to the graph coloring problem, specifically school timetabling in a public high school. The algorithm (with the inclusion of RCP) was based on critical variables identified through focus groups, whose weights can be adjusted by the user in order to meet institutional needs. The results of the computational experiment, with 11 years of data (66 observations) processed at the same high school, show that the inclusion of RCP significantly lowers the distance between initial solutions and local minima. The acceptance and use of the solutions found allow us to conclude that the modified GRASP, as constructed, can make a positive contribution to the timetabling problem of the school in question.
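    The first-stage GRASP construction with a randomness-controlling parameter can be sketched for graph coloring as follows; the saturation-degree heuristic and the parameter `alpha` are illustrative choices, not necessarily those of the paper.

```python
import random

def grasp_greedy_coloring(adjacency, alpha=0.3, seed=1):
    """First-stage GRASP sketch for graph colouring: repeatedly pick a
    vertex at random from a restricted candidate list (RCL) of the most
    constrained vertices. alpha is the randomness-controlling parameter:
    alpha = 0 is pure greedy, alpha = 1 is a fully random pick."""
    rng = random.Random(seed)
    colors = {}
    uncolored = set(adjacency)
    while uncolored:
        # saturation degree: number of distinct colours among neighbours
        sat = {v: len({colors[u] for u in adjacency[v] if u in colors})
               for v in uncolored}
        best, worst = max(sat.values()), min(sat.values())
        threshold = best - alpha * (best - worst)
        rcl = [v for v in uncolored if sat[v] >= threshold]
        v = rng.choice(rcl)
        used = {colors[u] for u in adjacency[v] if u in colors}
        colors[v] = next(c for c in range(len(adjacency)) if c not in used)
        uncolored.remove(v)
    return colors

# A 5-cycle: whatever alpha picks, the colouring must be proper.
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = grasp_greedy_coloring(c5)
assert all(coloring[v] != coloring[u] for v in c5 for u in c5[v])
```

    In a timetabling setting, the vertices would be lessons, edges would be conflicts (shared teacher, class, or room), and colours would be timeslots; tuning alpha trades greediness against the solution diversity the paper measures.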

  13. Applying RP-FDM Technology to Produce Prototype Castings Using the Investment Casting Method

    Directory of Open Access Journals (Sweden)

    Macků M.

    2012-09-01

Full Text Available The research focused on the production of prototype castings, which is mapped out from the drawing documentation up to the production of the casting itself. The FDM method was applied for the production of the 3D pattern. Its main objective was to find out what dimensional changes happened during the individual production stages, starting from 3D pattern printing through silicone mould production, wax pattern casting, shell making, melting the wax out of the shells and drying, up to the production of the final casting itself. Five measurements of the determined dimensions were made during production, and these were processed and evaluated mathematically. The results were a determination of shrinkage and a proposal of measures to maintain the dimensional stability of the final casting so as to meet the requirements specified by the customer.

  14. Applied methods for mitigation of damage by stress corrosion in BWR type reactors

    International Nuclear Information System (INIS)

    Hernandez C, R.; Diaz S, A.; Gachuz M, M.; Arganis J, C.

    1998-01-01

Boiling Water Reactors (BWR) have presented stress corrosion problems, mainly in components and pipes of the primary system, with negative impacts on the performance of power plants as well as increased radiation exposure of the personnel involved. This problem has driven research programs aimed at finding alternative solutions for controlling the phenomenon. Among the most relevant results, the control of reactor water chemistry stands out, particularly of impurity concentrations and of oxidizing radiolysis products, as well as care in materials selection and the reduction of stress levels. This work presents the methods which can be applied to diminish stress corrosion problems in BWR reactors. (Author)

  15. Applied methods and techniques for mechatronic systems modelling, identification and control

    CERN Document Server

    Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya

    2014-01-01

    Applied Methods and Techniques for Mechatronic Systems brings together the relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. The clear structure of each paper, background - motivation - quantitative development (equations) - case studies/illustration/tutorial (curve, table, etc.) is also helpful. It is mainly aimed at graduate students, professors and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...

  16. Borehole-to-borehole geophysical methods applied to investigations of high level waste repository sites

    International Nuclear Information System (INIS)

    Ramirez, A.L.

    1983-01-01

    This discussion focuses on the use of borehole to borehole geophysical measurements to detect geological discontinuities in High Level Waste (HLW) repository sites. The need for these techniques arises from: (a) the requirement that a HLW repository's characteristics and projected performance be known with a high degree of confidence; and (b) the inadequacy of other geophysical methods in mapping fractures. Probing configurations which can be used to characterize HLW sites are described. Results from experiments in which these techniques were applied to problems similar to those expected at repository sites are briefly discussed. The use of a procedure designed to reduce uncertainty associated with all geophysical exploration techniques is proposed; key components of the procedure are defined

  17. Using the Convergent Cross Mapping method to test causality between Arctic Oscillation / North Atlantic Oscillation and Atlantic Multidecadal Oscillation

    Science.gov (United States)

    Gutowska, Dorota; Piskozub, Jacek

    2017-04-01

There is a vast body of literature on climate indices and the processes they represent, much of it dealing with "teleconnections", i.e. causal relations between them. Until recently, however, time-lagged correlation was the best available tool for studying causation, and no correlation (even a lagged one) proves causation. We use a recently developed method for studying causal relations between short time series, Convergent Cross Mapping (CCM), to search for causation between the atmospheric (AO and NAO) and oceanic (AMO) indices. The version we have chosen (available as the R language package rEDM) allows for comparing time series with time lags. This work builds on a previous one, which showed with time-lagged correlations that AO/NAO precedes AMO by about 15 years and is at the same time preceded by AMO (but with an inverted sign), also by about 15 years. This behaviour is identical to the relationship of a sine and cosine with the same period, which may suggest that the multidecadal oscillatory parts of the atmospheric and oceanic indices represent the same global-scale set of processes; in other words, they may be symptoms of the same oscillation. The aim of the present study is to test this hypothesis with a tool created specifically for discovering causal relationships in dynamic systems.
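The core of CCM can be sketched in a few lines. The toy below is a simplified, hypothetical illustration (the abstract's actual analysis uses the rEDM package): variable `x` one-way drives `y` in a pair of coupled logistic maps, and the skill with which `x` can be reconstructed (cross-mapped) from the delay-embedded shadow manifold of `y` is CCM's signature of that causal link.

```python
import numpy as np

def embed(x, E=2, tau=1):
    """Delay embedding: row t is (x_t, x_{t-tau}, ..., x_{t-(E-1)tau})."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[(E - 1 - j) * tau : (E - 1 - j) * tau + n]
                            for j in range(E)])

def ccm_skill(x, y, E=2, tau=1):
    """Skill (correlation) of cross-mapping x from the shadow manifold of y."""
    My = embed(y, E, tau)
    target = x[(E - 1) * tau:]
    L = len(My)
    preds = np.empty(L)
    for t in range(L):
        d = np.linalg.norm(My - My[t], axis=1)
        d[t] = np.inf                        # exclude the point itself
        idx = np.argsort(d)[: E + 1]         # E+1 nearest neighbours
        w = np.exp(-d[idx] / max(d[idx][0], 1e-12))
        w /= w.sum()
        preds[t] = w @ target[idx]           # locally weighted estimate
    return float(np.corrcoef(preds, target)[0, 1])

# toy system: x drives y through a one-way coupling term
N = 500
x = np.empty(N); y = np.empty(N)
x[0], y[0] = 0.4, 0.2
for t in range(N - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])

skill_small = ccm_skill(x[:100], y[:100])    # short library
skill_large = ccm_skill(x, y)                # long library
```

Convergence, i.e. skill increasing with library length (compare `skill_small` with `skill_large`), is what distinguishes true causal coupling from mere correlation in the CCM framework.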

  18. Applied mechanics of the Puricelli osteotomy: a linear elastic analysis with the finite element method

    Directory of Open Access Journals (Sweden)

    de Paris Marcel

    2007-11-01

Full Text Available Abstract Background Surgical orthopedic treatment of the mandible depends on the development of techniques resulting in adequate healing processes. In a new technical and conceptual alternative recently introduced by Puricelli, osteotomy is performed in a more distal region, next to the mental foramen. The method results in an increased area of bone contact, allowing larger sliding rates between bone segments. This work aimed to investigate the mechanical stability of the Puricelli osteotomy design. Methods Laboratory tests complied with an Applied Mechanics protocol, in which results from the Control group (without osteotomy) were compared with those from the Test I (Obwegeser-Dal Pont osteotomy) and Test II (Puricelli osteotomy) groups. Edentulous mandible prototypes were scanned using computerized tomography, and the digitalized images were used to build voxel-based finite element models. A new code was developed for solving the voxel-based finite element equations, using a preconditioned conjugate gradient iterative solver. The magnitude of displacement and von Mises equivalent stress fields were compared among the three groups. Results In Test group I, the maximum stress was seen in the region of the rigid internal fixation plate, with a value greater than those of the Test II and Control groups. In Test group II, the maximum stress was in the same region as in the Control group, but was lower. The results of this comparative study using finite element analysis suggest that the Puricelli osteotomy presents better mechanical stability than the original Obwegeser-Dal Pont technique. The increased area of the proximal segment and the consequent decrease of the lever arm applied to the mandible in the modified technique yielded lower stress values and consequently greater stability of the bone segments.
Conclusion This work showed that Puricelli osteotomy of the mandible results in greater mechanical stability when compared to the original technique introduced by
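The iterative solver mentioned in the abstract can be illustrated with a minimal example. This is a plain (unpreconditioned) conjugate gradient sketch for a symmetric positive-definite system, the building block on which a preconditioned voxel-based finite element solver would rest; the 2x2 system at the bottom is a made-up test case, not data from the study.

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x           # residual
    p = r.copy()            # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x

# small SPD test system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg(A, b)
```

In a voxel-based FE model, `A` is the (sparse) global stiffness matrix and `b` the load vector; preconditioning replaces `r` by `M^-1 r` with a cheap approximation `M` of `A` to cut the iteration count.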

  19. Infrared thermography inspection methods applied to the target elements of W7-X Divertor

    International Nuclear Information System (INIS)

    Missirlian, M.; Durocher, A.; Schlosser, J.; Farjon, J.-L.; Vignal, N.; Traxler, H.; Schedler, B.; Boscary, J.

    2006-01-01

As the heat exhaust capability and lifetime of a plasma-facing component (PFC) during in-situ operation are linked to manufacturing quality, a set of non-destructive tests must be performed during the R-and-D and manufacturing phases. Within this framework, advanced non-destructive examination (NDE) methods are one of the key issues in achieving a high level of quality and reliability of joining techniques in the production of high heat flux components, but also in successfully developing and building PFCs for the next generation of fusion devices. In this frame, two NDE infrared thermographic approaches, which have recently been applied to the qualification of CFC target elements of the W7-X divertor during the first series production, are discussed in this paper. The first one, developed by CEA (SATIR facility) and used successfully for the control of the mass-produced actively cooled PFCs on Tore Supra, is based on transient thermography, where the testing protocol consists of inducing a thermal transient within the heat sink structure by an alternating hot/cold water flow. The second one, recently developed by PLANSEE (ARGUS facility), is based on pulsed thermography, where the component is heated externally by a single powerful flash of light. Results obtained from qualification experiments performed during the first series production of W7-X divertor components, representing about thirty mock-ups with artificial and manufacturing defects, demonstrated the capabilities of these two methods and raised the efficiency of inspection to a level which is appropriate for industrial application. This comparative study, associated with a cross-checking analysis between the high heat flux performance tests and these infrared thermography inspection methods, showed good reproducibility and allowed a detection limit specific to each method to be established. Finally, the detectability of relevant defects showed excellent coincidence with thermal images obtained from high heat flux

  20. Oil Spill Trajectories from HF Radars: Applied Dynamical Systems Methods vs. a Lagrangian Stochastic Model

    Science.gov (United States)

    Emery, B. M.; Washburn, L.; Mezic, I.; Loire, S.; Arbabi, H.; Ohlmann, C.; Harlan, J.

    2016-02-01

We apply several analysis methods to HF radar ocean surface current maps to investigate improvements in trajectory modeling. Results from a Lagrangian Stochastic Model (LSM) are compared with methods based on dynamical systems theory: hypergraphs and Koopman mode analysis. The LSM produces trajectories by integrating Eulerian fields from the HF radar, and accounts for sub-grid-scale velocity variability by including a random component based on the Lagrangian decorrelation time. Hypergraphs also integrate the HF radar maps in time, showing areas of strain, strain-rotation, and mixing by plotting the relative strengths of the eigenvalues of the gradient of the time-averaged Lagrangian velocity. Koopman mode analysis decomposes the velocity field into modes of variability, similarly to EOF or Fourier analysis, though each Koopman mode varies in time with a distinct frequency. Each method simulates oil drift from the oil spill of May 2015 that occurred within the coverage area of the HF radars, in the Santa Barbara Channel near Refugio Beach, CA. Preliminary results indicate some skill in determining the transport of oil when compared to publicly available observations of oil in the Santa Barbara Channel. These simulations have not shown a connection between the Refugio spill site and oil observations in Santa Monica Bay, near Los Angeles, CA, though accumulation zones shown by the hypergraphs correlate in time and space with these observations. Improvements in HF radar coverage and accuracy were obtained during the spill by the deployment of an additional HF radar site near Gaviota, CA. Presently we are collecting observations of oil on beaches and in the ocean, determining the role of winds in the oil movement, and refining the methods. Some HF radar data are being post-processed to incorporate recent antenna calibrations for sites in Santa Monica Bay. We will evaluate the effects of the newly processed data on the analysis results.
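The LSM idea (mean Eulerian drift plus an autocorrelated random velocity) can be sketched as follows. This is a hypothetical, deliberately simplified illustration: a steady, spatially uniform current stands in for the gridded HF radar fields, and the random component is a first-order autoregressive process whose memory is set by the Lagrangian decorrelation time `t_l`.

```python
import numpy as np

def lsm_trajectories(u_mean, v_mean, n_steps, dt,
                     sigma=0.05, t_l=3600.0, n_particles=100, seed=0):
    """First-order Lagrangian stochastic model: particles drift with the
    mean (Eulerian) velocity plus a random velocity that decorrelates
    over the Lagrangian time scale t_l (an AR(1) / Markov process)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles); y = np.zeros(n_particles)
    up = np.zeros(n_particles); vp = np.zeros(n_particles)
    a = 1.0 - dt / t_l                    # AR(1) memory coefficient
    b = sigma * np.sqrt(2.0 * dt / t_l)   # keeps Var(u') ~ sigma**2
    for _ in range(n_steps):
        up = a * up + b * rng.standard_normal(n_particles)
        vp = a * vp + b * rng.standard_normal(n_particles)
        x += (u_mean + up) * dt
        y += (v_mean + vp) * dt
    return x, y

# 6 h of 60 s steps in a steady 0.1 m/s eastward current
xf, yf = lsm_trajectories(u_mean=0.1, v_mean=0.0, n_steps=360, dt=60.0)
```

With dt = 60 s and t_l = 1 h, each particle's random velocity decorrelates over about an hour, so the particle cloud spreads around the 2.16 km mean eastward displacement; in a real application `u_mean`/`v_mean` would be replaced by the HF radar currents interpolated to each particle position and time.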

  1. Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method

    Energy Technology Data Exchange (ETDEWEB)

    Bhagwat, Nikhil V. [Iowa State Univ., Ames, IA (United States)

    2005-01-01

In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near-optimal solutions (optimal in some cases). Moreover, the execution time is quite short compared to the Branch and Bound algorithm. However, for larger data sets the algorithm took significantly longer to execute. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself, which requires the assignment of tasks to stations. The time taken to assign tasks to stations is directly proportional to the number of tasks; thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time of the samples could be used as another performance index. The ULINO method is known to have used a combination of bounds to come up with good solutions. This approach of combining different performance indices could be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be significant for industries wherein the cost associated with creating a new station is not uniform; for such industries, the results obtained using the present approach will not be of much value. Labor costs, task-incompletion costs, or a combination of these could be effectively used as alternative objective functions.
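The overall Nested Partitions loop — partition the most promising region, sample each subregion plus the surrounding region, move to the region with the best sample, backtrack when the surrounding region wins — can be sketched on a toy problem. Everything below is a hypothetical illustration over binary strings (not the thesis's U-line encoding), with the promising index taken as the best sampled objective value.

```python
import random

def nested_partitions(n, f, n_samples=8, max_iters=300, seed=1):
    """Generic Nested Partitions skeleton over {0,1}^n.

    A region is a fixed prefix of bits; it is partitioned by fixing the
    next bit, and the 'surrounding region' is everything outside the
    current prefix."""
    rng = random.Random(seed)

    def sample_in(prefix):
        return tuple(prefix) + tuple(rng.randint(0, 1)
                                     for _ in range(n - len(prefix)))

    def sample_out(prefix):
        s = [rng.randint(0, 1) for _ in range(n)]
        if tuple(s[:len(prefix)]) == tuple(prefix):
            s[rng.randrange(len(prefix))] ^= 1  # force sample outside
        return tuple(s)

    prefix, best_x, best_v = [], None, float("inf")
    for _ in range(max_iters):
        # candidate regions: two subregions, plus the surrounding region
        scores = []
        for c in (prefix + [0], prefix + [1]):
            xs = [sample_in(c) for _ in range(n_samples)]
            vs = [f(s) for s in xs]
            i = min(range(n_samples), key=vs.__getitem__)
            scores.append((vs[i], xs[i], c))
        if prefix:
            xs = [sample_out(prefix) for _ in range(n_samples)]
            vs = [f(s) for s in xs]
            i = min(range(n_samples), key=vs.__getitem__)
            scores.append((vs[i], xs[i], None))  # None marks surrounding
        v, s, region = min(scores, key=lambda t: t[0])
        if v < best_v:
            best_v, best_x = v, s
        prefix = [] if region is None else region  # move or backtrack
        if len(prefix) == n:                       # singleton: restart
            prefix = []
    return best_x, best_v

# toy objective: Hamming distance to a hidden target string
target = (1, 0, 1, 1, 0, 0, 1, 0)
f = lambda s: sum(a != b for a, b in zip(s, target))
best_x, best_v = nested_partitions(len(target), f)
```

In the U-line setting, a "sample" would instead be a full assignment of tasks to stations and `f` the number of stations (or total idle time, as the study suggests).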

  2. Design and fabrication of facial prostheses for cancer patient applying computer aided method and manufacturing (CADCAM)

    Science.gov (United States)

    Din, Tengku Noor Daimah Tengku; Jamayet, Nafij; Rajion, Zainul Ahmad; Luddin, Norhayati; Abdullah, Johari Yap; Abdullah, Abdul Manaf; Yahya, Suzana

    2016-12-01

Facial defects are either congenital or caused by trauma or cancer, and most of them affect the person's appearance. Emotional pressure and low self-esteem are problems commonly related to patients with facial defects. To overcome this problem, a silicone prosthesis was designed to cover the defective part. This study describes the techniques for designing and fabricating a facial prosthesis applying computer aided design and manufacturing (CADCAM). The steps of fabricating the facial prosthesis were based on a patient case. The patient was diagnosed with Gorlin-Goltz syndrome and came to Hospital Universiti Sains Malaysia (HUSM) for a prosthesis. The 3D image of the patient was reconstructed from CT data using MIMICS software. Based on the 3D image, the intercanthal and zygomatic measurements of the patient were compared with available data in the database to find a suitable nose shape. The normal nose shape for the patient was retrieved from the nasal digital library. A mirror imaging technique was used to mirror the facial part. The final design of the facial prosthesis, including eye, nose and cheek, was superimposed to view the result virtually. After the final design was confirmed, the mould design was created. The mould of the nasal prosthesis was printed using an Objet 3D printer. Silicone casting was done using the 3D-printed mould. The final prosthesis produced by the computer aided method was acceptable for use in facial rehabilitation, providing a better quality of life.

  3. Goal oriented soil mapping: applying modern methods supported by local knowledge: A review

    Science.gov (United States)

    Pereira, Paulo; Brevik, Eric; Oliva, Marc; Estebaranz, Ferran; Depellegrin, Daniel; Novara, Agata; Cerda, Artemi; Menshov, Oleksandr

    2017-04-01

In recent years the amount of soil data available has increased substantially. This has facilitated the production of better and more accurate maps, important for sustainable land management (Pereira et al., 2017). Despite these advances, human knowledge remains extremely important for understanding the natural characteristics of the landscape. The knowledge accumulated and transmitted generation after generation is priceless, and should be considered a valuable data source for soil mapping and modelling. Local knowledge and wisdom can complement the new advances in soil analysis. In addition, farmers are the most interested in the participation and incorporation of their knowledge in the models, since they are the end-users of the studies that soil scientists produce. Integration of local communities' vision and understanding of nature is assumed to be an important step in the implementation of decision makers' policies. Despite this, many challenges arise regarding the integration of local and scientific knowledge, since in some cases there is no spatial correlation between folk and scientific classifications, which may be attributed to the different cultural variables that influence local soil classification. The objective of this work is to review how modern soil methods have incorporated local knowledge in their models. References Pereira, P., Brevik, E., Oliva, M., Estebaranz, F., Depellegrin, D., Novara, A., Cerda, A., Menshov, O. (2017) Goal Oriented soil mapping: applying modern methods supported by local knowledge. In: Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B. (Eds.) Soil mapping and process modelling for sustainable land use management (Elsevier Publishing House) ISBN: 9780128052006

  4. [An experimental assessment of methods for applying intestinal sutures in intestinal obstruction].

    Science.gov (United States)

    Akhmadudinov, M G

    1992-04-01

The results of various methods of applying intestinal sutures in obturation were studied. Three series of experiments were conducted on 30 dogs: resection of the intestine after obstruction with the formation of anastomoses by means of a double-row suture (Albert-Schmieden-Lambert) in the first series (10 dogs), by a single-row suture after V. M. Mateshuk in the second series, and by a single-row stretching suture suggested by the author in the third series. The postoperative complications and the parameters of physical airtightness of the intestinal anastomosis were studied over time in the experimental animals. The results of the study: incompetence of the anastomosis sutures in 6 animals of the first series, 4 of the second, and one of the third. Adhesions occurred in all animals of the first and second series and in 2 of the third. Six dogs of the first series died, 4 of the second, and one of the third. Study of the dynamics of the results showed a direct connection between the complications and the parameters of physical airtightness of the anastomosis, and between the latter and the method of intestinal suture. Relatively better results were noted when the anastomosis was formed by means of the author's suggested stretching continuous suture passed through the serous, muscular, and submucous coats of the intestine.

  5. Contemporary Arctic Sea Level

    Science.gov (United States)

    Cazenave, A. A.

    2017-12-01

During recent decades, the Arctic region has warmed at a rate about twice that of the rest of the globe. Sea ice melting is increasing and the Greenland ice sheet is losing mass at an accelerated rate. Arctic warming, the decrease in sea ice cover and fresh water input to the Arctic Ocean may eventually impact Arctic sea level. In this presentation, we review our current knowledge of contemporary Arctic sea level changes. Until the beginning of the 1990s, Arctic sea level variations were essentially deduced from tide gauges located along the Russian and Norwegian coastlines. Since then, high-inclination satellite altimetry missions have allowed measuring sea level over a large portion of the Arctic Ocean (up to 80 degrees north). Measuring sea level in the Arctic by satellite altimetry is challenging because the presence of sea ice cover limits the full capacity of this technique. However, adapted processing of raw altimetric measurements significantly increases the number of valid data, hence the data coverage, from which regional sea level variations can be extracted. Over the altimetry era, positive trend patterns are observed over the Beaufort Gyre and along the east coast of Greenland, while negative trends are reported along the Siberian shelf. On average over the Arctic region covered by satellite altimetry, the rate of sea level rise since 1992 is slightly less than the global mean sea level rate (of about 3 mm per year). On the other hand, the interannual variability is quite significant. Space gravimetry data from the GRACE mission and ocean reanalyses provide information on the mass and steric contributions to sea level, hence on the sea level budget. Budget studies show that regional sea level trends over the Beaufort Gyre and along the eastern coast of Greenland are essentially due to salinity changes. However, in terms of the regional average, the net steric component contributes little to the observed sea level trend. The sea level budget in the Arctic

  6. A new sub-equation method applied to obtain exact travelling wave solutions of some complex nonlinear equations

    International Nuclear Information System (INIS)

    Zhang Huiqun

    2009-01-01

By using new coupled Riccati equations, a direct algebraic method, which was applied to obtain exact travelling wave solutions of some complex nonlinear equations, is improved. The exact travelling wave solutions of the complex KdV equation, the Boussinesq equation and the Klein-Gordon equation are investigated using the improved method. The method presented in this paper can also be applied to construct exact travelling wave solutions for other nonlinear complex equations.
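For orientation, the single (uncoupled) Riccati sub-equation that methods of this family build on — the paper's coupled system generalizes it — is the ODE below, whose solutions supply the tanh/coth and tan building blocks of the travelling wave solutions:

```latex
\varphi'(\xi) = \sigma + \varphi^{2}(\xi), \qquad
\varphi(\xi) =
\begin{cases}
-\sqrt{-\sigma}\,\tanh\!\big(\sqrt{-\sigma}\,\xi\big)
  \ \text{ or }\
  -\sqrt{-\sigma}\,\coth\!\big(\sqrt{-\sigma}\,\xi\big), & \sigma < 0,\\[4pt]
\sqrt{\sigma}\,\tan\!\big(\sqrt{\sigma}\,\xi\big), & \sigma > 0,\\[4pt]
-1/\xi, & \sigma = 0.
\end{cases}
```

In a sub-equation method, the travelling wave ansatz $u(\xi)$ is expanded as a finite polynomial in $\varphi(\xi)$, and balancing the highest-order derivative against the strongest nonlinear term fixes the degree of that polynomial.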

  7. Comparison of virological methods applied on african swine fever diagnosis in Brazil, 1978

    Directory of Open Access Journals (Sweden)

    Tânia Rosária Pereira Freitas

    2015-10-01

Full Text Available ABSTRACT. Freitas T.R.P., Souza A.C., Esteves E.G. & Lyra T.M.P. [Comparison of virological methods applied to African swine fever diagnosis in Brazil, 1978.] Comparação dos métodos virológicos aplicados no diagnóstico da peste suína africana no Brasil, 1978. Revista Brasileira de Medicina Veterinária, 37(3):255-263, 2015. Laboratório Nacional Agropecuário, Ministério da Agricultura, Pecuária e Abastecimento, Avenida Rômulo Joviano, s/n, Caixa postal 35/50, Pedro Leopoldo, MG 33600-000, Brasil. taniafrei@hotmail.com The techniques of leucocyte haemadsorption (HAD) for African Swine Fever (ASF) virus isolation and fluorescent antigen tissue samples (FATS) for virus antigen detection were deployed in the ASF eradication campaign in the country. The complementarity of the techniques was studied by considering the results obtained when HAD and FATS were concomitantly applied to the same pig tissue samples. The results for 22, 56 and 30 pig samples from the States of Rio de Janeiro (RJ), São Paulo (SP) and Paraná (PR), respectively, showed that in RJ 11 (50%), in SP 28 (50%) and in PR 15 (50%) samples were positive in HAD, while in RJ 18 (82%), in SP 33 (58%) and in PR 17 (57%) were positive in FATS. In the universe of 108 samples submitted to both tests, 83 (76.85%) were positive in at least one of the tests, which characterized ASF positivity. Among the positive samples, 28 (34%) presented negative HAD results and 15 (18%) presented negative FATS results. The achievement of applying both tests simultaneously was the reduction of false-negative results, conferring a more accurate ASF laboratory diagnosis, besides demonstrating the complementarity of the tests. This aspect is of fundamental importance for a disease eradication program, which must avoid false-negative results. Evidence of low-virulence ASFV strains in Brazilian ASF outbreaks and the distribution of ASF outbreaks by the mesoregions of each State are also discussed

  8. A mixed methods evaluation of team-based learning for applied pathophysiology in undergraduate nursing education.

    Science.gov (United States)

    Branney, Jonathan; Priego-Hernández, Jacqueline

    2018-02-01

It is important for nurses to have a thorough understanding of the biosciences, such as pathophysiology, that underpin nursing care. These courses include content that can be difficult to learn. Team-based learning is emerging as a strategy for enhancing learning in nurse education due to its promotion of individual learning as well as learning in teams. In this study we sought to evaluate the use of team-based learning in the teaching of applied pathophysiology to undergraduate student nurses. A mixed methods observational study. In a year-two undergraduate nursing applied pathophysiology module, circulatory shock was taught using Team-based Learning while all remaining topics were taught using traditional lectures. After the Team-based Learning intervention the students were invited to complete the Team-based Learning Student Assessment Instrument, which measures accountability, preference and satisfaction with Team-based Learning. Students were also invited to focus group discussions to gain a more thorough understanding of their experience with Team-based Learning. Exam scores for answers to questions based on Team-based Learning-taught material were compared with those from lecture-taught material. Of the 197 students enrolled on the module, 167 (85% response rate) returned the instrument, the results of which indicated a favourable experience with Team-based Learning. Most students reported higher accountability (93%) and satisfaction (92%) with Team-based Learning. Lectures that promoted active learning were viewed as an important feature of the university experience, which may explain why 76% exhibited a preference for Team-based Learning. Most students wanted to make a meaningful contribution so as not to let their team down, and they saw a clear relevance between the Team-based Learning activities and their own experiences of teamwork in clinical practice.
Exam scores on the question related to Team-based Learning-taught material were comparable to those

  9. Method developments approaches in supercritical fluid chromatography applied to the analysis of cosmetics.

    Science.gov (United States)

    Lesellier, E; Mith, D; Dubrulle, I

    2015-12-04

    necessary, two-step gradient elution. The developed methods were then applied to real cosmetic samples to assess the method specificity, with regards to matrix interferences, and calibration curves were plotted to evaluate quantification. Besides, depending on the matrix and on the studied compounds, the importance of the detector type, UV or ELSD (evaporative light-scattering detection), and of the particle size of the stationary phase is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Can we constrain postglacial sedimentation in the western Arctic Ocean by ramped pyrolysis 14C? A case study from the Chukchi-Alaskan margin.

    Science.gov (United States)

    Suzuki, K.; Yamamoto, M.; Rosenheim, B. E.; Omori, T.; Polyak, L.; Nam, S. I.

    2017-12-01

The Arctic Ocean underwent dramatic climate changes in the past. Variations in sea ice extent and the ocean current system in the Arctic cause changes in surface albedo and deep water formation, which have global climatic implications. However, Arctic paleoceanographic studies are lagging behind those of the other oceans, due largely to chronostratigraphic difficulties. One of the reasons for this is the scant presence of material suitable for 14C dating in large areas of the Arctic seafloor. To enable improved age constraints for sediments impoverished in datable material, we apply the ramped pyrolysis 14C method (Ramped PyrOx 14C, Rosenheim et al., 2008) to sedimentary records from the Chukchi-Alaska margin recovering Holocene to late-glacial deposits. Samples were divided into five fraction products by gradually heating sedimentary organic carbon from ambient laboratory temperature to 1000°C. The thermographs show a trimodal pattern of organic matter decomposition over temperature, and we consider that CO2 generated in the lowest temperature range was derived from autochthonous organic carbon contemporaneous with sediment deposition, similar to studies at the Antarctic margin and elsewhere. For verification of the results, some of the samples treated for ramped pyrolysis 14C were taken from intervals dated earlier by AMS 14C using bivalve mollusks. Ultimately, our results allow a new appraisal of deglacial to Holocene deposition at the Chukchi-Alaska margin, with potential to be applied to other regions of the Arctic Ocean.

  11. Collaborative Research: Improving Decadal Prediction of Arctic Climate Variability and Change Using a Regional Arctic

    Energy Technology Data Exchange (ETDEWEB)

    Gutowski, William J. [Iowa State Univ., Ames, IA (United States)

    2017-12-28

    This project developed and applied a regional Arctic System model for enhanced decadal predictions. It built on successful research by four of the current PIs with support from the DOE Climate Change Prediction Program, which has resulted in the development of a fully coupled Regional Arctic Climate Model (RACM) consisting of atmosphere, land-hydrology, ocean and sea ice components. An expanded RACM, a Regional Arctic System Model (RASM), has been set up to include ice sheets, ice caps, mountain glaciers, and dynamic vegetation to allow investigation of coupled physical processes responsible for decadal-scale climate change and variability in the Arctic. RASM can have high spatial resolution (~4-20 times higher than currently practical in global models) to advance modeling of critical processes and determine the need for their explicit representation in Global Earth System Models (GESMs). The pan-Arctic region is a key indicator of the state of global climate through polar amplification. However, a system-level understanding of critical arctic processes and feedbacks needs further development. Rapid climate change has occurred in a number of Arctic System components during the past few decades, including retreat of the perennial sea ice cover, increased surface melting of the Greenland ice sheet, acceleration and thinning of outlet glaciers, reduced snow cover, thawing permafrost, and shifts in vegetation. Such changes could have significant ramifications for global sea level, the ocean thermohaline circulation and heat budget, ecosystems, native communities, natural resource exploration, and commercial transportation. The overarching goal of the RASM project has been to advance understanding of past and present states of arctic climate and to improve seasonal to decadal predictions. To do this the project has focused on variability and long-term change of energy and freshwater flows through the arctic climate system. 
The three foci of this research are: - Changes

  12. Satellite Observations of Arctic Change

    Data.gov (United States)

    National Aeronautics and Space Administration — The purpose of this site is to expose NASA satellite data and research on Arctic change in the form of maps that illustrate the changes taking place in the Arctic...

  13. Arctic Rabies – A Review

    Directory of Open Access Journals (Sweden)

    Prestrud Pål

    2004-03-01

    Full Text Available Rabies seems to persist throughout most arctic regions, and the northern parts of Norway, Sweden and Finland are the only parts of the Arctic where rabies has not been diagnosed in recent times. The arctic fox is the main host, and the same arctic virus variant seems to infect the arctic fox throughout the range of this species. The epidemiology of rabies has certain common characteristics across arctic regions, but key questions such as how the disease is maintained and spread remain largely unanswered. The virus has also spread to and initiated new epidemics in other species such as the red fox and the raccoon dog. Large land areas and a cold climate complicate control of the disease, but experimental oral vaccination of arctic foxes has been successful. This article summarises the current knowledge and the typical characteristics of arctic rabies, including its distribution and epidemiology.

  14. Arctic Political Discourses: Constructing a Russian National Identity

    OpenAIRE

    Karlsen, Kristofer

    2016-01-01

    This research explores how Russian national identity is constructed through political discourses pertaining to the Arctic. Theoretically this thesis addresses how national identity is constructed through these discourses and subsequently how this identity is used to justify Russia’s Arctic policy to a domestic as well as an international audience. In order to achieve this a hybrid methodology combining critical discourse analysis and political discourse analysis was applied to two forms of po...

  15. Arctic dimension of global warming

    OpenAIRE

    G. V. Alekseev

    2014-01-01

    A brief assessment of global warming in the Arctic climate system, with emphasis on sea ice, is presented. The Arctic region is coupled to the global climate system by the atmosphere and ocean circulation, which provides a major contribution to the Arctic energy budget. On this basis, using special indices, it is shown that the amplification of warming in the Arctic is associated with increased meridional heat transport from low latitudes.

  16. Non-parametric order statistics method applied to uncertainty propagation in fuel rod calculations

    International Nuclear Information System (INIS)

    Arimescu, V.E.; Heins, L.

    2001-01-01

    A computationally efficient method is presented for the evaluation of the global statement. It is proved that the expected fraction r of fuel rods exceeding a certain limit is equal to the (1-r)-quantile of the overall distribution of all possible values from all fuel rods. In this way, the problem is reduced to estimating a certain quantile of the overall distribution, and the same techniques used for a single-rod distribution can be applied again. A simplified test case was devised to verify and validate the methodology: the fuel code was replaced by a transfer function of two input parameters, chosen so that analytic results could be obtained for the distribution of the output, which offers a direct validation of the statistical procedure. A sensitivity study was performed to analyze the effect of the sampling procedure (simple Monte Carlo versus Latin hypercube sampling) on the final outcome. The effect of sample size on the accuracy and bias of the statistical results was also studied, leading to the conclusion that the results of the statistical methodology are typically conservative. Finally, an example of applying these statistical techniques to a PWR reload is presented, together with the improvements and new insights the statistical methodology brings to fuel rod design calculations. (author)
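    The order-statistics reasoning in this abstract can be sketched numerically. The snippet below is a minimal illustration, not the authors' code: it computes the classic Wilks sample size for a one-sided tolerance bound and estimates a quantile from samples of a stand-in transfer function of two inputs; the function and input distributions are invented for the sketch.

```python
import random

random.seed(42)

def wilks_sample_size(gamma=0.95, beta=0.95):
    """Smallest N such that the largest of N random outputs bounds the
    gamma-quantile with confidence beta: 1 - gamma**N >= beta."""
    n = 1
    while 1 - gamma**n < beta:
        n += 1
    return n

def empirical_quantile(samples, q):
    """Plain order-statistic estimate of the q-quantile."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(q * len(s)))]

# Stand-in "fuel code": a transfer function of two uncertain inputs,
# echoing the simplified test case described above (names illustrative).
def transfer(x, y):
    return x + 0.5 * y * y

outputs = [transfer(random.gauss(0.0, 1.0), random.uniform(-1.0, 1.0))
           for _ in range(1000)]
n95 = wilks_sample_size()            # classic one-sided 95/95 result: 59 runs
q95 = empirical_quantile(outputs, 0.95)
```

    With more samples the estimated quantile converges, and the conservative character noted in the abstract comes from using an order statistic as a bound rather than a point estimate.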

  17. Analytical Methods INAA and PIXE Applied to Characterization of Airborne Particulate Matter in Bandung, Indonesia

    Directory of Open Access Journals (Sweden)

    D.D. Lestiani

    2011-08-01

    Full Text Available Urbanization and industrial growth have deteriorated air quality and are major causes of air pollution. Air pollution through fine and ultra-fine particles is a serious threat to human health. The sources of air pollution must be known quantitatively through elemental characterization in order to design appropriate air quality management. Suitable methods for the analysis of airborne particulate matter, such as nuclear analytical techniques, are therefore needed to address the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess their accuracy, and to ensure the reliability of the analytical results through a comparison of instrumental neutron activation analysis (INAA) and particle-induced X-ray emission (PIXE). Particle samples in the PM2.5 and PM2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed a generally systematic difference between INAA and PIXE, with the values obtained by PIXE lower than those determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, so INAA data are preferred for these elements, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA remains a prospective method, while PIXE, with its special capabilities, is a promising tool that can complement NAA in the determination of lead, sulphur and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment.

  18. Stochastic Methods Applied to Power System Operations with Renewable Energy: A Review

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Liu, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Electric Reliability Council of Texas (ERCOT), Austin, TX (United States)]; Botterud, A. [Argonne National Lab. (ANL), Argonne, IL (United States)]

    2016-08-01

    Renewable energy resources have been rapidly integrated into power systems in many parts of the world, contributing to a cleaner and more sustainable supply of electricity. Wind and solar resources also introduce new challenges for system operations and planning in terms of economics and reliability because of their variability and uncertainty. Operational strategies based on stochastic optimization have been developed recently to address these challenges. In general terms, these stochastic strategies either embed uncertainties into the scheduling formulations (e.g., the unit commitment [UC] problem) in probabilistic forms or develop more appropriate operating reserve strategies to take advantage of advanced forecasting techniques. Other approaches to address uncertainty are also proposed, where operational feasibility is ensured within an uncertainty set of forecasting intervals. In this report, a comprehensive review is conducted to present the state of the art through Spring 2015 in the area of stochastic methods applied to power system operations with high penetration of renewable energy. Chapters 1 and 2 give a brief introduction and overview of power system and electricity market operations, as well as the impact of renewable energy and how this impact is typically considered in modeling tools. Chapter 3 reviews relevant literature on operating reserves and specifically probabilistic methods to estimate the need for system reserve requirements. Chapter 4 looks at stochastic programming formulations of the UC and economic dispatch (ED) problems, highlighting benefits reported in the literature as well as recent industry developments. Chapter 5 briefly introduces alternative formulations of UC under uncertainty, such as robust, chance-constrained, and interval programming. Finally, in Chapter 6, we conclude with the main observations from our review and important directions for future work.
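    The idea of embedding uncertainty directly into the scheduling problem, as the abstract describes, can be shown with a deliberately tiny two-stage example: one thermal output decision is fixed before wind is realized, and each wind scenario is priced with expensive recourse reserve. All numbers (demand, costs, scenarios) are invented for this sketch, and the brute-force grid search stands in for the mixed-integer stochastic programs used in real unit commitment.

```python
# Demand, costs, and wind scenarios are invented for this illustration.
demand = 100.0
scenarios = [(0.5, 30.0), (0.5, 10.0)]   # (probability, wind output in MW)
c_gen, c_reserve = 20.0, 80.0            # $/MWh for thermal and recourse reserve

def expected_cost(g):
    """First-stage thermal output g; recourse reserve covers any shortfall."""
    cost = c_gen * g
    for p, w in scenarios:
        shortfall = max(0.0, demand - w - g)
        cost += p * c_reserve * shortfall
    return cost

# Brute-force the first-stage decision on a 0.1 MW grid.
best_g = min((round(0.1 * i, 1) for i in range(0, 1001)), key=expected_cost)
```

    The optimum hedges against the low-wind scenario: thermal output is raised until no scenario needs the costly reserve, which is exactly the trade-off a stochastic unit commitment formalizes at scale.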

  19. The Global Survey Method Applied to Ground-level Cosmic Ray Measurements

    Science.gov (United States)

    Belov, A.; Eroshenko, E.; Yanke, V.; Oleneva, V.; Abunin, A.; Abunina, M.; Papaioannou, A.; Mavromichalaki, H.

    2018-04-01

    The global survey method (GSM) technique unites simultaneous ground-level observations of cosmic rays in different locations and allows us to obtain the main characteristics of cosmic-ray variations outside of the atmosphere and magnetosphere of Earth. This technique has been developed and applied in numerous studies over many years by the Institute of Terrestrial Magnetism, Ionosphere and Radiowave Propagation (IZMIRAN). We here describe the IZMIRAN version of the GSM in detail. With this technique, the hourly data of the world-wide neutron-monitor network from July 1957 until December 2016 were processed, and further processing is enabled upon the receipt of new data. The result is a database of homogeneous and continuous hourly characteristics of the density variations (an isotropic part of the intensity) and the 3D vector of the cosmic-ray anisotropy. It includes all of the effects that could be identified in galactic cosmic-ray variations that were caused by large-scale disturbances of the interplanetary medium in more than 50 years. These results in turn became the basis for a database on Forbush effects and interplanetary disturbances. This database allows correlating various space-environment parameters (the characteristics of the Sun, the solar wind, et cetera) with cosmic-ray parameters and studying their interrelations. We also present features of the coupling coefficients for different neutron monitors that enable us to make a connection from ground-level measurements to primary cosmic-ray variations outside the atmosphere and the magnetosphere. We discuss the strengths and weaknesses of the current version of the GSM as well as further possible developments and improvements. The method developed allows us to minimize the problems of the neutron-monitor network, which are typical for experimental physics, and to considerably enhance its advantages.

  20. Strategic War Game - Arctic Response

    Science.gov (United States)

    2010-11-01

    Keywords: Arctic, Game Theory, Strategic Analysis, War Game. Strategic War Game – Arctic Response, by A. P. Billyard, I. A. Collin, and H. A. Hrychuk, Canadian Forces Aerospace Warfare Centre, Operational Research.

  1. Multicriterial Hierarchy Methods Applied in Consumption Demand Analysis. The Case of Romania

    Directory of Open Access Journals (Sweden)

    Constantin Bob

    2008-03-01

    Full Text Available The basic information for computing the quantitative statistical indicators that characterize the demand for industrial products and services is collected by national statistics organizations through a series of statistical surveys (most of them periodical and partial). The data source used in the present paper is a statistical investigation organized by the National Institute of Statistics, the "Family budgets survey", which collects information regarding household composition, income, expenditure, consumption and other aspects of the population's living standard. In 2005, in Romania, a person spent on average 391.2 RON monthly, about 115.1 euros, on food products and beverages as well as non-food products, services, investments and other taxes. Of this sum, 23% was spent on food products and beverages, 21.6% on non-food goods, and 18.1% on payment for various services. There are discrepancies between the different development regions of Romania regarding the composition of total household expenditure. For this reason, in the present paper we applied statistical methods for ranking the development regions of Romania, using the shares of household expenditure on categories of products and services as ranking criteria.
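    The ranking exercise described above can be sketched with a simple rank-sum aggregation over criteria. The region names and expenditure shares below are hypothetical placeholders, and summing ranks is only one of several multicriteria hierarchy methods; the paper's exact scheme may differ.

```python
# Hypothetical expenditure shares (%) per region and category; the real
# figures come from the "Family budgets survey", not from this sketch.
regions = {
    "North-East": (28.0, 20.5, 15.0),   # (food, non-food, services)
    "Bucharest":  (19.5, 22.0, 21.0),
    "West":       (23.0, 21.0, 18.0),
}

def rank_on(criterion):
    """Rank regions on one criterion; 1 = smallest share."""
    ordered = sorted(regions, key=lambda r: regions[r][criterion])
    return {r: i + 1 for i, r in enumerate(ordered)}

# Aggregate by summing ranks over all criteria (one simple multicriteria
# hierarchy rule).
totals = {r: sum(rank_on(c)[r] for c in range(3)) for r in regions}
```

    Each criterion contributes one partial ordering; the aggregated total then gives a single hierarchy of regions, which is the structure the abstract's ranking produces.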

  2. Applying the Communicative Methodic in Learning Lithuanian as a Second Language

    Directory of Open Access Journals (Sweden)

    Vaida Buivydienė

    2011-04-01

    Full Text Available One of the strengths of European countries is their multilingual nature, as stressed by the European Council in various international projects. Every citizen of Europe should be given the opportunity to learn languages lifelong, as languages open new perspectives in the modern world. Besides, learning languages brings tolerance and understanding between people from different cultures. Based on the experience of foreign language teaching, the article argues that the communicative method of language learning should also be applied to the teaching of Lithuanian as a foreign language. Under the international SOCRATES exchange programme, many students and teachers from abroad come to Lithuanian higher schools (VGTU included) every year. They should also be provided with opportunities to gain the best language learning, cultural and educational experience. Most of the students who came to VGTU pointed out Lithuanian language learning as one of the subjects they chose, which calls for organizing interesting and useful short Lithuanian language courses. The survey carried out at VGTU and the analysis of the materials gathered lead to the conclusion that the communicative approach to language teaching is best suited to cater to the needs and interests of learners aiming to master survival Lithuanian.

  3. A new method of identifying target groups for pronatalist policy applied to Australia.

    Directory of Open Access Journals (Sweden)

    Mengni Chen

    Full Text Available A country's total fertility rate (TFR) depends on many factors. Attributing changes in TFR to changes of policy is difficult, as they could easily be correlated with changes in the unmeasured drivers of TFR. A case in point is Australia, where both pronatalist effort and TFR increased in lock step from 2001 to 2008 and then decreased. The global financial crisis or other unobserved confounders might explain both the reducing TFR and pronatalist incentives after 2008. Therefore, it is difficult to estimate causal effects of policy using econometric techniques. The aim of this study is to instead look at the structure of the population to identify which subgroups most influence TFR. Specifically, we build a stochastic model relating TFR to the fertility rates of various subgroups and calculate the elasticity of TFR with respect to each rate. For each subgroup, the ratio of its elasticity to its group size is used to evaluate the subgroup's potential cost effectiveness as a pronatalist target. In addition, we measure the historical stability of group fertility rates, which measures propensity to change. Groups with a high effectiveness ratio and also high propensity to change are natural policy targets. We applied this new method to Australian data on fertility rates broken down by parity, age and marital status. The results show that targeting parity 3+ is more cost-effective than lower parities. This study contributes to the literature on pronatalist policies by investigating the targeting of policies, and generates important implications for formulating cost-effective policies.
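    The elasticity-to-size screening ratio in this abstract can be sketched with a weighted-sum stand-in for the TFR model. The subgroup names, shares, and rates below are invented for illustration and do not reproduce the paper's Australian estimates or its parity-3+ finding.

```python
# Illustrative subgroups: (share of women, fertility rate); the numbers
# are invented, not the paper's data.
groups = {
    "parity0_married": (0.20, 0.09),
    "parity1_married": (0.15, 0.11),
    "parity2_married": (0.15, 0.06),
    "parity3plus":     (0.10, 0.04),
    "unmarried":       (0.40, 0.02),
}

# A weighted-sum stand-in for the paper's stochastic TFR model.
tfr = sum(w * f for w, f in groups.values())

def elasticity(name):
    """d(log TFR) / d(log f_g) when TFR = sum of w_g * f_g."""
    w, f = groups[name]
    return w * f / tfr

# Elasticity per unit of group size: the screening ratio used to judge a
# subgroup's potential cost effectiveness as a pronatalist target.
ratio = {g: elasticity(g) / groups[g][0] for g in groups}
```

    For a weighted sum, the elasticities across subgroups add up to one, so the ratio simply compares each group's rate to the overall TFR; the paper's model adds stochastic structure on top of this basic decomposition.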

  4. A new method of identifying target groups for pronatalist policy applied to Australia.

    Science.gov (United States)

    Chen, Mengni; Lloyd, Chris J; Yip, Paul S F

    2018-01-01

    A country's total fertility rate (TFR) depends on many factors. Attributing changes in TFR to changes of policy is difficult, as they could easily be correlated with changes in the unmeasured drivers of TFR. A case in point is Australia where both pronatalist effort and TFR increased in lock step from 2001 to 2008 and then decreased. The global financial crisis or other unobserved confounders might explain both the reducing TFR and pronatalist incentives after 2008. Therefore, it is difficult to estimate causal effects of policy using econometric techniques. The aim of this study is to instead look at the structure of the population to identify which subgroups most influence TFR. Specifically, we build a stochastic model relating TFR to the fertility rates of various subgroups and calculate elasticity of TFR with respect to each rate. For each subgroup, the ratio of its elasticity to its group size is used to evaluate the subgroup's potential cost effectiveness as a pronatalist target. In addition, we measure the historical stability of group fertility rates, which measures propensity to change. Groups with a high effectiveness ratio and also high propensity to change are natural policy targets. We applied this new method to Australian data on fertility rates broken down by parity, age and marital status. The results show that targeting parity 3+ is more cost-effective than lower parities. This study contributes to the literature on pronatalist policies by investigating the targeting of policies, and generates important implications for formulating cost-effective policies.

  5. Arctic security and Norway

    Energy Technology Data Exchange (ETDEWEB)

    Tamnes, Rolf

    2013-03-01

    Global warming is one of the most serious threats facing mankind. Many regions and countries will be affected, and there will be many losers. The earliest and most intense climatic changes are being experienced in the Arctic region. Arctic average temperature has risen at twice the rate of the global average in the past half century. These changes provide an early indication for the world of the environmental and societal significance of global warming. For that reason, the Arctic presents itself as an important scientific laboratory for improving our understanding of the causes and patterns of climate changes. The rapidly rising temperature threatens the Arctic ecosystem, but the human consequences seem to be far less dramatic there than in many other places in the world. According to the U.S. National Intelligence Council, Russia has the potential to gain the most from increasingly temperate weather, because its petroleum reserves become more accessible and because the opening of an Arctic waterway could provide economic and commercial advantages. Norway might also be fortunate. Some years ago, the Financial Times asked: "What should Norway do about the fact that global warming will make their climate more hospitable and enhance their financial situation, even as it inflicts damage on other parts of the world?" (Author)

  6. Conflict Resolution Practices of Arctic Aboriginal Peoples

    NARCIS (Netherlands)

    Gendron, R.; Hille, C.

    2013-01-01

    This article presents an overview of the conflict resolution practices of indigenous populations in the Arctic. Among the aboriginal groups discussed are the Inuit, the Aleut, and the Saami. Having presented the conflict resolution methods, the authors discuss the types of conflicts that are

  7. Beyond Thin Ice: Co-Communicating the Many Arctics

    Science.gov (United States)

    Druckenmiller, M. L.; Francis, J. A.; Huntington, H.

    2015-12-01

    Science communication, typically defined as informing non-expert communities of societally relevant science, is shaped by the magnitude and pace of scientific discoveries, as well as the urgency of societal issues wherein science may inform decisions. Perhaps nowhere is the connection between these facets stronger than in the marine and coastal Arctic where environmental change is driving advancements in our understanding of natural and socio-ecological systems while paving the way for a new assortment of arctic stakeholders, who generally lack adequate operational knowledge. As such, the Arctic provides an opportunity to advance the role of science communication into a collaborative process of engagement and co-communication. To date, the communication of arctic change falls within four primary genres, each with particular audiences in mind. The New Arctic communicates an arctic of new stakeholders scampering to take advantage of unprecedented access. The Global Arctic conveys the Arctic's importance to the rest of the world, primarily as a regulator of lower-latitude climate and weather. The Intra-connected Arctic emphasizes the increasing awareness of the interplay between system components, such as between sea ice loss and marine food webs. The Transforming Arctic communicates the region's trajectory relative to the historical Arctic, acknowledging the impacts on indigenous peoples. The broad societal consensus on climate change in the Arctic as compared to other regions in the world underscores the opportunity for co-communication. Seizing this opportunity requires the science community's engagement with stakeholders and indigenous peoples to construct environmental change narratives that are meaningful to climate responses relative to non-ecological priorities (e.g., infrastructure, food availability, employment, or language). 
Co-communication fosters opportunities for new methods of and audiences for communication, the co-production of new interdisciplinary

  8. A Study of the Efficiency of Spatial Indexing Methods Applied to Large Astronomical Databases

    Science.gov (United States)

    Donaldson, Tom; Berriman, G. Bruce; Good, John; Shiao, Bernie

    2018-01-01

    Spatial indexing of astronomical databases generally uses quadrature methods, which partition the sky into cells used to create an index (usually a B-tree) written as a database column. We report the results of a study to compare the performance of two common indexing methods, HTM and HEALPix, on Solaris and Windows database servers installed with a PostgreSQL database, and a Windows Server installed with MS SQL Server. The indexing was applied to the 2MASS All-Sky Catalog and to the Hubble Source Catalog. On each server, the study compared indexing performance by submitting 1 million queries at each index level with random sky positions and random cone search radii, computed on a logarithmic scale between 1 arcsec and 1 degree, and measuring the time to complete the query and write the output. These simulated queries, intended to model realistic use patterns, were run in a uniform way on many combinations of indexing method and indexing level. The query times in all simulations are strongly I/O-bound and are linear with the number of records returned for large numbers of sources. There are, however, considerable differences between simulations, which reveal that hardware I/O throughput is a more important factor in managing the performance of a DBMS than the choice of indexing scheme. The choice of index itself is relatively unimportant: for comparable index levels, the performance is consistent within the scatter of the timings. At small index levels (large cells; e.g. level 4, cell size 3.7 deg), there is large scatter in the timings because of wide variations in the number of sources found in the cells. At larger index levels, performance improves and scatter decreases, but the improvement at level 8 (cell size 14 arcmin) and higher is masked to some extent in the timing scatter caused by the range of query sizes. At very high levels (20; 0.0004 arcsec), the granularity of the cells becomes so high that a large number of extraneous empty cells begin to degrade
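    The kind of cell-based spatial index the study benchmarks can be imitated with a flat RA/Dec grid in place of HTM or HEALPix (a simplification: the real schemes use hierarchical, roughly equal-area cells). This sketch indexes a synthetic catalog by cell and answers a cone search with a coarse cell scan plus an exact angular-distance refinement; the catalog, cell size, and query are all illustrative.

```python
import math
import random
from collections import defaultdict

random.seed(1)
CELL_DEG = 1.0   # flat-grid cell size; HTM/HEALPix use hierarchical cells

def cell_of(ra, dec):
    """Map a position to its (RA, Dec) grid cell - the indexed 'column'."""
    return (int(ra // CELL_DEG), int((dec + 90.0) // CELL_DEG))

def ang_dist_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    c = (math.sin(d1) * math.sin(d2) +
         math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Synthetic catalog, indexed by cell.
catalog = [(random.uniform(0.0, 360.0), random.uniform(-90.0, 90.0))
           for _ in range(20000)]
index = defaultdict(list)
for i, pos in enumerate(catalog):
    index[cell_of(*pos)].append(i)

def cone_search(ra0, dec0, radius):
    """Coarse cell filter, then exact angular-distance refinement."""
    pad = radius / max(math.cos(math.radians(dec0)), 1e-6)  # RA widening
    hits = []
    for rc in range(int((ra0 - pad) // CELL_DEG),
                    int((ra0 + pad) // CELL_DEG) + 1):
        for dc in range(int((dec0 - radius + 90.0) // CELL_DEG),
                        int((dec0 + radius + 90.0) // CELL_DEG) + 1):
            for i in index.get((rc % 360, dc), []):
                if ang_dist_deg(ra0, dec0, *catalog[i]) <= radius:
                    hits.append(i)
    return hits

hits = cone_search(180.0, 0.0, 5.0)
```

    The coarse cell filter is what the database index accelerates; the refinement step explains why query time scales with the number of candidate records, consistent with the I/O-bound behavior the study reports.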

  9. The Arctic as a test case for an assessment of climate impacts on national security.

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Mark A.; Zak, Bernard Daniel; Backus, George A.; Ivey, Mark D.; Boslough, Mark Bruce Elrick

    2008-11-01

    The Arctic region is rapidly changing in a way that will affect the rest of the world. Parts of Alaska, western Canada, and Siberia are currently warming at twice the global rate. This warming trend is accelerating permafrost deterioration, coastal erosion, snow and ice loss, and other changes that are a direct consequence of climate change. Climatologists have long understood that changes in the Arctic would be faster and more intense than elsewhere on the planet, but the degree and speed of the changes were underestimated compared to recent observations. Policy makers have not yet had time to examine the latest evidence or appreciate the nature of the consequences. Thus, the abruptness and severity of an unfolding Arctic climate crisis has not been incorporated into long-range planning. The purpose of this report is to briefly review the physical basis for global climate change and Arctic amplification, summarize the ongoing observations, discuss the potential consequences, explain the need for an objective risk assessment, develop scenarios for future change, review existing modeling capabilities and the need for better regional models, and finally to make recommendations for Sandia's future role in preparing our leaders to deal with impacts of Arctic climate change on national security. Accurate and credible regional-scale climate models are still several years in the future, and those models are essential for estimating climate impacts around the globe. This study demonstrates how a scenario-based method may be used to give insights into climate impacts on a regional scale and possible mitigation. Because of our experience in the Arctic and widespread recognition of the Arctic's importance in the Earth climate system we chose the Arctic as a test case for an assessment of climate impacts on national security. Sandia can make a swift and significant contribution by applying modeling and simulation tools with internal collaborations as well as with

  10. Modern structure of methods and techniques of marketing research, applied by the world and Ukrainian research companies

    Directory of Open Access Journals (Sweden)

    Bezkrovnaya Yulia

    2015-08-01

    Full Text Available The article presents the results of an empirical justification of the structure of methods and techniques of marketing research into consumer decisions, as applied by the world's and Ukrainian research companies.

  11. Application of Visible/near Infrared derivative spectroscopy to Arctic paleoceanography

    International Nuclear Information System (INIS)

    Ortiz, Joseph D

    2011-01-01

    The lack of well-preserved carbonate in much of the Arctic marine environment dictates the need for alternative methods of paleoceanographic reconstruction. The broad variety of physical properties measurements makes them well suited for use in a variety of environments, but they provide unique opportunities when employed in the Arctic. Because Arctic sediment is introduced and reworked by a variety of mechanisms, the signature from multiple processes becomes intermixed with the sediment. Many of these processes operate in other ocean basins, while some function only in Polar Regions. A strategy to address this mixing problem is to employ spectrally-resolved physical properties measurements, or to use multiple methods in conjunction to generate multivariate data sets, which can differentiate concurrent processes. Data of this type are well suited to multivariate analysis techniques such as sample-based or variable-based varimax-rotated principal component analysis (VPCA), methods that decompose the data matrix to infer process from orthogonal functions. The method is applied to cores from the Chukchi Sea to document that visible derivative spectroscopy provides a powerful means of reconstructing sediment provenance. In the Chukchi Sea, diffuse spectral reflectance provides a proxy to monitor variations in Holocene flow through the Bering Strait.

  12. Application of Visible/near Infrared derivative spectroscopy to Arctic paleoceanography

    Science.gov (United States)

    Ortiz, Joseph D.

    2011-05-01

    The lack of well-preserved carbonate in much of the Arctic marine environment dictates the need for alternative methods of paleoceanographic reconstruction. The broad variety of physical properties measurements makes them well suited for use in a variety of environments, but they provide unique opportunities when employed in the Arctic. Because Arctic sediment is introduced and reworked by a variety of mechanisms, the signature from multiple processes becomes intermixed with the sediment. Many of these processes operate in other ocean basins, while some function only in Polar Regions. A strategy to address this mixing problem is to employ spectrally-resolved physical properties measurements, or to use multiple methods in conjunction to generate multivariate data sets, which can differentiate concurrent processes. Data of this type are well suited to multivariate analysis techniques such as sample-based or variable-based varimax-rotated principal component analysis (VPCA), methods that decompose the data matrix to infer process from orthogonal functions. The method is applied to cores from the Chukchi Sea to document that visible derivative spectroscopy provides a powerful means of reconstructing sediment provenance. In the Chukchi Sea, diffuse spectral reflectance provides a proxy to monitor variations in Holocene flow through the Bering Strait.

  13. Application of Visible/near Infrared derivative spectroscopy to Arctic paleoceanography

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz, Joseph D, E-mail: jortiz@kent.edu [Kent State University, Department of Geology, Kent, OH 44242 (United States)]

    2011-05-15

    The lack of well-preserved carbonate in much of the Arctic marine environment dictates the need for alternative methods of paleoceanographic reconstruction. The broad variety of physical properties measurements makes them well suited for use in a variety of environments, but they provide unique opportunities when employed in the Arctic. Because Arctic sediment is introduced and reworked by a variety of mechanisms, the signature from multiple processes becomes intermixed with the sediment. Many of these processes operate in other ocean basins, while some function only in Polar Regions. A strategy to address this mixing problem is to employ spectrally-resolved physical properties measurements, or to use multiple methods in conjunction to generate multivariate data sets, which can differentiate concurrent processes. Data of this type are well suited to multivariate analysis techniques such as sample-based or variable-based varimax-rotated principal component analysis (VPCA), methods that decompose the data matrix to infer process from orthogonal functions. The method is applied to cores from the Chukchi Sea to document that visible derivative spectroscopy provides a powerful means of reconstructing sediment provenance. In the Chukchi Sea, diffuse spectral reflectance provides a proxy to monitor variations in Holocene flow through the Bering Strait.

  14. Changing Arctic ecosystems--research to understand and project changes in marine and terrestrial ecosystems of the Arctic

    Science.gov (United States)

    Geiselman, Joy; DeGange, Anthony R.; Oakley, Karen; Derksen, Dirk; Whalen, Mary

    2012-01-01

    Ecosystems and their wildlife communities are not static; they change and evolve over time due to numerous intrinsic and extrinsic factors. A period of rapid change is occurring in the Arctic for which our current understanding of potential ecosystem and wildlife responses is limited. Changes to the physical environment include warming temperatures, diminishing sea ice, increasing coastal erosion, deteriorating permafrost, and changing water regimes. These changes influence biological communities and the ways in which human communities interact with them. Through the new initiative Changing Arctic Ecosystems (CAE) the U.S. Geological Survey (USGS) strives to (1) understand the potential suite of wildlife population responses to these physical changes to inform key resource management decisions such as those related to the Endangered Species Act, and (2) provide unique insights into how Arctic ecosystems are responding under new stressors. Our studies examine how and why changes in the ice-dominated ecosystems of the Arctic are affecting wildlife and will provide a better foundation for understanding the degree and manner in which wildlife species respond and adapt to rapid environmental change. Changes to Arctic ecosystems will be felt broadly because the Arctic is a production zone for hundreds of species that migrate south for the winter. The CAE initiative includes three major research themes that span Arctic ice-dominated ecosystems and that are structured to identify and understand the linkages between physical processes, ecosystems, and wildlife populations. The USGS is applying knowledge-based modeling structures such as Bayesian Networks to integrate the work.

  15. Analysis of flow boiling heat transfer in narrow annular gaps applying the design of experiments method

    Directory of Open Access Journals (Sweden)

    Gunar Boye

    2015-06-01

    Full Text Available The axial heat transfer coefficient during flow boiling of n-hexane was measured using infrared thermography to determine the axial wall temperature in three geometrically similar annular gaps with different widths (s = 1.5 mm, s = 1 mm, s = 0.5 mm). During the design and evaluation process, the methods of statistical experimental design were applied. The following factors/parameters were varied: the heat flux q̇ = 30–190 kW/m², the mass flux ṁ = 30–700 kg/(m²·s), the vapor quality ẋ = 0.2–0.7, and the inlet subcooling T_U = 20–60 K. The test sections with gap widths of s = 1.5 mm and s = 1 mm had very similar heat transfer characteristics. The heat transfer coefficient increases significantly in the range of subcooled boiling, and after reaching a maximum at the transition to saturated flow boiling, it drops almost monotonically with increasing vapor quality. With a gap width of 0.5 mm, however, the heat transfer coefficient in the range of saturated flow boiling first has a downward trend and then increases at higher vapor qualities. For each test section, two correlations between the heat transfer coefficient and the operating parameters have been created. The comparison also shows a clear trend of an increasing heat transfer coefficient with increasing heat flux for the test sections with s = 1.5 mm and s = 1.0 mm, but with increasing vapor quality, this trend is reversed for the 0.5 mm test section.

  16. A new method of identifying target groups for pronatalist policy applied to Australia

    Science.gov (United States)

    Chen, Mengni; Lloyd, Chris J.

    2018-01-01

    A country’s total fertility rate (TFR) depends on many factors. Attributing changes in TFR to changes in policy is difficult, as they could easily be correlated with changes in the unmeasured drivers of TFR. A case in point is Australia, where both pronatalist effort and TFR increased in lock step from 2001 to 2008 and then decreased. The global financial crisis or other unobserved confounders might explain both the reducing TFR and pronatalist incentives after 2008. Therefore, it is difficult to estimate causal effects of policy using econometric techniques. The aim of this study is to instead look at the structure of the population to identify which subgroups most influence TFR. Specifically, we build a stochastic model relating TFR to the fertility rates of various subgroups and calculate the elasticity of TFR with respect to each rate. For each subgroup, the ratio of its elasticity to its group size is used to evaluate the subgroup’s potential cost effectiveness as a pronatalist target. In addition, we assess the historical stability of group fertility rates, which indicates their propensity to change. Groups with a high effectiveness ratio and also a high propensity to change are natural policy targets. We applied this new method to Australian data on fertility rates broken down by parity, age and marital status. The results show that targeting parity 3+ is more cost-effective than lower parities. This study contributes to the literature on pronatalist policies by investigating the targeting of policies, and generates important implications for formulating cost-effective policies. PMID:29425220
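The elasticity-to-size ratio described above can be illustrated with a toy calculation. Under the simplifying assumption that TFR is a size-weighted sum of subgroup rates, the elasticity of TFR with respect to a subgroup rate follows in one line; the group names and all numbers below are hypothetical, not the paper's Australian data.

```python
# Hypothetical subgroup shares and fertility-style rates (illustrative only)
shares = {"parity0": 0.40, "parity1": 0.30, "parity2": 0.20, "parity3+": 0.10}
rates  = {"parity0": 0.8,  "parity1": 0.6,  "parity2": 0.3,  "parity3+": 0.2}

# Assumed aggregation: TFR = sum of share-weighted subgroup rates
tfr = sum(shares[g] * rates[g] for g in shares)

# Elasticity of TFR w.r.t. each rate f_g:
#   e_g = (dTFR/df_g) * (f_g / TFR) = shares[g] * rates[g] / TFR
elasticity = {g: shares[g] * rates[g] / tfr for g in shares}

# Cost-effectiveness proxy from the abstract: elasticity per unit group size
ratio = {g: elasticity[g] / shares[g] for g in shares}
```

With a linear aggregation the elasticities sum to one, and the elasticity-to-size ratio reduces to f_g/TFR; the paper's actual stochastic model and stability measure are richer than this sketch.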

  17. Globalising the Arctic Climate:

    DEFF Research Database (Denmark)

    Corry, Olaf

    2017-01-01

    targets of political operations and contestations—are not simple ‘issues’ or ‘problems’ given to actors to deal with. Governance-objects emerge and are constructed through science, technology and politics, and rather than slotting neatly into existing structures, they have their own structuring effects...... on world politics. The emergence of the Arctic climate as a potential target of governance provides a case in point. The Arctic climate is becoming globalised, pushing it up the political agenda but drawing it away from its local and regional context....

  18. Continental Margins of the Arctic Ocean: Implications for Law of the Sea

    Science.gov (United States)

    Mosher, David

    2016-04-01

    A coastal State must define the outer edge of its continental margin in order to be entitled to extend the outer limits of its continental shelf beyond 200 M, according to article 76 of the UN Convention on the Law of the Sea. The article prescribes the methods with which to make this definition and includes such metrics as water depth, seafloor gradient and thickness of sediment. Note the distinction between the "outer edge of the continental margin", which is the extent of the margin after application of the formula of article 76, and the "outer limit of the continental shelf", which is the limit after constraint criteria of article 76 are applied. For a relatively small ocean basin, the Arctic Ocean reveals a plethora of continental margin types reflecting both its complex tectonic origins and its diverse sedimentation history. These factors play important roles in determining the extended continental shelves of Arctic coastal States. This study highlights the critical factors that might determine the outer edge of continental margins in the Arctic Ocean as prescribed by article 76. Norway is the only Arctic coastal State that has had recommendations rendered by the Commission on the Limits of the Continental Shelf (CLCS). Russia and Denmark (Greenland) have made submissions to the CLCS to support their extended continental shelves in the Arctic and are awaiting recommendations. Canada has yet to make its submission and the US has not yet ratified the Convention. The various criteria that each coastal State has utilized or potentially can utilize to determine the outer edge of the continental margin are considered. Important criteria in the Arctic include, 1) morphological continuity of undersea features, such as the various ridges and spurs, with the landmass, 2) the tectonic origins and geologic affinities with the adjacent land masses of the margins and various ridges, 3) sedimentary processes, particularly along continental slopes, and 4) thickness and

  19. Human-induced Arctic moistening.

    Science.gov (United States)

    Min, Seung-Ki; Zhang, Xuebin; Zwiers, Francis

    2008-04-25

    The Arctic and northern subpolar regions are critical for climate change. Ice-albedo feedback amplifies warming in the Arctic, and fluctuations of regional fresh water inflow to the Arctic Ocean modulate the deep ocean circulation and thus exert a strong global influence. By comparing observations to simulations from 22 coupled climate models, we find influence from anthropogenic greenhouse gases and sulfate aerosols in the space-time pattern of precipitation change over high-latitude land areas north of 55 degrees N during the second half of the 20th century. The human-induced Arctic moistening is consistent with observed increases in Arctic river discharge and freshening of Arctic water masses. This result provides new evidence that human activity has contributed to Arctic hydrological change.

  20. Arctic Terrestrial Biodiversity Monitoring Plan

    DEFF Research Database (Denmark)

    Christensen, Tom; Payne, J.; Doyle, M.

    The Conservation of Arctic Flora and Fauna (CAFF), the biodiversity working group of the Arctic Council, established the Circumpolar Biodiversity Monitoring Program (CBMP) to address the need for coordinated and standardized monitoring of Arctic environments. The CBMP includes an international...... on developing and implementing long-term plans for monitoring the integrity of Arctic biomes: terrestrial, marine, freshwater, and coastal (under development) environments. The CBMP Terrestrial Expert Monitoring Group (CBMP-TEMG) has developed the Arctic Terrestrial Biodiversity Monitoring Plan (CBMP......-Terrestrial Plan/the Plan) as the framework for coordinated, long-term Arctic terrestrial biodiversity monitoring. The goal of the CBMP-Terrestrial Plan is to improve the collective ability of Arctic traditional knowledge (TK) holders, northern communities, and scientists to detect, understand and report on long...

  1. Non-destructive scanning for applied stress by the continuous magnetic Barkhausen noise method

    Science.gov (United States)

    Franco Grijalba, Freddy A.; Padovese, L. R.

    2018-01-01

    This paper reports the use of a non-destructive continuous magnetic Barkhausen noise technique to detect applied stress on steel surfaces. The stress profile generated in a sample of 1070 steel subjected to a three-point bending test is analyzed. The influence of parameters such as pickup coil type, scanner speed, applied magnetic field and the frequency band analyzed on the effectiveness of the technique is investigated. A moving smoothing window based on a second-order statistical moment is used to analyze the time signal. The findings show that the technique can be used to detect applied stress profiles.
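The moving smoothing window based on a second-order statistical moment can be read as a sliding-window RMS envelope of the time signal. The sketch below applies one to a synthetic burst signal; the signal model and window length are illustrative assumptions, not the paper's measured Barkhausen noise.

```python
import numpy as np

def moving_rms(signal, window):
    """Sliding-window second-order moment (RMS) of a 1-D signal."""
    sq = np.asarray(signal, dtype=float) ** 2
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(sq, kernel, mode="same"))

# Synthetic Barkhausen-like record: noise whose amplitude rises in the
# middle of the scan, mimicking a zone of higher applied stress.
rng = np.random.default_rng(1)
n = 2000
idx = np.arange(n)
amplitude = np.where((idx > 800) & (idx < 1200), 3.0, 1.0)
signal = amplitude * rng.normal(size=n)

# The envelope tracks the local noise power along the scan
envelope = moving_rms(signal, window=101)
```

Scanning such an envelope along the sample surface is what turns the raw noise record into a stress profile: regions of elevated RMS mark regions of different magnetoelastic response.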

  2. Krylov Subspace and Multigrid Methods Applied to the Incompressible Navier-Stokes Equations

    Science.gov (United States)

    Vuik, C.; Wesseling, P.; Zeng, S.

    1996-01-01

    We consider numerical solution methods for the incompressible Navier-Stokes equations discretized by a finite volume method on staggered grids in general coordinates. We use Krylov subspace and multigrid methods as well as their combinations. Numerical experiments are carried out on a scalar and a vector computer. Robustness and efficiency of these methods are studied. It appears that good methods result from suitable combinations of GCR and multigrid methods.
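A minimal, unpreconditioned sketch of the GCR iteration mentioned above can clarify why it suits nonsymmetric systems like discretized Navier-Stokes operators. The paper combines GCR with multigrid preconditioning; that combination is omitted here, and the small test matrix is an illustrative stand-in.

```python
import numpy as np

def gcr(A, b, tol=1e-10, max_iter=None):
    """Generalized Conjugate Residual method: minimizes the residual norm
    over the Krylov subspace; applicable to nonsymmetric systems."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    ps, aps = [], []                      # search directions p_j and A @ p_j
    for _ in range(max_iter or len(b)):
        if np.linalg.norm(r) < tol:
            break
        p, ap = r.copy(), A @ r
        for pj, apj in zip(ps, aps):      # orthogonalize A*p against previous A*p_j
            beta = ap @ apj
            p -= beta * pj
            ap -= beta * apj
        nrm = np.linalg.norm(ap)
        p, ap = p / nrm, ap / nrm
        ps.append(p)
        aps.append(ap)
        alpha = r @ ap                    # step minimizing ||r - alpha * A p||
        x += alpha * p
        r -= alpha * ap
    return x

# Small nonsymmetric system (convection terms break symmetry in practice)
A = np.array([[4.0, 1.0, 0.0],
              [0.5, 3.0, 1.0],
              [0.0, 0.5, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = gcr(A, b)
```

In exact arithmetic GCR reaches the solution of an n-by-n system in at most n steps; the cost of storing all previous directions is one reason restarted or truncated variants, and multigrid-preconditioned combinations as in the paper, are used for large problems.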

  3. Building resilience and adaptation to manage Arctic change.

    Science.gov (United States)

    Chapin, F Stuart; Hoel, Michael; Carpenter, Steven R; Lubchenco, Jane; Walker, Brian; Callaghan, Terry V; Folke, Carl; Levin, Simon A; Mäler, Karl-Göran; Nilsson, Christer; Barrett, Scott; Berkes, Fikret; Crépin, Anne-Sophie; Danell, Kjell; Rosswall, Thomas; Starrett, David; Xepapadeas, Anastasios; Zimov, Sergey A

    2006-06-01

    Unprecedented global changes caused by human actions challenge society's ability to sustain the desirable features of our planet. This requires proactive management of change to foster both resilience (sustaining those attributes that are important to society in the face of change) and adaptation (developing new socioecological configurations that function effectively under new conditions). The Arctic may be one of the last remaining opportunities to plan for change in a spatially extensive region where many of the ancestral ecological and social processes and feedbacks are still intact. If the feasibility of this strategy can be demonstrated in the Arctic, our improved understanding of the dynamics of change can be applied to regions with greater human modification. Conditions may now be ideal to implement policies to manage Arctic change because recent studies provide the essential scientific understanding, appropriate international institutions are in place, and Arctic nations have the wealth to institute necessary changes, if they choose to do so.

  4. Arctic Tides from GPS on sea-ice

    DEFF Research Database (Denmark)

    Kildegaard Rose, Stine; Skourup, Henriette; Forsberg, René

    2013-01-01

    The presence of sea-ice in the Arctic Ocean plays a significant role in the Arctic climate. Sea-ice dampens the ocean tide amplitude, with the result that global tidal models perform less accurately in the polar regions. This paper presents a kinematic processing of global positioning system (GPS......) placed on sea-ice, at six different sites north of Greenland for the preliminary study of sea surface height (SSH), and tidal analysis to improve tide models in the Central Arctic. The GPS measurements are compared with the Arctic tide model AOTIM-5, which assimilates tide-gauges and altimetry data....... The results show coherence between the GPS buoy measurements and the tide model. Furthermore, we have proved that the reference ellipsoid of WGS84 can be interpolated to the tidal defined zero level by applying geophysical corrections to the GPS data....

  5. Research on applying neutron transport Monte Carlo method in materials with continuously varying cross sections

    International Nuclear Information System (INIS)

    Li, Zeguang; Wang, Kan; Zhang, Xisi

    2011-01-01

    In the traditional Monte Carlo method, the material properties in a certain cell are assumed to be constant, but this is no longer applicable in continuously varying materials, where the material's nuclear cross-sections vary over the particle's flight path. So, three Monte Carlo methods, namely the substepping method, the delta-tracking method and the direct sampling method, are discussed in this paper to solve problems with continuously varying materials. After the verification and comparison of these methods in 1-D models, the basic specialties of these methods are discussed, and the delta-tracking method is then chosen as the main method for solving problems with continuously varying materials, especially 3-D problems. To overcome the drawbacks of the original delta-tracking method, an improved delta-tracking method is proposed in this paper to make the method more efficient in solving problems where the material's cross-sections vary sharply over the particle's flight path. To use this method in practical calculations, we implemented the improved delta-tracking method in the 3-D Monte Carlo code RMC developed by the Department of Engineering Physics, Tsinghua University. Two problems based on the Godiva system were constructed and calculated using both the improved delta-tracking method and the substepping method, and the results demonstrated the effectiveness of the improved delta-tracking method. (author)
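The basic delta-tracking (Woodcock) idea, sampling flight distances against a constant majorant cross-section and rejecting "virtual" collisions, can be sketched in a few lines. The 1-D slab and the linear cross-section profile below are illustrative assumptions, not the paper's RMC test problems or its improved variant.

```python
import math
import random

def delta_track(sigma, sigma_max, length, rng):
    """Sample a collision site through a continuously varying total
    cross-section sigma(x) using Woodcock delta-tracking.
    Returns the collision position, or None if the particle escapes."""
    x = 0.0
    while True:
        # Tentative flight distance sampled with the majorant sigma_max
        x += -math.log(1.0 - rng.random()) / sigma_max
        if x >= length:
            return None                          # escaped the slab
        if rng.random() < sigma(x) / sigma_max:  # accept a real collision
            return x
        # otherwise: virtual collision, keep flying without changing direction

# 1-D slab, cross-section increasing linearly across it (illustrative)
sigma = lambda x: 0.5 + 1.5 * x       # per cm, for x in [0, 1]; max is 2.0
rng = random.Random(42)
samples = [delta_track(sigma, sigma_max=2.0, length=1.0, rng=rng)
           for _ in range(20000)]
collided = [s for s in samples if s is not None]
```

The escape fraction should approach exp(-∫₀¹ σ(x) dx) = exp(-1.25); the rejection step is what lets the sampler handle a continuously varying σ(x) without subdividing the geometry, at the cost of efficiency when σ is far below the majorant, which is the drawback the paper's improved method targets.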

  6. The Density-Enthalpy Method Applied to Model Two–phase Darcy Flow

    NARCIS (Netherlands)

    Ibrahim, D.

    2012-01-01

    In this thesis, we use a more recent method to numerically solve two-phase fluid flow problems. The method was developed at TNO and is presented by Arendsen et al. in [1] for spatially homogeneous systems. We will refer to this method as the density-enthalpy method (DEM) because the

  7. Arctic Islands LNG

    Energy Technology Data Exchange (ETDEWEB)

    Hindle, W.

    1977-01-01

    Trans-Canada Pipe Lines Ltd. made a feasibility study of transporting LNG from the High Arctic Islands to a St. Lawrence River terminal by means of a specially designed and built 125,000 cu m or 165,000 cu m icebreaking LNG tanker. Studies were made of the climatology and of ice conditions, using available statistical data as well as direct surveys in 1974, 1975, and 1976. For on-schedule and unimpeded (unescorted) passage of the LNG carriers at all times of the year, special navigation and communications systems can be made available. Available icebreaking experience, charting for the proposed tanker routes, and tide tables for the Canadian Arctic were surveyed. The preliminary design of a proposed Arctic LNG icebreaker tanker, including containment system, reliquefaction of boiloff, speed, power, number of trips for 345 day/yr operation, and liquefaction and regasification facilities, is discussed. The use of a minimum of three Arctic Class 10 ships would enable delivery of volumes of natural gas averaging 11.3 million cu m/day over a period of a year to Canadian markets. The concept appears to be technically feasible with existing basic technology.

  8. Arctic Craft Demonstration Report

    Science.gov (United States)

    2012-11-01

    Recoverable figure captions: Figure 15, Crowley ACV on a beach in Gwydyr Bay; Figure 16, Quonset hut structure allows year-round operations for ACV; Figure 17, Dalton... The team met with Crowley Maritime Services, which operates the Arctic Hawk Air Cushion Vehicle (ACV). This vessel is used to provide logistical support to

  9. Arctic offshore engineering

    National Research Council Canada - National Science Library

    Palmer, Andrew; Croasdale, Ken

    2013-01-01

    ... so safely, economically and with minimal risk to the environment. Singapore may at first seem a surprising place to be writing such a book, but in fact we have a significant and growing interest in the Arctic, from several directions, among them shipping and petroleum production. At Keppel we are already active in more than one of those fields, and have a ...

  10. Arctic avalanche dynamics

    Science.gov (United States)

    Prokop, Alexander; Eiken, Mari; Ganaus, Kerstin; Rubensdotter, Lena

    2017-04-01

    Since the avalanche disaster of December 19th, 2015 in Longyearbyen (Svalbard), in which two people were killed within the settlement, the dynamics of avalanches in arctic regions have been of increasing interest for hazard mapping in such areas. To investigate the flow behavior of arctic avalanches we focused on avalanches that occurred in Central Svalbard. In this region, historic avalanche events can be analyzed through their deposition behavior, visible on geomorphological maps of the avalanche run-out areas. To estimate the snow mass that may have been involved in the avalanches, we measured the snow volume balance of recent avalanches (winter 2015/16) via terrestrial laser scanning. In this way we gained reasonable data to set calibration and input parameters for dynamic avalanche modeling. Using state-of-the-art dynamic avalanche models allowed us to back-calculate how much snow was involved in the historic avalanches that we identified on the geomorphological maps and what the return periods of those events are. In our presentation we first explain our methodology; we then discuss the behavior of the arctic avalanches measured via terrestrial laser scanning and how the dynamic avalanche models performed for those case examples. Finally we conclude how our results can improve avalanche hazard mapping for arctic regions.

  11. Vulnerability and adaptation to climate change in the arctic (VACCA): Implementing recommendations

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    This report provides recommendations for how Norway's government could move forward with the results from the Arctic Council supported VACCA project, suggesting how concrete activities may be implemented and applied to policy and practice. Based on the results of interviews with Arctic peoples and people involved in Arctic work, combined with desk studies of relevant literature, four Arctic contexts are defined within the dividing lines coastal/non-coastal and urban/non-urban. This report provides up to five concrete recommendations within each context, recommendations for cross-contextual action, and specific projects for further research and action.(auth)

  12. The Arctic Circle

    Science.gov (United States)

    McDonald, Siobhan

    2016-04-01

    My name is Siobhan McDonald. I am a visual artist living and working in Dublin. My studio is based in The School of Science at University College Dublin where I was Artist in Residence 2013-2015. A fascination with time and the changeable nature of landmass has led to ongoing conversations with scientists and research institutions across the interweaving disciplines of botany, biology and geology. I am developing a body of work following a recent research trip to the North Pole where I studied the disappearing landscape of the Arctic. Prompted by my experience of the Arctic shelf receding, this new work addresses issues of the instability of the earth's materiality. The work is grounded in an investigation of material processes, exploring the dynamic forces that transform matter and energy. This project combines art and science in a fascinating exploration of one of the Earth's last relatively untouched wilderness areas - the High Arctic to bring audiences on journeys to both real and artistically re-imagined Arctic spaces. CRYSTALLINE'S pivotal process is collaboration: with The European Space Agency; curator Helen Carey; palaeontologist Prof. Jenny McElwain, UCD; and with composer Irene Buckley. CRYSTALLINE explores our desire to make corporeal contact with geological phenomena in Polar Regions. From January 2016, in my collaboration with Jenny McElwain, I will focus on the study of plants and atmospheres from the Arctic regions as far back as 400 million years ago, to explore the essential 'nature' that, invisible to the eye, acts as imaginary portholes into other times. This work will be informed by my arctic tracings of sounds and images recorded in the glaciers of this disappearing frozen landscape. In doing so, the urgencies around the tipping of natural balances in this fragile region will be revealed. The final work will emerge from my forthcoming residency at the ESA in spring 2016. Here I will conduct a series of workshops in ESA Madrid to work with

  13. Tsunami in the Arctic

    Science.gov (United States)

    Kulikov, Evgueni; Medvedev, Igor; Ivaschenko, Alexey

    2017-04-01

    The severity of the climate and the sparsely populated coastal regions are the reasons why the Russian part of the Arctic Ocean belongs to the least studied areas of the World Ocean. At the same time, intensive economic development of the Arctic region, specifically the oil and gas industry, requires studies of potentially threatening natural disasters that can cause environmental and technical damage to the coastal and maritime infrastructure of the fuel and energy complex (FEC). Although the seismic activity in the Arctic can be characterized as moderate, we cannot exclude the occurrence of destructive tsunami waves directly threatening the FEC. According to IAEA requirements, the construction of nuclear power plants must take into account the impact of all natural disasters with a frequency of more than 10⁻⁵ per year. The planned deployment of Russian floating nuclear power plants in the polar regions certainly requires an adequate assessment of the tsunami hazard in the areas of their location. The proposed concept of tsunami hazard assessment is based on the numerical simulation of different scenarios in which hypothetical seismic sources and the tsunamis they generate are reproduced. The analysis of available geological, geophysical and seismological data for the period of instrumental observations (1918-2015) shows that the highest earthquake potential within the Arctic region is associated with the underwater Mid-Arctic zone of ocean bottom spreading (the interplate boundary between the Eurasian and North American plates) as well as with some areas of the continental slope within the marginal seas. For the Arctic coast of Russia and the adjacent shelf area, the greatest tsunami danger of seismotectonic origin comes from earthquakes occurring in the underwater Gakkel Ridge zone, the north-eastern part of the Mid-Arctic zone.
In this area, one may expect earthquakes of magnitude Mw ˜ 6.5-7.0 at a rate of 10⁻² per year and of magnitude Mw ˜ 7.5 at a

  14. Non-regularized inversion method from light scattering applied to ferrofluid magnetization curves for magnetic size distribution analysis

    International Nuclear Information System (INIS)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.

    2014-01-01

    A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
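The inversion described above can be sketched by expanding the magnetization curve on a grid of Langevin functions and solving for non-negative weights. The moment grid, field values and the accelerated projected-gradient solver below are illustrative stand-ins: the paper's program is MINORIM, and Lawson-Hanson NNLS (e.g. `scipy.optimize.nnls`) is the usual solver for the non-negativity constraint.

```python
import numpy as np

def langevin(a):
    """Langevin function L(a) = coth(a) - 1/a, safe near a = 0."""
    a = np.asarray(a, dtype=float)
    small = np.abs(a) < 1e-6
    a_safe = np.where(small, 1.0, a)
    return np.where(small, a / 3.0, 1.0 / np.tanh(a_safe) - 1.0 / a_safe)

def nnls_fista(A, y, n_iter=5000):
    """Non-negative least squares via accelerated projected gradient,
    a simple stand-in for a Lawson-Hanson NNLS routine."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = np.maximum(0.0, z - step * (A.T @ (A @ z - y)))
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Dimensionless fields and a small grid of candidate dipole moments
H = np.linspace(0.05, 10.0, 40)
m = np.array([0.5, 1.0, 2.0, 4.0])
A = m * langevin(np.outer(H, m))        # column j: contribution of moment m_j

# Simulated bimodal ferrofluid: number densities on the moment grid
x_true = np.array([0.0, 1.0, 0.0, 0.5])
y = A @ x_true

x_rec = nnls_fista(A, y)                # recovered distribution, x_rec >= 0
```

The key point matches the abstract: nothing in the solver assumes a unimodal or parameterized distribution; non-negativity alone regularizes the otherwise ill-posed expansion.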

  15. Material point methods applied to one-dimensional shock waves and dual domain material point method with sub-points

    Science.gov (United States)

    Dhakal, Tilak R.; Zhang, Duan Z.

    2016-11-01

    Using a simple one-dimensional shock problem as an example, the present paper investigates numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the dual domain material point (DDMP) method. For a weak isothermal shock of ideal gas, the MPM cannot be used with accuracy. With a small number of particles per cell, GIMP and CPDI produce reasonable results. However, as the number of particles increases the methods fail to converge and produce pressure spikes. The DDMP method behaves in an opposite way. With a small number of particles per cell, DDMP results are unsatisfactory. As the number of particles increases, the DDMP results converge to correct solutions, but the large number of particles needed for convergence makes the method very expensive to use in these types of shock wave problems in two- or three-dimensional cases. The cause for producing the unsatisfactory DDMP results is identified. A simple improvement to the method is introduced by using sub-points. With this improvement, the DDMP method produces high quality numerical solutions with a very small number of particles. Although in the present paper, the numerical examples are one-dimensional, all derivations are for multidimensional problems. With the technique of approximately tracking particle domains of CPDI, the extension of this sub-point method to multidimensional problems is straightforward. This new method preserves the conservation properties of the DDMP method, which conserves mass and momentum exactly and conserves energy to the second order in both spatial and temporal discretizations.

  16. FOUR SQUARE WRITING METHOD APPLIED IN PRODUCT AND PROCESS BASED APPROACHES COMBINATION TO TEACHING WRITING DISCUSSION TEXT

    Directory of Open Access Journals (Sweden)

    Vina Agustiana

    2017-12-01

    Full Text Available Four Square Writing Method is a writing method which helps students organize concepts for writing by using a graphic organizer. This study aims to examine the influence of applying FSWM in a combination of product- and process-based approaches to teaching writing discussion texts on students’ writing skill, the teaching-learning writing process and the students’ attitude toward the implementation of the writing method. The study applies a mixed-methods embedded design. 26 EFL university students of a private university in West Java, Indonesia, are involved in the study. Three kinds of instruments are used, namely tests (pre- and post-test), field notes, and questionnaires. Data taken from the students’ writing tests are analyzed statistically to identify the influence of applying the writing method on students’ writing skill; data taken from field notes are analyzed qualitatively to examine the learning writing activities while the writing method is implemented; and data taken from questionnaires are analyzed with descriptive statistics to explore students’ attitude toward the implementation of the writing method. Regarding the result of the paired t-test, the writing method is effective in improving students’ writing skill, since the two-tailed significance level is less than alpha (0.000 < 0.05). Furthermore, the results taken from the field notes show that each step applied and the graphic organizer used in the writing method lead students to compose discussion texts which meet the demands of the genre. In addition, with regard to the questionnaire results, the students show a highly positive attitude toward the treatment, since the mean score is 4.32.

  17. A global method for calculating plant CSR ecological strategies applied across biomes world-wide

    NARCIS (Netherlands)

    Pierce, S.; Negreiros, D.; Cerabolini, B.E.L.; Kattge, J.; Díaz, S.; Kleyer, M.; Shipley, B.; Wright, S.J.; Soudzilovskaia, N.A.; Onipchenko, V.G.; van Bodegom, P.M.; Frenette-Dussault, C.; Weiher, E.; Pinho, B.X.; Cornelissen, J.H.C.; Grime, J.P.; Thompson, K.; Hunt, R.; Wilson, P.J.; Buffa, G.; Nyakunga, O.C.; Reich, P.B.; Caccianiga, M.; Mangili, F.; Ceriani, R.M.; Luzzaro, A.; Brusa, G.; Siefert, A.; Barbosa, N.P.U.; Chapin III, F.S.; Cornwell, W.K.; Fang, Jingyun; Wilson Fernandez, G.; Garnier, E.; Le Stradic, S.; Peñuelas, J.; Melo, F.P.L.; Slaviero, A.; Tabarrelli, M.; Tampucci, D.

    2017-01-01

    Competitor, stress-tolerator, ruderal (CSR) theory is a prominent plant functional strategy scheme previously applied to local floras. Globally, the wide geographic and phylogenetic coverage of available values of leaf area (LA), leaf dry matter content (LDMC) and specific leaf area (SLA)

  18. Structure analysis of interstellar clouds - II. Applying the Delta-variance method to interstellar turbulence

    NARCIS (Netherlands)

    Ossenkopf, V.; Krips, M.; Stutzki, J.

    Context. The Delta-variance analysis is an efficient tool for measuring the structural scaling behaviour of interstellar turbulence in astronomical maps. It has been applied both to simulations of interstellar turbulence and to observed molecular cloud maps. In Paper I we proposed essential

  19. Evaluation of Two Fitting Methods Applied for Thin-Layer Drying of Cape Gooseberry Fruits

    Directory of Open Access Journals (Sweden)

    Erkan Karacabey

    Full Text Available ABSTRACT Drying data of cape gooseberry were used to compare two fitting methods, namely the 2-step and 1-step methods. Literature data were also used to confirm the results. To demonstrate the applicability of these methods, two primary models (Page, two-term exponential) were selected. A linear equation was used as the secondary model. As is well known from previous modelling studies on drying, the 2-step method requires at least two regressions: one for the primary model and one for the secondary model (if there is only one environmental condition, such as temperature). On the other hand, one regression is enough for the 1-step method. Although previous studies on kinetic modelling of the drying of foods were based on the 2-step method, this study indicated that the 1-step method may also be a good alternative, with some advantages such as producing an informative figure and reducing calculation time.
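The 1-step idea can be illustrated with the Page primary model, MR = exp(-k tⁿ). Under the assumption, made here for illustration only, that the secondary model puts ln k linear in temperature, the entire problem collapses into one linear regression that estimates the primary and secondary parameters simultaneously; the drying data below are synthetic, not the cape gooseberry measurements.

```python
import numpy as np

# Synthetic thin-layer drying data from the Page model MR = exp(-k * t**n),
# with ln k = alpha + beta * T (illustrative parameter values)
rng = np.random.default_rng(2)
temps = np.array([50.0, 60.0, 70.0])        # drying temperatures, deg C
t = np.linspace(0.5, 8.0, 16)               # drying time, hours
n_true, alpha, beta = 1.1, -4.0, 0.05

rows, y = [], []
for T in temps:
    k = np.exp(alpha + beta * T)
    mr = np.exp(-k * t**n_true) * np.exp(rng.normal(0.0, 0.01, t.size))
    # Linearization: ln(-ln MR) = ln k + n ln t = alpha + beta*T + n ln t
    rows.append(np.column_stack([np.ones_like(t), np.full_like(t, T), np.log(t)]))
    y.append(np.log(-np.log(mr)))

X = np.vstack(rows)
yv = np.concatenate(y)

# 1-step method: a single regression over all temperatures at once
coef, *_ = np.linalg.lstsq(X, yv, rcond=None)
alpha_hat, beta_hat, n_hat = coef
```

A 2-step fit would instead regress ln(-ln MR) on ln t separately at each temperature and then regress the resulting ln k estimates on T; the 1-step version pools all the data, which is the source of the efficiency advantage the abstract mentions.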

  20. Geochronology and geochemistry by the nuclear tracks method: some examples of use in applied geology

    International Nuclear Information System (INIS)

    Poupeau, G.; Soliani Junior, E.

    1988-01-01

    This article discusses some applications of the nuclear tracks method in geochronology, geochemistry and geophysics. In geochronology, after a brief presentation of the principles of fission-track dating and of the kinds of geological events measurable by this method, some applications in metallogeny and in petroleum geology are shown. In geochemistry, uses of the fission-track method are related to mining prospecting and uranium prospecting. In geophysics, an important application is earthquake prediction, through the continuous monitoring of radon (Rn-222) emanations. (author) [pt]

  1. The development of a curved beam element model applied to finite elements method

    International Nuclear Information System (INIS)

    Bento Filho, A.

    1980-01-01

    A procedure for evaluating the stiffness matrix of a thick curved beam element is developed by means of the minimum potential energy principle applied to finite elements. The displacement field is prescribed through polynomial expansions, and the interpolation model is determined by comparing results obtained from a sample of different expansions. As a limiting case of the curved beam, three cases of straight beams with different dimensional ratios are analysed using the proposed approach. Finally, an interpolation model is proposed and applied to a curved beam of large curvature. Displacements and internal stresses are determined and the results are compared with those found in the literature. (Author) [pt
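As a point of reference for the straight-beam limiting case mentioned in the abstract, the sketch below builds the classical 2-node Euler-Bernoulli bending stiffness matrix (a textbook result, not the curved-beam element of the paper) and checks two properties that follow from a minimum potential energy derivation: symmetry and zero elastic force under a rigid-body translation.

```python
# Classical 4x4 Euler-Bernoulli bending stiffness matrix for a 2-node
# straight beam element with DOFs (w1, theta1, w2, theta2).
def beam_stiffness(EI, L):
    c = EI / L ** 3
    return [
        [ 12 * c,       6 * c * L,    -12 * c,       6 * c * L],
        [  6 * c * L,   4 * c * L**2,  -6 * c * L,   2 * c * L**2],
        [-12 * c,      -6 * c * L,     12 * c,      -6 * c * L],
        [  6 * c * L,   2 * c * L**2,  -6 * c * L,   4 * c * L**2],
    ]

def matvec(K, v):
    return [sum(kij * vj for kij, vj in zip(row, v)) for row in K]

K = beam_stiffness(EI=2.0e6, L=1.5)  # illustrative values

# symmetry: the matrix comes from a quadratic energy functional
assert all(abs(K[i][j] - K[j][i]) < 1e-9 for i in range(4) for j in range(4))

# a rigid-body translation (w1 = w2 = 1, no rotations) stores no energy,
# so it must produce zero nodal forces
forces = matvec(K, [1.0, 0.0, 1.0, 0.0])
print("forces under rigid translation:", forces)
```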

  2. Applying formal method to design of nuclear power plant embedded protection system

    International Nuclear Information System (INIS)

    Kim, Jin Hyun; Kim, Il Gon; Sung, Chang Hoon; Choi, Jin Young; Lee, Na Young

    2001-01-01

    A nuclear power embedded protection system is a typical safety-critical system: it detects failures and shuts down the operation of the nuclear reactor. Because failure of such a system is so dangerous, safety and reliability are absolute requirements, and verification and validation must therefore be carried out completely from the design stage onward. Various V and V methods have been provided for embedded system development, and design using formal methods in particular is being studied in other advanced countries. In this paper, we introduce design methods for nuclear power embedded protection systems using various formal methods, in various respects, following the nuclear power plant software development guideline

  3. Methodological and methodical aspects of studying the social well-being of the population of the Arctic zone of the Russian Federation in the context of its value orientation

    Directory of Open Access Journals (Sweden)

    Anton M. Maximov

    2017-12-01

    Full Text Available The article considers methodological problems in studying the social well-being of a population in relation to its hierarchies of values and attitudes. Social well-being is interpreted as an integral indicator with two aspects. First, social well-being represents the objective parameters of the quality of life related to the state of the socio-economic system of a society, the level of infrastructure development, social security and the extent of political rights. Second, it includes an evaluation of subjective well-being, including overall satisfaction with life and social optimism. The article gives a general description of the main international and Russian methods for measuring the quality of life. The necessity of modifying existing methods is shown; this is proposed to be done by incorporating indicators that represent the specifics of living conditions in the Arctic, while preserving universal tools for measuring social well-being. International methods for measuring the quality of life in the Arctic are characterized as an example of such modification. The article argues that the study of social well-being should be carried out in the context of value orientation studies, because the latter are an important part of individuals' interpretation of the socio-economic, political and legal situation. It is proposed to use additional variables affecting the state of social well-being, such as personal motivational and value characteristics (dominant terminal and instrumental values) and culturally determined value-behavioral imperatives common in a society.

  4. A robust moving mesh finite volume method applied to 1D hyperbolic conservation laws from magnetohydrodynamics

    NARCIS (Netherlands)

    Dam, A. van; Zegeling, P.A.

    2006-01-01

    In this paper we describe a one-dimensional adaptive moving mesh method and its application to hyperbolic conservation laws from magnetohydrodynamics (MHD). The method is robust, because it employs automatic control of mesh adaptation when a new model is considered, without manually-set
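The equidistribution idea behind adaptive moving meshes can be illustrated in a few lines. The sketch below is a generic 1D example under assumed data (a steep tanh front standing in for an MHD shock), not the authors' method: mesh points are placed so that each cell carries an equal share of an arc-length monitor function, which automatically concentrates nodes where the solution is steep.

```python
import math

# 1D equidistribution sketch: place N+1 mesh points so the integral of a
# monitor function m(x) = sqrt(1 + u'(x)^2) is equal on every cell. Here
# u is a steep tanh profile with a front at x = 0.5 (illustrative only).
def uprime(x, steep=50.0):
    return steep / math.cosh(steep * (x - 0.5)) ** 2  # d/dx tanh(steep*(x-0.5))

def monitor(x):
    return math.sqrt(1.0 + uprime(x) ** 2)

def equidistribute(n_cells=20, n_quad=2000):
    # cumulative integral of the monitor on a fine background grid (trapezoid)
    xs = [i / n_quad for i in range(n_quad + 1)]
    cum = [0.0]
    for i in range(n_quad):
        cum.append(cum[-1] + 0.5 * (monitor(xs[i]) + monitor(xs[i + 1])) / n_quad)
    total = cum[-1]
    # invert: find x where the cumulative integral equals j/n_cells * total
    mesh, j = [0.0], 1
    for i in range(1, n_quad + 1):
        while j < n_cells and cum[i] >= j * total / n_cells:
            f = (j * total / n_cells - cum[i - 1]) / (cum[i] - cum[i - 1])
            mesh.append(xs[i - 1] + f / n_quad)  # linear interpolation
            j += 1
    mesh.append(1.0)
    return mesh

mesh = equidistribute()
widths = [b - a for a, b in zip(mesh, mesh[1:])]
print("smallest cell:", min(widths), " largest cell:", max(widths))
```

The cells near the front come out much smaller than those in the smooth regions, which is the intended adaptation.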

  5. Statistical methods applied to gamma-ray spectroscopy algorithms in nuclear security missions.

    Science.gov (United States)

    Fagan, Deborah K; Robinson, Sean M; Runkle, Robert C

    2012-10-01

    Gamma-ray spectroscopy is a critical research and development priority for a range of nuclear security missions, specifically the interdiction of special nuclear material through the detection and identification of gamma-ray sources. We categorize existing methods by the statistical methods on which they rely and identify methods that have yet to be considered. Current methods estimate the effect of counting uncertainty but in many cases do not address larger sources of decision uncertainty, which may be significantly more complex. Thus, significantly improving algorithm performance may require greater coupling between the problem physics that drives data acquisition and the statistical methods that analyze such data. Untapped statistical methods, such as Bayesian model averaging and hierarchical and empirical Bayes methods, could reduce decision uncertainty by rigorously and comprehensively incorporating all sources of uncertainty. Application of such methods should further meet the needs of nuclear security missions by improving upon the existing numerical infrastructure for which these analyses have not been conducted. Copyright © 2012 Elsevier Ltd. All rights reserved.
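As an illustration of one of the "untapped" methods the abstract names, the sketch below applies Bayesian model averaging to a toy counting problem. All rates and counts are invented for illustration: two Poisson models (background only vs. background plus source) are weighted by their posterior model probabilities, and the prediction is averaged over both models rather than committing to one.

```python
import math

# Toy Bayesian model averaging for a counting measurement (not any
# specific instrument): gross counts are modeled either as background
# only (M0: Poisson, rate b) or background plus a source (M1: Poisson,
# rate b + s). Rates are assumed known for simplicity.
def poisson_logpmf(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def bma_source_probability(counts, b=10.0, s=8.0, prior_m1=0.5):
    ll0 = sum(poisson_logpmf(k, b) for k in counts)       # background only
    ll1 = sum(poisson_logpmf(k, b + s) for k in counts)   # background + source
    # posterior model probabilities (max-shift for numerical stability)
    m = max(ll0 + math.log(1 - prior_m1), ll1 + math.log(prior_m1))
    w0 = math.exp(ll0 + math.log(1 - prior_m1) - m)
    w1 = math.exp(ll1 + math.log(prior_m1) - m)
    p1 = w1 / (w0 + w1)
    # model-averaged predicted mean count rate
    avg_rate = (1 - p1) * b + p1 * (b + s)
    return p1, avg_rate

p_bkg_only, rate_lo = bma_source_probability([9, 11, 10, 8])     # background-like
p_with_src, rate_hi = bma_source_probability([19, 17, 20, 18])   # source-like
print(f"P(source | low counts)  = {p_bkg_only:.4f}")
print(f"P(source | high counts) = {p_with_src:.4f}")
```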

  6. Adjoint Weighting Methods Applied to Monte Carlo Simulations of Applications and Experiments in Nuclear Criticality

    Energy Technology Data Exchange (ETDEWEB)

    Kiedrowski, Brian C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-11

    The goals of this project are to develop Monte Carlo radiation transport methods and simulation software for engineering analysis that are robust, efficient and easy to use; and provide computational resources to assess and improve the predictive capability of radiation transport methods and nuclear data.

  7. T2-01 A Method for Prioritizing Chemical Hazards in Food applied to Antibiotics

    NARCIS (Netherlands)

    Asselt, van E.D.; Spiegel, van der M.; Noordam, M.Y.; Pikkemaat, M.G.; Fels, van der H.J.

    2014-01-01

    Introduction: Part of risk based control is the prioritization of hazard-food combinations for monitoring food safety. There are currently many methods for ranking microbial hazards ranging from quantitative to qualitative methods, but there is hardly any information available for prioritizing

  8. A New Machine Classification Method Applied to Human Peripheral Blood Leukocytes.

    Science.gov (United States)

    Rorvig, Mark E.; And Others

    1993-01-01

    Discusses pattern classification of images by computer and describes the Two Domain Method in which expert knowledge is acquired using multidimensional scaling of judgments of dissimilarities and linear mapping. An application of the Two Domain Method that tested its power to discriminate two patterns of human blood leukocyte distribution is…

  9. Studying the properties of Variational Data Assimilation Methods by Applying a Set of Test-Examples

    DEFF Research Database (Denmark)

    Thomsen, Per Grove; Zlatev, Zahari

    2007-01-01

    data assimilation methods are used. The main idea, on which the variational data assimilation methods are based, is pretty general. A functional is formed by using a weighted inner product of differences of model results and measurements. The value of this functional is to be minimized. Forward...
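The functional described above can be made concrete in a scalar toy example. The sketch below is illustrative only (hypothetical background value, observation, and error variances): it minimizes a weighted sum of squared model-minus-background and model-minus-observation differences by gradient descent and compares the result with the analytic minimizer.

```python
# Minimal variational sketch: minimize
#   J(x) = (x - xb)^2 / B + (x - y)^2 / R,
# a weighted inner product of the background and observation misfits.
def cost(x, xb, y, B, R):
    return (x - xb) ** 2 / B + (x - y) ** 2 / R

def grad(x, xb, y, B, R):
    return 2 * (x - xb) / B + 2 * (x - y) / R

def assimilate(xb, y, B, R, step=0.1, iters=200):
    x = xb                       # start from the background state
    for _ in range(iters):
        x -= step * grad(x, xb, y, B, R)
    return x

xb, y, B, R = 15.0, 20.0, 4.0, 1.0   # background, observation, error variances
x_opt = assimilate(xb, y, B, R)
x_exact = (xb / B + y / R) / (1 / B + 1 / R)  # analytic minimizer
print(f"gradient descent: {x_opt:.6f}, analytic: {x_exact:.6f}")
```

Because the observation is assumed four times more accurate than the background (R < B), the analysis lands much closer to the observation, which is exactly the weighting behaviour the functional encodes.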

  10. The quasi-exactly solvable potentials method applied to the three-body problem

    International Nuclear Information System (INIS)

    Chafa, F.; Chouchaoui, A.; Hachemane, M.; Ighezou, F.Z.

    2007-01-01

    The quasi-exactly solvable potentials method is used to determine the energies and the corresponding exact eigenfunctions for three families of potentials playing an important role in the description of interactions occurring between three particles of equal mass. The results obtained may also be used as a test in evaluating the performance of numerical methods

  11. A linear perturbation computation method applied to hydrodynamic instability growth predictions in ICF targets

    International Nuclear Information System (INIS)

    Clarisse, J.M.; Boudesocque-Dubois, C.; Leidinger, J.P.; Willien, J.L.

    2006-01-01

    A linear perturbation computation method is used to compute hydrodynamic instability growth in model implosions of inertial confinement fusion direct-drive and indirect-drive designed targets. Accurate descriptions of linear perturbation evolutions for Legendre mode numbers up to several hundreds have thus been obtained in a systematic way, motivating further improvements of the physical modeling currently handled by the method. (authors)

  12. China in the Arctic: interests, actions and challenges

    Directory of Open Access Journals (Sweden)

    Njord Wegge

    2014-07-01

    Full Text Available This article gives an overview of China's interest in and approach to the Arctic region. The following questions are raised: 1. Why is China getting involved in the Arctic? 2. How is China's engagement in the Arctic playing out? 3. What are the most important issues that need to be solved in order for China to increase its relevance and importance as a political actor and partner in the Arctic? Applying a rationalist approach to these research questions, I identify how China has increasingly been accepted in the last few years as a legitimate stakeholder in the Arctic, with important stakes and activities in areas such as shipping, resource utilization and environmental science. The article concludes by pointing out some issues that remain to be solved, including China's role in issues of global politics and the role of observers in the Arctic Council, as well as how China itself needs to decide important aspects of its future role in the region.

  13. Heterogeneity among violence-exposed women: applying person-oriented research methods.

    Science.gov (United States)

    Nurius, Paula S; Macy, Rebecca J

    2008-03-01

    Variability of experience and outcomes among violence-exposed people poses considerable challenges for developing effective prevention and treatment protocols. To address these needs, the authors present an approach to research and a class of methodologies referred to as person-oriented. Person-oriented tools support the assessment of meaningful patterns among people that distinguish one group from another, identifying subgroups for whom different interventions are indicated. The authors review the conceptual base of person-oriented methods, outline their distinction from the more familiar variable-oriented methods, present descriptions of selected methods as well as empirical applications of person-oriented methods germane to violence exposure, and conclude with a discussion of implications for future research and translation between research and practice. The authors focus on violence against women as a population, drawing on stress and coping theory as a theoretical framework. However, person-oriented methods hold utility for investigating diversity among violence-exposed people's experiences and needs across populations and theoretical foundations.

  14. Applying cognitive developmental psychology to middle school physics learning: The rule assessment method

    Science.gov (United States)

    Hallinen, Nicole R.; Chi, Min; Chin, Doris B.; Prempeh, Joe; Blair, Kristen P.; Schwartz, Daniel L.

    2013-01-01

    Cognitive developmental psychology often describes children's growing qualitative understanding of the physical world. Physics educators may be able to use the relevant methods to advantage for characterizing changes in students' qualitative reasoning. Siegler developed the "rule assessment" method for characterizing levels of qualitative understanding for two factor situations (e.g., volume and mass for density). The method assigns children to rule levels that correspond to the degree they notice and coordinate the two factors. Here, we provide a brief tutorial plus a demonstration of how we have used this method to evaluate instructional outcomes with middle-school students who learned about torque, projectile motion, and collisions using different instructional methods with simulations.

  15. A multiparametric method of interpolation using WOA05 applied to anthropogenic CO2 in the Atlantic

    Directory of Open Access Journals (Sweden)

    Anton Velo

    2010-11-01

    Full Text Available This paper describes the development of a multiparametric interpolation method and its application to anthropogenic carbon (CANT in the Atlantic, calculated by two estimation methods using the CARINA database. The multiparametric interpolation proposed uses potential temperature (θ), salinity, conservative 'NO' and 'PO' as conservative parameters for the gridding, and the World Ocean Atlas (WOA05 as a reference for the grid structure and the indicated parameters. We thus complement CARINA data with the WOA05 database in an attempt to obtain better gridded values by preserving the physical-biogeochemical sea structures. The algorithms developed here also have the prerequisite of being simple and easy to implement. To test the improvements achieved, a comparison between the proposed multiparametric method and a pure spatial interpolation for an independent parameter (O2 was made. As an application case study, CANT estimations by two methods (φCTº and TrOCA were performed on the CARINA database and then gridded by both interpolation methods (spatial and multiparametric. Finally, a calculation of CANT inventories for the whole Atlantic Ocean was performed with the gridded values, using ETOPO2v2 as the sea bottom. The inventories were between 55.1 and 55.2 Pg-C with the φCTº method and between 57.6 and 57.9 Pg-C with the TrOCA method.
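The core idea of gridding in parameter space rather than geographic space can be sketched as follows. This is a simplified illustration with invented numbers, not the authors' algorithm: reference samples are weighted by inverse distance in a normalized (θ, S, 'NO', 'PO') space, so an interpolated value follows water-mass properties rather than horizontal position.

```python
# Sketch of multiparametric interpolation (hypothetical numbers): weight
# reference samples by closeness in a normalized (theta, S, 'NO', 'PO')
# parameter space instead of latitude/longitude.
refs = [
    # (theta [degC], salinity, 'NO', 'PO', anthropogenic CO2 [umol/kg])
    (18.0, 36.5, 440.0, 74.0, 55.0),
    ( 4.0, 34.9, 520.0, 82.0, 35.0),
    ( 2.0, 34.7, 545.0, 86.0, 12.0),
]
scales = (10.0, 1.0, 50.0, 8.0)  # rough variability scale of each parameter

def interpolate_cant(theta, sal, no, po, power=2):
    target = (theta, sal, no, po)
    wsum = vsum = 0.0
    for *params, cant in refs:
        d2 = sum(((p - t) / s) ** 2 for p, t, s in zip(params, target, scales))
        if d2 == 0:
            return cant          # exact parameter match
        w = 1.0 / d2 ** (power / 2)
        wsum += w
        vsum += w * cant
    return vsum / wsum

# a water parcel with properties close to the second reference sample
print(round(interpolate_cant(4.5, 34.9, 522.0, 82.5), 1))
```

The answer is dominated by the reference sample whose water-mass properties match, even though no geographic coordinates enter the calculation at all.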

  16. THE COST MANAGEMENT BY APPLYING THE STANDARD COSTING METHOD IN THE FURNITURE INDUSTRY-Case study

    Directory of Open Access Journals (Sweden)

    Radu Mărginean

    2013-06-01

    Full Text Available Among the modern calculation methods used in managerial accounting, with wide applicability in industrial production, is the standard costing method. This managerial approach to cost calculation has real value in managerial accounting due to its usefulness in forecasting production costs, helping managers in the decision-making process. As the research objective of this paper, we study the possibility of implementing this modern cost calculation method in a company from the Romanian furniture industry, using real financial data. To achieve this aim, we drew on specialized literature in the field of managerial accounting, showing the strengths and weaknesses of the method. The case study demonstrates that the standard costing method is fully applicable in our case and, in conclusion, that it has real value in the cost management process for enterprises in the Romanian furniture industry.

  17. Applying the Taguchi Method to River Water Pollution Remediation Strategy Optimization

    Directory of Open Access Journals (Sweden)

    Tsung-Ming Yang

    2014-04-01

    Full Text Available Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.

  18. Applying the Taguchi method to river water pollution remediation strategy optimization.

    Science.gov (United States)

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-04-15

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.
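The Taguchi screening step described in both records above can be sketched on a toy problem. Everything below is hypothetical (an L4(2^3) orthogonal array and made-up responses, not the study's river data): a larger-is-better signal-to-noise ratio is computed per run, and the decision variables are ranked by the size of their level effects, which is how the method sequences the influence of variables on the system.

```python
import math

# L4(2^3) orthogonal array: levels of factors A, B, C per run (0 or 1)
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
# replicate responses per run (e.g. simulated water-quality improvement, %);
# numbers are invented for illustration
responses = [(42.0, 44.0), (55.0, 53.0), (47.0, 45.0), (60.0, 62.0)]

def sn_larger_is_better(ys):
    """Taguchi larger-is-better signal-to-noise ratio, in dB."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

sn = [sn_larger_is_better(ys) for ys in responses]

# level effect of each factor: mean S/N at level 1 minus mean at level 0
effects = {}
for f, name in enumerate("ABC"):
    lo = [sn[r] for r in range(4) if L4[r][f] == 0]
    hi = [sn[r] for r in range(4) if L4[r][f] == 1]
    effects[name] = sum(hi) / 2 - sum(lo) / 2

ranking = sorted(effects, key=lambda k: abs(effects[k]), reverse=True)
print("factor effects (dB):", {k: round(v, 2) for k, v in effects.items()})
print("influence ranking:", ranking)
```

Ranking factors this way, instead of evaluating the objective function over the whole solution space, is the time-saving the abstracts describe.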

  19. APPLIED BEHAVIOUR ANALYSIS METHOD INCREASES SOCIAL INTERACTION IN CHILDREN WITH AUTISM, 2-5 YEARS OLD

    Directory of Open Access Journals (Sweden)

    Khoridatul Bahiyah

    2017-07-01

    Full Text Available Introduction: Autism is a social interaction disorder in children; affected children seem to live in a world of their own. The ABA method is a technique for reducing behavioural and social interaction disorders in autistic children. The aim of this research was to evaluate the correlation between implementation of the ABA method, together with the parents' role, and the development of social interaction in children with autism. Method: This research used a cross-sectional design with purposive sampling. There were 22 respondents who met the inclusion criteria. The independent variable was the ABA method and the dependent variable was social interaction development. Data were collected using a questionnaire and observation, then analyzed using the Spearman rho correlation with significance level α ≤ 0.05. Result: The result showed that there was a correlation between the ABA method and social interaction development in autistic children, with p < 0.30. Discussion: It can be concluded that the ABA method is correlated with social interaction in autistic children. It is recommended that the ABA method be used as a technique to reduce social interaction disorder in autistic children.

  20. A local expansion method applied to fast plasma boundary reconstruction for EAST

    Science.gov (United States)

    Guo, Yong; Xiao, Bingjia; Luo, Zhengping

    2011-10-01

    A fast plasma boundary reconstruction technique based on a local expansion method is designed for EAST. It represents the poloidal flux distribution in the vacuum region by a limited number of expansions. The plasma boundary reconstructed by the local expansion method is consistent with EFIT/RT-EFIT results for an arbitrary plasma configuration. On a Linux server with Intel (R) Xeon (TM) CPU 3.2 GHz, the method completes one plasma boundary reconstruction in about 150 µs. This technique is sufficiently reliable and fast for real-time shape control.

  1. Japan’s arctic policy

    Directory of Open Access Journals (Sweden)

    Dmitry V. Streltsov

    2017-01-01

    Full Text Available Abstract: The article is devoted to the public policy of modern Japan in the Arctic. The Japanese government has put forward clear and well-specified targets of the intensification of Japan’s efforts in the economic development of the Arctic region. Among the priorities of the Arctic policy one should mention such areas as the development of maritime transportation, development of hydrocarbon deposits of the Arctic shelf, sea fishing, as well as the preservation and increase of the sea bioresources.

  2. Review on characterization methods applied to HTR-fuel element components

    International Nuclear Information System (INIS)

    Koizlik, K.

    1976-02-01

    One of those difficulties which, though of no special scientific interest in themselves, pose many technical problems for the development and production of HTR fuel elements is the proper characterization of the element and its components. Consequently, much work has been done during the past years to develop characterization procedures for the fuel, the fuel kernel, the pyrocarbon for the coatings, the matrix and graphite, and their components binder and filler. This paper gives a status report on the characterization procedures applied to HTR fuel at KFA and cooperating institutions. (orig.) [de

  3. A pulse stacking method of particle counting applied to position sensitive detection

    International Nuclear Information System (INIS)

    Basilier, E.

    1976-03-01

    A position sensitive particle counting system is described. A cyclic readout imaging device serves as an intermediate information buffer. Pulses are allowed to stack in the imager at very high counting rates. Imager noise is completely discriminated to provide very wide dynamic range. The system has been applied to a detector using cascaded microchannel plates. Pulse height spread produced by the plates causes some loss of information. The loss is comparable to the input loss of the plates. The improvement in maximum counting rate is several hundred times over previous systems that do not permit pulse stacking. (Auth.)

  4. Textbook finite element methods applied to linear wave propagation problems involving conversion and absorption

    International Nuclear Information System (INIS)

    Appert, K.; Vaclavik, J.; Villard, L.; Hellsten, T.

    1986-01-01

    A system of two second-order ordinary differential equations describing wave propagation in a hot plasma is solved numerically by the finite element method, using standard linear and cubic elements. Evanescent short-wavelength modes do not constitute a problem because of the variational nature of the method, and it is straightforward to generalize the method to systems with more than two equations. The performance of the method is demonstrated on known physical situations and is measured by investigating the convergence properties; cubic elements perform much better than linear ones. In an application it is shown that global plasma oscillations may be important for linear wave conversion in the ion-cyclotron range of frequencies. (orig.)
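A minimal "textbook finite element" example in the same spirit, though on a much simpler model problem than the hot-plasma wave system: linear (hat) elements for -u'' = 1 with homogeneous Dirichlet conditions, assembled into a tridiagonal system and solved with the Thomas algorithm. For this 1D problem the linear-element solution is known to be exact at the nodes, which the sketch verifies.

```python
# Linear finite elements for -u'' = 1 on (0, 1), u(0) = u(1) = 0.
# Exact solution: u(x) = x(1 - x)/2.
def solve_fem(n_elem):
    h = 1.0 / n_elem
    n = n_elem - 1                      # number of interior nodes
    # element stiffness [[1,-1],[-1,1]]/h and element load [h/2, h/2]
    # assemble into a symmetric tridiagonal system
    a = [-1.0 / h] * n                  # sub-/super-diagonal
    b = [2.0 / h] * n                   # main diagonal
    d = [h] * n                         # load vector (h/2 from each side)
    for i in range(1, n):               # Thomas algorithm: forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * a[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        u[i] = (d[i] - a[i] * u[i + 1]) / b[i]
    return u

n_elem = 8
u = solve_fem(n_elem)
mid = u[n_elem // 2 - 1]                # node at x = 0.5
print(f"u(0.5): FEM = {mid:.6f}, exact = {0.5 * 0.5 / 2:.6f}")
```

Convergence studies like the one the abstract describes would repeat this for increasing n_elem and compare error norms between element types.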

  5. Applying Formal Methods to an Information Security Device: An Experience Report

    National Research Council Canada - National Science Library

    Kirby, Jr, James; Archer, Myla; Heitmeyer, Constance

    1999-01-01

    .... This paper describes a case study in which the SCR method was used to specify and analyze a different class of system, a cryptographic system called CD, which must satisfy a large set of security properties...

  6. Review on applied foods and analyzed methods in identification testing of irradiated foods

    International Nuclear Information System (INIS)

    Kim, Kwang Hoon; Lee, Hoo Chul; Park, Sung Hyun; Kim, Soo Jin; Kim, Kwan Soo; Jeong, Il Yun; Lee, Ju Woon; Yook, Hong Sun

    2010-01-01

    Identification methods for irradiated foods have been adopted as official tests by the EU and Codex. The PSL, TL, ESR and GC/MS methods were registered in the Korean food code in 2009 and put into force as a control system for verifying the labelling of irradiated food. However, the most generally applicable PSL and TL methods specify applicable foods according to domestically approved items. Unlike these specifications, foods not permitted in Korea are included among the applicable items of the ESR and GC/MS methods. According to recent research data, numerous food groups can be effectively and legally controlled through identification, and additional regulatory approval of irradiation for these items is called for. In particular, the prohibition of irradiation for meats and seafoods is not harmonized with international standards and leads to trade friction and industrial restrictions owing to unprepared domestic regulation. Hence, extending domestic legal permission for food irradiation could contribute to the development of related industries, reduce trade friction and enhance international competitiveness

  7. Numerical analysis of the immersed boundary method applied to the flow around a forced oscillating cylinder

    International Nuclear Information System (INIS)

    Pinto, L C; Silvestrini, J H; Schettini, E B C

    2011-01-01

    In the present paper, the Navier-Stokes and continuity equations for incompressible flow around an oscillating cylinder were numerically solved. Sixth-order compact difference schemes were used for the spatial derivatives, while time advancement was carried out through a second-order accurate Adams-Bashforth scheme. In order to represent the obstacle in the flow, the Immersed Boundary Method was adopted. In this method a force term representing the body is added to the Navier-Stokes equations. The simulations present results for the hydrodynamic coefficients and vortex wakes in agreement with previous experimental and numerical works, and the physical lock-in phenomenon was identified. Comparing different ways of imposing the IBM, no alterations in the vortex shedding mode were observed. The Immersed Boundary Method techniques used here can represent the surface of an oscillating cylinder in the flow.

  8. Development of characterization methods applied to radioactive wastes and waste packages

    International Nuclear Information System (INIS)

    Guy, C.; Bienvenu, Ph.; Comte, J.; Excoffier, E.; Dodi, A.; Gal, O.; Gmar, M.; Jeanneau, F.; Poumarede, B.; Tola, F.; Moulin, V.; Jallu, F.; Lyoussi, A.; Ma, J.L.; Oriol, L.; Passard, Ch.; Perot, B.; Pettier, J.L.; Raoux, A.C.; Thierry, R.

    2004-01-01

    This document is a compilation of R and D studies carried out in the framework of axis 3 of the December 1991 law on the conditioning and storage of high-level and long-lived radioactive wastes and waste packages, concerning the methods of characterization of these wastes. This R and D work has made it possible to implement and qualify new methods (characterization of long-lived radioelements, high-energy imaging...) and also to improve existing methods by lowering detection limits and reducing the uncertainties of measured data. This document is the result of the scientific production of several CEA laboratories that use complementary techniques: destructive methods and radiochemical analyses, photo-fission and active photonic interrogation, high-energy imaging systems, neutron interrogation, gamma spectroscopy, and active and passive imaging techniques. (J.S.)

  9. Neutron tomography as a reverse engineering method applied to the IS-60 Rover gas turbine

    CSIR Research Space (South Africa)

    Roos, TH

    2011-09-01

    Full Text Available Probably the most common method of reverse engineering in mechanical engineering involves measuring the physical geometry of a component using a coordinate measuring machine (CMM). Neutron tomography, in contrast, is used primarily as a non...

  10. Considerations on the question of applying ion exchange or reverse osmosis methods in boiler feedwater processing

    International Nuclear Information System (INIS)

    Marquardt, K.; Dengler, H.

    1976-01-01

    This consideration is to show that the method of reverse osmosis presents, in many cases, an interesting and economical alternative to partial and total desalination plants using ion exchangers. The essential advantages of reverse osmosis are a higher degree of automation, no additional salting of the discharged waste water, the small constructional volume of the plant, and favourable operating costs as the salt content of the raw water to be processed increases. Since the salt breakthrough is relatively high compared with the ion exchange method, the future tendency in boiler feedwater processing will be towards a combination of reverse osmosis with post-purification through continuous ion exchange methods. (orig./LH) [de

  11. Overcoming the Problems of Inconsistent International Migration data : A New Method Applied to Flows in Europe

    NARCIS (Netherlands)

    de Beer, Joop; Raymer, James; van der Erf, Rob; van Wissen, Leo

    2010-01-01

    Due to differences in definitions and measurement methods, cross-country comparisons of international migration patterns are difficult and confusing. Emigration numbers reported by sending countries tend to differ from the corresponding immigration numbers reported by receiving countries. In this

  12. Overcoming the problems of inconsistent international migration data: a new method applied to flows in Europe

    NARCIS (Netherlands)

    de Beer, J.A.A.; Raymer, J.; van der Erf, R.F.; van Wissen, L.J.G.

    2010-01-01

    Due to differences in definitions and measurement methods, crosscountry comparisons of international migration patterns are difficult and confusing. Emigration numbers reported by sending countries tend to differ from the corresponding immigration numbers reported by receiving countries. In this

  13. Applied Warfighter Ergonomics: A Research Method for Evaluating Military Individual Equipment

    National Research Council Canada - National Science Library

    Takagi, Koichi

    2005-01-01

    The objective of this research effort is to design and implement a laboratory and establish a research method focused on scientific evaluation of human factors considerations for military individual...

  14. Computational performance of Free Mesh Method applied to continuum mechanics problems

    Science.gov (United States)

    YAGAWA, Genki

    2011-01-01

    The free mesh method (FMM) is a kind of meshless method intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, or a node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm. The aim of the present paper is to review some unique numerical solutions in fluid and solid mechanics obtained by employing FMM as well as the Enriched Free Mesh Method (EFMM), which is a new version of FMM. Applications to fluid mechanics include compressible flow and the sounding mechanism in air-reed instruments; applications to solid mechanics include automatic remeshing for slow crack growth, the dynamic behavior of solids, and large-scale eigenfrequency analysis of an engine block. PMID:21558753

  15. A Belief Network Decision Support Method Applied to Aerospace Surveillance and Battle Management Projects

    National Research Council Canada - National Science Library

    Staker, R

    2003-01-01

    This report demonstrates the application of a Bayesian Belief Network decision support method for Force Level Systems Engineering to a collection of projects related to Aerospace Surveillance and Battle Management...
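A Bayesian belief network reduces, in its simplest form, to conditional probability tables combined through Bayes' rule. The sketch below is a hypothetical three-node toy (invented probabilities, not the report's model): a latent "capability gap" node influences two observable project indicators, and the posterior is computed by enumeration.

```python
# Toy belief network: gap -> delay, gap -> cost overrun.
# All probabilities are illustrative.
P_gap = 0.2                                # prior P(capability gap)
P_delay_given = {True: 0.7, False: 0.1}    # P(schedule delay | gap)
P_cost_given = {True: 0.6, False: 0.2}     # P(cost overrun | gap)

def posterior_gap(delay, cost):
    """P(gap | evidence) by enumerating both states of the gap node."""
    def joint(gap):
        p = P_gap if gap else 1 - P_gap
        p *= P_delay_given[gap] if delay else 1 - P_delay_given[gap]
        p *= P_cost_given[gap] if cost else 1 - P_cost_given[gap]
        return p
    num = joint(True)
    return num / (num + joint(False))

print(f"P(gap | delay, overrun)       = {posterior_gap(True, True):.3f}")
print(f"P(gap | no delay, no overrun) = {posterior_gap(False, False):.3f}")
```

Larger networks used for force-level decision support work the same way in principle, with inference algorithms replacing brute-force enumeration.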

  16. Galvanokinetic polarization method applied to the pitting corrosion study of stainless steels

    International Nuclear Information System (INIS)

    Le Xuan, Q.; Vu Quang, K.

    1992-01-01

    The galvanokinetic (GK) polarisation method was used to study the pitting corrosion of 316L stainless steel in chloride solution. The effect of the current scan rate on the pitting characteristic parameters was pointed out. Specific relations between the current scan rate and several pitting characteristic parameters, such as the critical current density I_c, stable current density I_s, critical time t_c, and stable time t_s, were established. Some advantages of the GK polarisation method are discussed.

  17. A novel method for applying reduced graphene oxide directly to electronic textiles from yarns to fabrics.

    Science.gov (United States)

    Yun, Yong Ju; Hong, Won G; Kim, Wan-Joong; Jun, Yongseok; Kim, Byung Hoon

    2013-10-25

    Conductive, flexible, and durable reduced graphene oxide (RGO) textiles made with a facile preparation method are presented. BSA proteins serve as universal adhesives that improve the adsorption of GO onto any textile, irrespective of the material and surface condition. Using this method, we successfully prepared various RGO textiles based on nylon-6 yarns, cotton yarns, polyester yarns, and nonwoven fabrics. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Sustainable Assessment of Aerosol Pollution Decrease Applying Multiple Attribute Decision-Making Methods

    Directory of Open Access Journals (Sweden)

    Audrius Čereška

    2016-06-01

    Full Text Available Air pollution with various materials, particularly with aerosols, increases with advances in technological development. This is a complicated global problem. One of the priorities in achieving sustainable development is the reduction of harmful technological effects on the environment and human health, and it is the responsibility of researchers to search for effective methods of reducing pollution. Reliable results can be obtained by combining the approaches used in various fields of science and technology. This paper aims to demonstrate the effectiveness of multiple attribute decision-making (MADM) methods in investigating and solving environmental pollution problems. The paper presents a study of the evaporation of a toxic liquid based on the MADM methods. A schematic view of the test setup is presented. The density, viscosity, and rate of the released vapor flow are measured, and the dependence of the solution concentration on its temperature is determined in the experimental study. The concentration of the hydrochloric acid solution (HAS) varies in the range from 28% to 34%, while the liquid is heated from 50 to 80 °C. The variations in the parameters are analyzed using the well-known VIKOR and COPRAS MADM methods. For determining the criteria weights, a new CILOS (Criterion Impact LOSs) method is used. The experimental results are arranged in priority order using the MADM methods. Based on the obtained data, the technological parameters of production that ensure minimum environmental pollution can be chosen.

  19. Assessment of Pansharpening Methods Applied to WorldView-2 Imagery Fusion

    Directory of Open Access Journals (Sweden)

    Hui Li

    2017-01-01

    Full Text Available Since WorldView-2 (WV-2) images are widely used in various fields, there is a high demand for high-quality pansharpened WV-2 images for different application purposes. Given the novelty of the WV-2 multispectral (MS) and panchromatic (PAN) bands, the performance of eight state-of-the-art pansharpening methods was assessed in this study on six datasets from three WV-2 scenes, using both quality indices and information indices, along with visual inspection. The normalized difference vegetation index, the normalized difference water index, and the morphological building index, which are widely used in applications related to land cover classification and the extraction of vegetation areas, buildings, and water bodies, were employed in this work to evaluate the performance of the different pansharpening methods in terms of information presentation ability. The experimental results show that the Haze- and Ratio-based method, the adaptive Gram-Schmidt method, and the Generalized Laplacian Pyramid (GLP) methods with the enhanced spectral-distortion-minimizing model and the enhanced context-based decision model are good choices for producing fused WV-2 images used for image interpretation and the extraction of urban buildings. The two GLP-based methods are better choices than the others if the fused images will be used for applications related to vegetation and water bodies.
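The vegetation and water indices used here as evaluation criteria have simple closed forms; a minimal sketch (standard NDVI and a McFeeters-style NDWI band ratio — the reflectance values are made up, and no mapping to specific WV-2 channel numbers is attempted):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters form): (G - NIR) / (G + NIR)."""
    green, nir = np.asarray(green, dtype=float), np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir)

# Healthy vegetation reflects strongly in NIR and weakly in red, pushing NDVI
# toward +1; open water absorbs NIR, making NDWI positive.
print(ndvi(0.5, 0.1))   # 0.666...
print(ndwi(0.3, 0.05))  # 0.714...
```

Comparing such index maps computed from the fused image against those from the original MS image is one way to quantify how well a pansharpening method preserves thematic information.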

  20. Three-Dimensional CST Parameterization Method Applied in Aircraft Aeroelastic Analysis

    Directory of Open Access Journals (Sweden)

    Hua Su

    2017-01-01

    Full Text Available The class/shape transformation (CST) method has the advantages of adjustable design variables and powerful parametric geometric shape design ability, and it has been widely used in aerodynamic design and optimization. Three-dimensional CST is an extension to complex aircraft and can generate diverse three-dimensional aircraft and the corresponding meshes automatically and quickly. This paper proposes a parametric structural modeling method based on gridding feature extraction from the aerodynamic mesh generated by the three-dimensional CST method. This novel method can create parametric structural models for the fuselage and wing and keeps the aerodynamic mesh and the structural mesh coordinated. Based on the generated aerodynamic and structural models, an automatic process for aeroelastic modeling and solving is presented, with the panel method as the aerodynamic solver and NASTRAN as the structural solver. A reusable launch vehicle (RLV) is used to illustrate the process. The results show that this method can generate aeroelastic models for diverse complex three-dimensional aircraft automatically, dramatically reducing the difficulty of aeroelastic analysis. It provides an effective way to make use of aeroelastic analysis at the conceptual design phase of modern aircraft.

  1. Applying the Network Simulation Method for testing chaos in a resistively and capacitively shunted Josephson junction model

    Directory of Open Access Journals (Sweden)

    Fernando Gimeno Bellver

    Full Text Available In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. This numerical approach establishes a formal equivalence between physical transport processes and electrical networks, and hence it can be applied to efficiently deal with a wide range of differential systems. The generality underlying that electrical equivalence makes it possible to apply circuit theory to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes, and the calculations have been carried out in PSpice, an electrical circuit simulation package. Overall, this numerical approach solves Josephson differential models quickly. An empirical application regarding the study of the Josephson model completes the paper. Keywords: Electrical analogy, Network Simulation Method, Josephson junction, Chaos indicator, Fast Fourier Transform
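As a rough illustration of FFT-based chaos detection outside PSpice, the sketch below scores a signal's power spectrum: a periodic signal concentrates power in sharp spectral lines, while a chaotic or broadband signal spreads it. The spectral-flatness score is our illustrative indicator, not the paper's:

```python
import numpy as np

def power_spectrum(x, dt):
    """One-sided power spectrum of a uniformly sampled signal."""
    spec = np.fft.rfft(x - np.mean(x))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs, np.abs(spec) ** 2

def spectral_flatness(power):
    """Geometric mean over arithmetic mean of the spectrum: close to 1 for
    broadband (chaos-like) signals, close to 0 for periodic ones."""
    p = power[power > 0]
    return np.exp(np.mean(np.log(p))) / np.mean(p)

dt = 1e-3
t = np.arange(0.0, 4.0, dt)
periodic = np.sin(2 * np.pi * 50.0 * t)
broadband = np.random.default_rng(0).normal(size=t.size)  # chaos stand-in

f1, p1 = power_spectrum(periodic, dt)
f2, p2 = power_spectrum(broadband, dt)
print(spectral_flatness(p1) < spectral_flatness(p2))  # True
```

In practice the junction's simulated voltage time series would replace the synthetic signals here; a continuous, broadband spectrum is then taken as a signature of chaotic dynamics.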

  2. A Method for Evaluation and Comparison of Parallel Robots for Safe Human Interaction, Applied to Robotic TMS

    NARCIS (Netherlands)

    de Jong, Jan Johannes; Stienen, Arno; van der Wijk, V.; Wessels, Martijn; van der Kooij, Herman

    2012-01-01

    Transcranial magnetic stimulation (TMS) is a noninvasive method to modify behaviour of neurons in the brain. TMS is applied by running large currents through a coil close to the scalp. For consistent results it is required to maintain the coil position within millimetres of the targeted location,

  3. Solution of Ge(111)-(4x4)-Ag structure using direct methods applied to X-ray diffraction data

    DEFF Research Database (Denmark)

    Collazo-Davila, C.; Grozea, D.; Marks, L.D.

    1998-01-01

    A structure model for the Ge(111)-(4 x 4)-Ag surface is proposed. The model was derived by applying direct methods to surface X-ray diffraction data. It is a missing top layer reconstruction with six Ag atoms placed on Ge substitutional sites in one triangular subunit of the surface unit cell. A ...

  4. Glycohemoglobin: comparison of 12 analytical methods applied to lyophilized hemolysates by 101 laboratories in an external quality assurance program

    NARCIS (Netherlands)

    WEYKAMP, CW; PENDERS, TJ; MUSKIET, FAJ; VANDERSLIK, W

    Stable lyophilized ethylenediaminetetra-acetic acid (EDTA)-blood haemolysates were applied in an external quality assurance programme (SKZL, The Netherlands) for glycohaemoglobin assays in 101 laboratories using 12 methods. The mean intralaboratory day-to-day coefficient of variation (CV),

  5. Disparities in Arctic Health

    Centers for Disease Control (CDC) Podcasts

    2008-02-04

    Life at the top of the globe is drastically different. Harsh climate devoid of sunlight part of the year, pockets of extreme poverty, and lack of physical infrastructure interfere with healthcare and public health services. Learn about the challenges of people in the Arctic and how research and the International Polar Year address them.  Created: 2/4/2008 by Emerging Infectious Diseases.   Date Released: 2/20/2008.

  6. Comparison of 15 evaporation methods applied to a small mountain lake in the northeastern USA

    Science.gov (United States)

    Rosenberry, D.O.; Winter, T.C.; Buso, D.C.; Likens, G.E.

    2007-01-01

    Few detailed evaporation studies exist for small lakes or reservoirs in mountainous settings. A detailed evaporation study was conducted at Mirror Lake, a 0.15 km^2 lake in New Hampshire, northeastern USA, as part of a long-term investigation of lake hydrology. Evaporation was determined using 14 alternate evaporation methods during six open-water seasons and compared with values from the Bowen-ratio energy-budget (BREB) method, considered the standard. Values from the Priestley-Taylor, deBruin-Keijman, and Penman methods compared most favorably with BREB-determined values. Differences from BREB values averaged 0.19, 0.27, and 0.20 mm d^-1, respectively, and results were within 20% of BREB values during more than 90% of the 37 monthly comparison periods. All three methods require measurement of net radiation, air temperature, change in heat stored in the lake, and vapor pressure, making them relatively data intensive. Several of the methods had substantial bias when compared with BREB values and were subsequently modified to eliminate bias. Methods that rely only on measurement of air temperature, or air temperature and solar radiation, were relatively cost-effective options for measuring evaporation at this small New England lake, outperforming some methods that require measurement of a greater number of variables. It is likely that the atmosphere above Mirror Lake was affected by occasional formation of separation eddies on the lee side of nearby high terrain, although those influences do not appear to be significant to measured evaporation from the lake when averaged over monthly periods. © 2007 Elsevier B.V. All rights reserved.
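Of the best-performing methods, the Priestley-Taylor form is compact enough to sketch; the hedged version below uses textbook constants (Tetens slope of the saturation vapour pressure curve, a sea-level psychrometric constant, and alpha = 1.26), not the study's site calibration:

```python
import math

def priestley_taylor(rn, heat_storage, t_air, alpha=1.26):
    """Priestley-Taylor open-water evaporation in mm/day.
    rn, heat_storage: net radiation and change in stored heat, W/m^2;
    t_air: air temperature, deg C."""
    # Slope of the saturation vapour pressure curve, kPa/degC (Tetens form)
    es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))
    s = 4098.0 * es / (t_air + 237.3) ** 2
    gamma = 0.066          # psychrometric constant near sea level, kPa/degC
    lam = 2.45e6           # latent heat of vaporization, J/kg
    flux = alpha * s / (s + gamma) * (rn - heat_storage) / lam  # kg/m^2/s
    return flux * 86400.0  # 1 kg/m^2 of water equals a depth of 1 mm

print(priestley_taylor(150.0, 20.0, 20.0))  # roughly 4 mm/day
```

The available-energy term (rn - heat_storage) is why the method still needs net radiation and lake heat-storage measurements, as noted above.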

  7. Unsupervised nonlinear dimensionality reduction machine learning methods applied to multiparametric MRI in cerebral ischemia: preliminary results

    Science.gov (United States)

    Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.

    2014-03-01

    The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI) methods. Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy, and they can generate a two- or three-dimensional map which represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in the high-dimensional observations. In this manuscript, we develop NLDR methods on high-dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from the NLDR methods and the ADC and perfusion maps. It was also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized, providing a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.

  8. Finite difference applied to the reconstruction method of the nuclear power density distribution

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2016-01-01

    Highlights: • A method for reconstruction of the power density distribution is presented. • The method uses discretization by finite differences of the 2D neutron diffusion equation. • The discretization is performed on homogeneous meshes with the dimensions of a fuel cell. • The discretization is combined with flux distributions on the four node surfaces. • The maximum errors in reconstruction occur in the peripheral water region. - Abstract: In this reconstruction method the two-dimensional (2D) neutron diffusion equation is discretized by finite differences, applied to two energy groups (2G) and meshes with fuel-pin-cell dimensions. The Nodal Expansion Method (NEM) makes use of surface discontinuity factors of the node and provides the reconstruction method with the effective multiplication factor of the problem and the four surface-averaged fluxes in homogeneous nodes the size of a fuel assembly (FA). The reconstruction process combines the 2D diffusion equation discretized by finite differences with flux distributions on the four surfaces of the nodes. These distributions are obtained for each surface from a fourth-order one-dimensional (1D) polynomial expansion with five coefficients to be determined. The conditions necessary for determining the coefficients are three average fluxes on consecutive surfaces of three nodes and two fluxes at the corners between these three surface fluxes. Corner fluxes of the node are determined using a third-order 1D polynomial expansion with four coefficients. This reconstruction method uses heterogeneous nuclear parameters directly, providing the heterogeneous neutron flux distribution and the detailed nuclear power density distribution within the FAs. The results obtained with this method have good accuracy and efficiency when compared with reference values.
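The finite-difference machinery underlying such methods can be illustrated with the classic five-point stencil; the toy Jacobi sweep below solves Laplace's equation on a square grid with fixed boundary values, standing in for (but not reproducing) the paper's two-group diffusion discretization with NEM-supplied boundary data:

```python
import numpy as np

def jacobi_laplace(n, iters=2000):
    """Five-point finite-difference Jacobi iteration for Laplace's equation on
    an n x n grid, with u = 1 on the top edge and u = 0 on the other edges."""
    u = np.zeros((n, n))
    u[0, :] = 1.0  # Dirichlet boundary condition on the top edge
    for _ in range(iters):
        # Each interior point becomes the average of its four neighbours
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    return u

u = jacobi_laplace(21)
print(u[10, 10])  # ~0.25 at the centre, by symmetry
```

In the reconstruction method above the same kind of mesh-by-mesh balance equation is written per energy group, with the node-surface flux distributions acting as the boundary data.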

  9. Boundary element method applied to a gas-fired pin-fin-enhanced heat pipe

    Energy Technology Data Exchange (ETDEWEB)

    Andraka, C.E.; Knorovsky, G.A.; Drewien, C.A.

    1998-02-01

    The thermal conduction of a portion of an enhanced surface heat exchanger for a gas fired heat pipe solar receiver was modeled using the boundary element and finite element methods (BEM and FEM) to determine the effect of weld fillet size on performance of a stud welded pin fin. A process that could be utilized by others for designing the surface mesh on an object of interest, performing a conversion from the mesh into the input format utilized by the BEM code, obtaining output on the surface of the object, and displaying visual results was developed. It was determined that the weld fillet on the pin fin significantly enhanced the heat performance, improving the operating margin of the heat exchanger. The performance of the BEM program on the pin fin was measured (as computational time) and used as a performance comparison with the FEM model. Given similar surface element densities, the BEM method took longer to get a solution than the FEM method. The FEM method creates a sparse matrix that scales in storage and computation as the number of nodes (N), whereas the BEM method scales as N^2 in storage and N^3 in computation.

  10. On the spectral nodal methods applied to discrete ordinates eigenvalue problems in Cartesian geometry

    International Nuclear Information System (INIS)

    Abreu, Marcos P. de; Alves Filho, Hermes; Barros, Ricardo C.

    2001-01-01

    We describe hybrid spectral nodal methods for discrete ordinates (SN) eigenvalue problems in Cartesian geometry. These coarse-mesh methods are based on three ingredients: the use of the standard discretized spatial balance SN equations; the use of the non-standard spectral diamond (SD) auxiliary equations in the multiplying regions of the domain, e.g. fuel assemblies; and the use of the non-standard spectral Green's function (SGF) auxiliary equations in the non-multiplying regions of the domain, e.g. the reflector. In slab geometry the hybrid SD-SGF method generates numerical results that are completely free of spatial truncation errors. In X,Y-geometry, we obtain a system of two 'slab-geometry' SN equations for the node-edge average angular fluxes by transverse-integrating the X,Y-geometry SN equations separately in the y- and then in the x-directions within an arbitrary node of the spatial grid set up on the domain. In this paper, we approximate the transverse leakage terms by constants. These are the only approximations considered in the SD-SGF-constant nodal method, as the source terms, which include scattering and eventually fission events, are treated exactly. We show numerical results for typical model problems to illustrate the accuracy of spectral nodal methods for coarse-mesh SN criticality calculations. (author)

  11. A combined approach of AHP and TOPSIS methods applied in the field of integrated software systems

    Science.gov (United States)

    Berdie, A. D.; Osaci, M.; Muscalagiu, I.; Barz, C.

    2017-05-01

    Adopting the most appropriate technology for developing applications on an integrated software system for enterprises may result in great savings in both cost and hours of work. This paper proposes a research study to determine a hierarchy among three SAP (System Applications and Products in Data Processing) technologies. The technologies Web Dynpro (WD), Floorplan Manager (FPM), and CRM WebClient UI (CRM WCUI) are evaluated against multiple criteria in terms of the performance obtained through the implementation of the same web business application. To establish the hierarchy, a multi-criteria analysis model that combines the AHP (Analytic Hierarchy Process) and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) methods was proposed. This model was built with the help of the SuperDecision software, which is based on the AHP method and determines the weights for the selected sets of criteria. The TOPSIS method was used to obtain the final ranking and the technology hierarchy.
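The TOPSIS ranking step is straightforward to sketch (vector normalization, weighted distances to the ideal and anti-ideal points, and the standard closeness coefficient); the decision matrix below is invented for illustration and is not the paper's criteria data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """TOPSIS closeness scores; rows = alternatives, columns = criteria.
    benefit[j] is True when larger is better for criterion j."""
    m = np.asarray(matrix, dtype=float)
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)   # distance to ideal point
    d_neg = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal point
    return d_neg / (d_pos + d_neg)              # higher = closer to the ideal

# Three hypothetical technologies scored on performance (benefit criterion)
# and development effort (cost criterion):
scores = topsis([[0.9, 10], [0.5, 20], [0.7, 15]],
                weights=[0.5, 0.5], benefit=[True, False])
print(scores.argmax())  # alternative 0, which dominates on both criteria
```

In the combined model described above, the weights passed to this step would come from the AHP pairwise-comparison stage rather than being set by hand.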

  12. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    DEFF Research Database (Denmark)

    Petersen, Bent; Petersen, Thomas Nordahl; Andersen, Pernille

    2009-01-01

    Estimation of the reliability of specific real value predictions is nontrivial and the efficacy of this is often questionable. It is important to know if you can trust a given prediction and therefore the best methods associate a prediction with a reliability score or index. For discrete qualitative predictions, the reliability is conventionally estimated as the difference between output scores of selected classes. Such an approach is not feasible for methods that predict a biological feature as a single real value rather than a classification. As a solution to this challenge, we have developed a method that predicts the relative exposure of the amino acids. The method assigns a reliability score to each surface accessibility prediction as an inherent part of the training process. This is in contrast to the most commonly used procedures, where reliabilities are obtained by post-processing the output.

  13. The increase in the starting torque of PMSM motor by applying of FOC method

    Science.gov (United States)

    Plachta, Kamil

    2017-05-01

    The article presents a field-oriented control method for a permanent magnet synchronous motor equipped with optical sensors. This method allows regulation of the torque and rotational speed of the electric motor over a wide range. The paper presents a mathematical model of the electric motor and the vector control method. Optical sensors have a shorter response time than inductive sensors, which allows the electronic control system to respond faster to changes in motor load. The motor driver is based on a digital signal processor which performs advanced mathematical operations in real time. The application of the Clarke and Park transformations in the software determines the angle of the rotor position. The presented solution provides smooth adjustment of the rotational speed in the first operating zone and reduces the dead zone of the torque in the second and third operating zones.
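The Clarke and Park transformations at the heart of field-oriented control have standard closed forms; a minimal amplitude-invariant sketch (in the drive described here, the angle theta would come from the optical rotor-position sensors):

```python
import math

def clarke(ia, ib, ic):
    """Amplitude-invariant Clarke transform: phase currents -> (alpha, beta)."""
    alpha = (2.0 * ia - ib - ic) / 3.0
    beta = (ib - ic) / math.sqrt(3.0)
    return alpha, beta

def park(alpha, beta, theta):
    """Park transform: stationary (alpha, beta) -> rotor frame (d, q)."""
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q

# A balanced three-phase current set collapses to a constant (d, q) point,
# which is what lets FOC regulate torque with simple DC-like controllers.
theta = 0.7
ia = math.cos(theta)
ib = math.cos(theta - 2.0 * math.pi / 3.0)
ic = math.cos(theta + 2.0 * math.pi / 3.0)
d, q = park(*clarke(ia, ib, ic), theta)
print(d, q)  # d ~ 1, q ~ 0
```

Running these two transforms each control cycle, with theta from the position sensor, is the step the abstract refers to.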

  14. Whole-Genome Regression and Prediction Methods Applied to Plant and Animal Breeding

    Science.gov (United States)

    de los Campos, Gustavo; Hickey, John M.; Pong-Wong, Ricardo; Daetwyler, Hans D.; Calus, Mario P. L.

    2013-01-01

    Genomic-enabled prediction is becoming increasingly important in animal and plant breeding and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of markers concurrently. Methods exist that allow implementing these large-p with small-n regressions, and genome-enabled selection (GS) is being implemented in several plant and animal breeding programs. The list of available methods is long, and the relationships between them have not been fully addressed. In this article we provide an overview of available methods for implementing parametric WGR models, discuss selected topics that emerge in applications, and present a general discussion of lessons learned from simulation and empirical data analysis in the last decade. PMID:22745228
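One of the simplest parametric WGR models, ridge-regression BLUP, can be sketched directly; the dual formulation below exploits the large-p with small-n setting by solving an n x n system instead of a p x p one (toy simulated genotypes and an arbitrary shrinkage parameter, not real breeding data):

```python
import numpy as np

def ridge_blup(X, y, lam):
    """Ridge-regression marker effects for p >> n via the n x n dual system:
    beta = X' (X X' + lam I)^{-1} (y - mean(y))."""
    n = X.shape[0]
    K = X @ X.T                                          # n x n, cheap for p >> n
    alpha = np.linalg.solve(K + lam * np.eye(n), y - y.mean())
    return y.mean(), X.T @ alpha                         # intercept, p marker effects

rng = np.random.default_rng(1)
n, p = 60, 500                                 # few individuals, many markers
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))   # toy genotype codes
effects = np.zeros(p)
effects[:5] = 1.0                              # five causal markers
y = X @ effects + rng.normal(scale=0.5, size=n)

mu, beta = ridge_blup(X, y, lam=10.0)
fitted = mu + X @ beta
```

The dual solve costs O(n^3) rather than O(p^3), which is what makes regressing phenotypes on thousands of markers concurrently tractable; Bayesian WGR variants differ mainly in how the shrinkage on beta is specified.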

  15. Two Thermoeconomic Diagnosis Methods Applied to Representative Operating Data of a Commercial Transcritical Refrigeration Plant

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Sigthorsson, Oskar; Elmegaard, Brian

    2017-01-01

    Two different uncertainty scenarios were evaluated, as the use of repeated measurements yields a lower magnitude of uncertainty. The two methods show similar performance in the presented study for both of the considered measurement uncertainty scenarios. However, only in the low measurement uncertainty scenario are both methods applicable to locate the causes of the malfunctions. For both scenarios an outlier limit was found, which determines whether it is possible to reject a high relative indicator based on measurement uncertainty, depending on the magnitude of the uncertainties. For high uncertainties, the threshold value of the relative indicator was 35, whereas for low uncertainties one of the methods resulted in a threshold at 8. Additionally, the contribution of different measuring instruments to the relative indicator in two central components was analysed. It shows that the contribution was component

  16. Applying Process Improvement Methods to Clinical and Translational Research: Conceptual Framework and Case Examples.

    Science.gov (United States)

    Daudelin, Denise H; Selker, Harry P; Leslie, Laurel K

    2015-12-01

    There is growing appreciation that process improvement holds promise for improving quality and efficiency across the translational research continuum but frameworks for such programs are not often described. The purpose of this paper is to present a framework and case examples of a Research Process Improvement Program implemented at Tufts CTSI. To promote research process improvement, we developed online training seminars, workshops, and in-person consultation models to describe core process improvement principles and methods, demonstrate the use of improvement tools, and illustrate the application of these methods in case examples. We implemented these methods, as well as relational coordination theory, with junior researchers, pilot funding awardees, our CTRC, and CTSI resource and service providers. The program focuses on capacity building to address common process problems and quality gaps that threaten the efficient, timely and successful completion of clinical and translational studies. © 2015 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc.

  17. Boundary Element Method Applied to Added Mass Coefficient Calculation of the Skewed Marine Propellers

    Directory of Open Access Journals (Sweden)

    Yari Ehsan

    2016-04-01

    Full Text Available The paper mainly aims to study the computation of added mass coefficients for marine propellers. A three-dimensional boundary element method (BEM) is developed to predict the propeller added mass and moment of inertia coefficients. Only a few experimental data sets are available as validation references; here the method is validated against experimental measurements of a B-series marine propeller. The behavior of the predicted added mass coefficients under variation of the geometric and flow parameters of the propeller is calculated and analyzed. BEM is more accurate in obtaining added mass coefficients than other fast numerical methods. All added mass coefficients are nondimensionalized by fluid density, propeller diameter, and rotational velocity. The obtained results reveal that the diameter, expanded area ratio, and thickness have a dominant influence on the increase of the added mass coefficients.

  18. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    Directory of Open Access Journals (Sweden)

    Darren Kidney

    Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will
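At the heart of such acoustic SECR models is a detection function linking detection probability to the distance between a listening post and a group's activity centre; a sketch of the commonly used half-normal form (the parameter values are invented for illustration, not fitted to the gibbon data):

```python
import math

def halfnormal_detection(d, g0=0.8, sigma=500.0):
    """Half-normal detection function g(d) = g0 * exp(-d^2 / (2 sigma^2)):
    probability that a vocalising group centred d metres from a listening
    post is detected on one survey occasion. g0 is detection probability at
    distance zero; sigma controls how quickly detectability falls off."""
    return g0 * math.exp(-(d * d) / (2.0 * sigma * sigma))

# Detection probability decays smoothly with distance from the post.
for d in (0.0, 500.0, 1500.0):
    print(d, halfnormal_detection(d))
```

Fitting the SECR likelihood amounts to estimating g0, sigma, and density jointly from the capture histories across the detector (observer) array; the multi-occasion extension described above adds an availability parameter on top.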

  19. Analysis of Steel Wire Rope Diagnostic Data Applying Multi-Criteria Methods

    Directory of Open Access Journals (Sweden)

    Audrius Čereška

    2018-02-01

    Full Text Available Steel ropes are complex flexible structures used in many technical applications, such as elevators, cable cars, and funicular cabs. Due to their specific design and critical safety requirements, diagnostics of ropes remains an important issue. The number of broken wires in steel ropes is limited by safety standards when they are used in human lifting and carrying installations. Loose wires raise two practical issues: first, they signal the end of the lifetime of the entire rope, independently of wear, lubrication, or wrong winding on the drums or through pulleys; second, they can stick in tight pulley-support gaps and cause deterioration of the rope structure, up to birdcage formations. Normal rope operation should not generate broken wires, so an increase in their number shows a need for rope installation maintenance. This paper presents a methodology for steel rope diagnostics and the results of an analysis using multi-criteria methods. The experimental part of the research was performed using an original test bench to detect broken wires on the rope surface from its vibrations. Diagnostics was performed in the range of frequencies from 60 to 560 Hz with a pitch of 50 Hz. The significant outcome was that the measured vibration amplitudes at broken wires differed from the vibration parameters of the intact rope surface. Later analysis of the obtained experimental results revealed the most significant values of the diagnostic parameters. The power of the diagnostics was evaluated using multi-criteria decision-making (MCDM) methods. Various decision-making methods are necessary because their efficiencies with respect to the physical phenomena of the evaluated processes are unknown. The significance of the methods was evaluated using objective methods based on the structure of the presented data. Some of these methods were proposed by the authors of this paper. Implementation of MCDM in diagnostic data analysis and definition of the

  20. An Analysis of Methods Section of Research Reports in Applied Linguistics

    Directory of Open Access Journals (Sweden)

    Patrícia Marcuzzo

    2011-10-01

    Full Text Available This work aims at identifying the analytical categories and research procedures adopted in the analysis of research articles in Applied Linguistics/EAP in order to propose a systematization of the research procedures in Genre Analysis. For that purpose, 12 research reports and interviews with four authors were analyzed. The analysis showed that the studies concentrate on the investigation of either the macrostructure or the microstructure of research articles in different fields. Studies of the microstructure report exclusively the analysis of grammatical elements, while studies of the macrostructure investigate the language with the purpose of identifying patterns of organization in written discourse. If the objective of these studies is in fact to develop a genre analysis that contributes to the teaching of reading and writing in EAP, these studies should include an ethnographic perspective that analyzes the genre based on its context.

  1. Color changes in wood during heating: kinetic analysis by applying a time-temperature superposition method

    Science.gov (United States)

    Matsuo, Miyuki; Yokoyama, Misao; Umemura, Kenji; Gril, Joseph; Yano, Ken'ichiro; Kawai, Shuichi

    2010-04-01

    This paper deals with the kinetics of the color properties of hinoki (Chamaecyparis obtusa Endl.) wood. Specimens cut from the wood were heated at 90-180°C as an accelerated aging treatment. Because the specimens were completely dried and heated in the presence of oxygen, the effects of thermal oxidation on wood color change could be evaluated. Color properties measured by a spectrophotometer showed similar behavior irrespective of the treatment temperature, each on its own time scale. Kinetic analysis using the time-temperature superposition principle, which uses the whole data set, was successfully applied to the color changes. The calculated values of the apparent activation energy in terms of L*, a*, b*, and ΔE*ab were 117, 95, 114, and 113 kJ/mol, respectively, similar to values reported in the literature for other properties such as the physical and mechanical properties of wood.
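The Arrhenius step behind such a time-temperature superposition can be sketched numerically: horizontal shift factors obtained by superposing curves measured at several temperatures onto a reference curve are fitted against inverse temperature to extract an apparent activation energy. The shift-factor values below are hypothetical, chosen only to reproduce an activation energy of the order reported above:

```python
import numpy as np

# Hypothetical horizontal shift factors a_T from superposing color-change
# curves at several temperatures onto a 90 °C reference curve.
T_celsius = np.array([90.0, 120.0, 150.0, 180.0])
log_aT = np.array([0.0, 1.24, 2.30, 3.23])   # log10 shift factors (illustrative)

# Arrhenius: log10(a_T) = (Ea / (ln(10) * R)) * (1/T_ref - 1/T)
R = 8.314                          # gas constant, J/(mol K)
T = T_celsius + 273.15
x = 1.0 / T[0] - 1.0 / T           # 1/T_ref - 1/T
slope = np.polyfit(x, log_aT, 1)[0]
Ea = slope * np.log(10) * R        # apparent activation energy, J/mol
print(f"Ea ~ {Ea / 1000:.0f} kJ/mol")
```

The fitted slope converts directly to an activation energy; with the illustrative values above it lands near the 113-117 kJ/mol range quoted for the color coordinates.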

  2. Microbeam high-resolution diffraction and x-ray standing wave methods applied to semiconductor structures

    International Nuclear Information System (INIS)

    Kazimirov, A; Bilderback, D H; Huang, R; Sirenko, A; Ougazzaden, A

    2004-01-01

    A new approach to conditioning x-ray microbeams for high angular resolution x-ray diffraction and scattering techniques is introduced. We combined focusing optics (a one-bounce imaging capillary) and post-focusing collimating optics (a miniature Si(004) channel-cut crystal) to generate an x-ray microbeam with a size of 10 μm and an ultimate angular resolution of 14 μrad. The microbeam was used to analyse the strain in sub-micron-thick InGaAsP epitaxial layers grown on an InP(100) substrate by the selective area growth technique in narrow openings between the oxide stripes. For structures in which the diffraction peaks from the substrate and the film overlap, the x-ray standing wave technique was applied for precise measurements of the strain with a Δd/d resolution of better than 10^-4. (rapid communication)

  3. A comparison of several practical smoothing methods applied to Auger electron energy distributions and line scans

    International Nuclear Information System (INIS)

    Yu, K.S.; Prutton, M.; Larson, L.A.; Pate, B.B.; Poppa, H.

    1982-01-01

    Data-fitting routines utilizing nine-point least-squares quadratic, stiff spline, and piecewise least-squares polynomial methods have been compared on noisy Auger spectra and line scans. The spline-smoothing technique has been found to be the most useful and practical, allowing information to be extracted with excellent integrity from model Auger data having close to unity signal-to-noise ratios. Automatic determination of stiffness parameters is described. A comparison of the relative successes of these smoothing methods, using artificial data, is given. Applications of spline smoothing are presented to illustrate its effectiveness for difference spectra and for noisy Auger line scans. (orig.)
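Spline smoothing with an automatically chosen stiffness can be sketched as follows. This is not the authors' code: scipy's smoothing spline stands in for their stiff-spline routine, the model peak and noise level are invented, and the stiffness is set by the common heuristic s ≈ N·σ²:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
E = np.linspace(0.0, 10.0, 201)                      # energy axis (arbitrary units)
signal = np.exp(-0.5 * ((E - 5.0) / 0.8) ** 2)       # model Auger-like peak
noisy = signal + rng.normal(scale=0.5, size=E.size)  # close to unity signal-to-noise

# Automatic stiffness: target residual s ~ (number of points) * (noise variance).
s = E.size * 0.5 ** 2
spline = UnivariateSpline(E, noisy, s=s)
smoothed = spline(E)
rms_err = np.sqrt(np.mean((smoothed - signal) ** 2))  # error against the true signal
```

With the stiffness tied to the noise variance, the smoothed curve recovers the underlying peak with an error well below the noise level.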

  4. Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2018-01-01

    When performing a line scan using optical coherence tomography (OCT), the distance between the successive scan lines is often large compared to the resolution along each scan line. If two sets of such line scans are acquired orthogonal to each other, intensity values are known along the lines...... of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve...... scans, acquired such that the lines of the second scan are orthogonal to the first....
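The transfinite variant can be illustrated with a Coons patch on a unit square (a generic sketch, not the paper's implementation): values known along the four grid lines bounding a square are blended along both directions, with a bilinear corner correction to avoid double counting:

```python
import numpy as np

def transfinite(u, v, top, bottom, left, right, corners):
    """Coons-patch interpolation inside a unit square from values known
    on its four boundary edges (each a function of one parameter in [0, 1])."""
    f00, f10, f01, f11 = corners
    return ((1 - v) * bottom(u) + v * top(u)
            + (1 - u) * left(v) + u * right(v)
            - ((1 - u) * (1 - v) * f00 + u * (1 - v) * f10
               + (1 - u) * v * f01 + u * v * f11))

# Example: boundary data sampled from f(x, y) = x + 2*y; the Coons patch
# reproduces such fields exactly, since they are linear along each edge.
f = lambda x, y: x + 2 * y
val = transfinite(0.3, 0.7,
                  top=lambda u: f(u, 1), bottom=lambda u: f(u, 0),
                  left=lambda v: f(0, v), right=lambda v: f(1, v),
                  corners=(f(0, 0), f(1, 0), f(0, 1), f(1, 1)))
```

Evaluating the patch at regularly spaced (u, v) positions inside each grid square yields pixel intensities for display, which is the role the interpolation plays in the OCT setting above.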

  5. Properties of the Feynman-alpha method applied to accelerator-driven subcritical systems.

    Science.gov (United States)

    Taczanowski, S; Domanska, G; Kopec, M; Janczyszyn, J

    2005-01-01

    A Monte Carlo study of the Feynman-α method has been carried out with a simple code simulating the multiplication chain, confined to the pertinent time-dependent phenomena. The significance of its key parameters (detector efficiency and dead time, k-source and spallation neutron multiplicities, required number of fissions, etc.) is discussed. It is demonstrated that the method can be insensitive to the properties of the zones surrounding the core, whereas it is strongly affected by the detector dead time. In turn, the influence of harmonics in the neutron field and of the dispersion of spallation neutrons has proven much less pronounced.
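The quantity at the heart of the Feynman-α method is the excess of the variance-to-mean ratio of detector counts over the Poisson value of one. A minimal sketch with synthetic data (not the authors' simulation code):

```python
import numpy as np

# Counts registered in successive gates of fixed width T. For an uncorrelated
# (pure Poisson) source the variance-to-mean ratio is 1; correlated fission
# chains in a multiplying system push it above 1.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=10.0, size=100_000)   # synthetic uncorrelated source

Y = counts.var(ddof=1) / counts.mean() - 1.0   # Feynman Y: excess variance
# Here Y ~ 0; in a subcritical assembly, Y measured as a function of the gate
# width T is fitted to extract the prompt decay constant alpha.
```

In practice Y(T) is recorded for a range of gate widths and fitted to the standard saturation curve; dead time distorts the counts entering this ratio, which is why the abstract flags it as the dominant sensitivity.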

  6. Lagrange polynomial interpolation method applied in the calculation of the J(ξ,β) function

    Energy Technology Data Exchange (ETDEWEB)

    Fraga, Vinicius Munhoz; Palma, Daniel Artur Pinheiro [Centro Federal de Educacao Tecnologica de Quimica de Nilopolis, RJ (Brazil)]. E-mails: munhoz.vf@gmail.com; dpalma@cefeteq.br; Martinez, Aquilino Senra [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE) (COPPE). Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br

    2008-07-01

    The explicit dependence of the Doppler broadening function creates difficulties in obtaining an analytical expression for the J function. The objective of this paper is to present a method for the quick and accurate calculation of the J function, based on recent advances in the calculation of the Doppler broadening function and on a systematic analysis of its integrand. The proposed methodology, of a semi-analytical nature, uses the Lagrange polynomial interpolation method and the Frobenius formulation in the calculation of the Doppler broadening function. The results have proven satisfactory from the standpoint of accuracy and processing time. (author)

  7. Lagrange polynomial interpolation method applied in the calculation of the J(ξ,β) function

    International Nuclear Information System (INIS)

    Fraga, Vinicius Munhoz; Palma, Daniel Artur Pinheiro; Martinez, Aquilino Senra

    2008-01-01

    The explicit dependence of the Doppler broadening function creates difficulties in obtaining an analytical expression for the J function. The objective of this paper is to present a method for the quick and accurate calculation of the J function, based on recent advances in the calculation of the Doppler broadening function and on a systematic analysis of its integrand. The proposed methodology, of a semi-analytical nature, uses the Lagrange polynomial interpolation method and the Frobenius formulation in the calculation of the Doppler broadening function. The results have proven satisfactory from the standpoint of accuracy and processing time. (author)
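Generic Lagrange interpolation, the building block of the proposed semi-analytical scheme, can be sketched as follows (the tabulated function is an illustrative stand-in, not the actual Doppler broadening integrand):

```python
import numpy as np

def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the Lagrange interpolating polynomial through
    (x_nodes, y_nodes) at the point x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(x_nodes, y_nodes)):
        li = 1.0                       # i-th Lagrange basis polynomial at x
        for j, xj in enumerate(x_nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# Tabulate a smooth, broadening-kernel-like function at a few nodes, then
# interpolate between them instead of recomputing the function each time.
nodes = np.linspace(0.0, 2.0, 5)
values = np.exp(-nodes ** 2)           # illustrative stand-in kernel
approx = lagrange_eval(nodes, values, 0.75)
exact = np.exp(-0.75 ** 2)
```

With only five nodes, the degree-4 interpolant already tracks the smooth kernel to a few parts in a thousand, which is the trade-off (accuracy versus processing time) the abstract refers to.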

  8. A combined evidence Bayesian method for human ancestry inference applied to Afro-Colombians.

    Science.gov (United States)

    Rishishwar, Lavanya; Conley, Andrew B; Vidakovic, Brani; Jordan, I King

    2015-12-15

    Uniparental genetic markers, mitochondrial DNA (mtDNA) and Y chromosomal DNA, are widely used for the inference of human ancestry. However, the resolution of ancestral origins based on mtDNA haplotypes is limited by the fact that such haplotypes are often found to be distributed across wide geographical regions. We have addressed this issue here by combining two sources of ancestry information that have typically been considered separately: historical records regarding population origins and genetic information on mtDNA haplotypes. To combine these distinct data sources, we applied a Bayesian approach that considers historical records, in the form of prior probabilities, together with data on the geographical distribution of mtDNA haplotypes, formulated as likelihoods, to yield ancestry assignments from posterior probabilities. This combined evidence Bayesian approach to ancestry assignment was evaluated for its ability to accurately assign sub-continental African ancestral origins to Afro-Colombians based on their mtDNA haplotypes. We demonstrate that the incorporation of historical prior probabilities via this analytical framework can provide for substantially increased resolution in sub-continental African ancestry assignment for members of this population. In addition, a personalized approach to ancestry assignment that involves the tuning of priors to individual mtDNA haplotypes yields even greater resolution for individual ancestry assignment. Despite the fact that Colombia has a large population of Afro-descendants, the ancestry of this community has been understudied relative to populations with primarily European and Native American ancestry. Thus, the application of the kind of combined evidence approach developed here to the study of ancestry in the Afro-Colombian population has the potential to be impactful. The formal Bayesian analytical framework we propose for combining historical and genetic information also has the potential to be widely applied
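The core of such a combined-evidence assignment is one application of Bayes' rule per individual. A minimal sketch with hypothetical numbers (the regions, prior, and haplotype frequencies below are invented for illustration):

```python
import numpy as np

# Hypothetical example: three candidate source regions, a historical prior
# (e.g. from trade records), and the observed mtDNA haplotype's frequency
# in each region (the likelihood).
regions = ["West Africa", "West-Central Africa", "Southeast Africa"]
prior = np.array([0.50, 0.35, 0.15])        # P(region) from historical records
likelihood = np.array([0.02, 0.10, 0.01])   # P(haplotype | region)

posterior = prior * likelihood
posterior /= posterior.sum()                # Bayes' rule: normalize the products
for region, p in zip(regions, posterior):
    print(f"{region}: {p:.3f}")
```

Even a haplotype that is geographically widespread can be assigned with useful resolution once the historical prior reweights the regions, which is the gain in resolution the study reports.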

  9. Nanoemulsions prepared by a low-energy emulsification method applied to edible films

    Science.gov (United States)

    Catastrophic phase inversion (CPI) was used as a low-energy emulsification method to prepare oil-in-water (O/W) nanoemulsions in a lipid (Acetem)/water/nonionic surfactant (Tween 60) system. CPIs in which water-in-oil emulsions (W/O) are transformed into oil-in-water emulsions (O/W) were induced by ...

  10. 3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions

    Energy Technology Data Exchange (ETDEWEB)

    Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr [LATMOS/IPSL, UVSQ Université Paris-Saclay, UPMC Univ. Paris 06, CNRS, Guyancourt (France); Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr [LATMOS/IPSL, UVSQ Université Paris-Saclay, UPMC Univ. Paris 06, CNRS, Guyancourt (France); Leblanc, F. [LATMOS/IPSL, UPMC Univ. Paris 06 Sorbonne Universités, UVSQ, CNRS, Paris (France); Hess, S. [ONERA, Toulouse (France); Mancini, M. [LUTH, Observatoire Paris-Meudon (France)

    2016-03-15

    We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are treated kinetically whereas electrons are considered an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid and enter a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfven wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.

  11. The element-based finite volume method applied to petroleum reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Cordazzo, Jonas; Maliska, Clovis R.; Silva, Antonio F.C. da; Hurtado, Fernando S.V. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil). Dept. de Engenharia Mecanica

    2004-07-01

    In this work a numerical model for simulating petroleum reservoirs using the Element-based Finite Volume Method (EbFVM) is presented. The method employs unstructured grids of triangular and/or quadrilateral elements, so that complex reservoir geometries can be easily represented. Due to the control-volume approach, local mass conservation is enforced, permitting a direct physical interpretation of the resulting discrete equations. It is demonstrated that this method can deal with permeability maps without averaging procedures, since the scheme assumes uniform properties inside elements rather than inside control volumes, avoiding the need to weight the permeability values at the control-volume interfaces. Moreover, it is easy to include the full permeability tensor in this method, which is an important issue in simulating heterogeneous and anisotropic reservoirs. Finally, the results obtained with the scheme proposed in this work in the EbFVM framework are compared with those obtained employing the scheme commonly used in petroleum reservoir simulation. It is also shown that the proposed scheme is less susceptible to the grid orientation effect as the mobility ratio increases. (author)

  12. Interactively Applying the Variational Method to the Dihydrogen Molecule: Exploring Bonding and Antibonding

    Science.gov (United States)

    Cruzeiro, Vinícius Wilian D.; Roitberg, Adrian; Polfer, Nicolas C.

    2016-01-01

    In this work we present how an interactive platform can be used as a powerful tool to allow students to better explore a foundational problem in quantum chemistry: the application of the variational method to the dihydrogen molecule using simple Gaussian trial functions. The theoretical approach for the hydrogen atom is quite…
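The flavor of the exercise can be conveyed with the simplest analogous case, the hydrogen atom with a single Gaussian trial function, where the variational energy has a closed form (a sketch in atomic units, not the platform's H2 treatment):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Trial function psi = exp(-alpha * r^2) for the hydrogen atom gives the
# closed-form variational energy E(alpha) = 3*alpha/2 - 2*sqrt(2*alpha/pi)
# (kinetic term plus nuclear attraction, in hartree).
E = lambda alpha: 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

res = minimize_scalar(E, bounds=(1e-3, 5.0), method="bounded")
# Variational principle: the optimum E(alpha*) stays above the exact ground
# state energy of -0.5 hartree. Analytically alpha* = 8/(9*pi), E* = -4/(3*pi).
```

A single Gaussian recovers only about 85% of the exact binding energy, which motivates the richer trial functions (and the bonding/antibonding combinations) the platform lets students explore for H2.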

  13. Hybrid finite element method for describing the electrical response of biological cells to applied fields.

    Science.gov (United States)

    Ying, Wenjun; Henriquez, Craig S

    2007-04-01

    A novel hybrid finite element method (FEM) for modeling the response of passive and active biological membranes to external stimuli is presented. The method is based on the differential equations that describe the conservation of electric flux and membrane currents. By introducing the electric flux through the cell membrane as an additional variable, the algorithm decouples the linear partial differential equation part from the nonlinear ordinary differential equation part that defines the membrane dynamics of interest. This conveniently results in two subproblems: a linear interface problem and a nonlinear initial value problem. The linear interface problem is solved with a hybrid FEM. The initial value problem is integrated by a standard ordinary differential equation solver such as the Euler and Runge-Kutta methods. During time integration, these two subproblems are solved alternatively. The algorithm can be used to model the interaction of stimuli with multiple cells of almost arbitrary geometries and complex ion-channel gating at the plasma membrane. Numerical experiments are presented demonstrating the uses of the method for modeling field stimulation and action potential propagation.

  14. The simulation of skin temperature distributions by means of a relaxation method (applied to IR thermography)

    NARCIS (Netherlands)

    Vermey, G.F.

    1975-01-01

    To solve the differential equation for the heat in a two-layer, rectangular piece of skin tissue, a relaxation method, based on a finite difference technique, is used. The temperature distributions on the skin surface are calculated. The results are used to derive a criterion for the resolution for
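A relaxation method of this kind can be sketched with a Jacobi iteration on a finite-difference grid. This is a generic single-layer sketch with hypothetical boundary temperatures, not the original two-layer tissue model:

```python
import numpy as np

# Steady-state temperature in a rectangular patch: Laplace's equation with
# fixed boundary temperatures, solved by Jacobi relaxation. Boundary values
# are illustrative (warm core side, cooler skin surface).
T = np.zeros((30, 30))
T[0, :] = 37.0              # core-side boundary
T[-1, :] = 31.0             # skin-surface boundary
T[:, 0] = T[:, -1] = 34.0   # lateral boundaries

for _ in range(2000):
    # Each interior point is replaced by the average of its four neighbors;
    # the right-hand side is evaluated before assignment (Jacobi sweep).
    T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1]
                            + T[1:-1, :-2] + T[1:-1, 2:])
```

After enough sweeps the interior settles to a smooth distribution between the boundary values; the two-layer case adds layer-dependent conductivities to the same update.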

  15. Applying the Repertory Grid Method for Technology Forecasting: Civil Unmanned Aviation Systems for Germany

    Directory of Open Access Journals (Sweden)

    Eimecke Jörgen

    2017-09-01

    Full Text Available Multistage expert surveys like the Delphi method are proven concepts for technology forecasting that enable the prediction of content-related and temporal developments in fields of innovation (e.g., [1, 2]). Advantages of these qualitative multistage methods are a simple and easy-to-understand concept that still delivers valid results [3]. Nevertheless, the literature also points out certain disadvantages, especially in large-scale technology forecasts in particularly abstract fields of innovation [4]. The proposed approach highlights the usefulness of the repertory grid method as an alternative for technology forecasting and as a first step toward preference measurement. The basic approach of Baier and Kohler [5] is modified insofar as an online survey reduces the cognitive burden on the experts and simplifies the data collection process. Advantages over alternative approaches, through its simple structure and through combining qualitative and quantitative methods, are shown, and an adaptation to a current field of innovation, civil drones in Germany, is carried out. The establishment of a common terminology for all experts minimizes misunderstandings during the interview, and an inter-individually comparable level of abstraction is achieved through the laddering technique [6] during the interview.

  16. The Feasibility of Applying PBL Teaching Method to Surgery Teaching of Chinese Medicine

    Science.gov (United States)

    Tang, Qianli; Yu, Yuan; Jiang, Qiuyan; Zhang, Li; Wang, Qingjian; Huang, Mingwei

    2008-01-01

    The traditional classroom teaching mode is based on the content of the subject, takes the teacher as the center, and gives priority to classroom instruction. The PBL (Problem-Based Learning) teaching method, by contrast, breaks with the traditional mode, combining basic science with clinical practice and covering the process from discussion to self-study to…

  17. Applying Critical Race Theory to Group Model Building Methods to Address Community Violence.

    Science.gov (United States)

    Frerichs, Leah; Lich, Kristen Hassmiller; Funchess, Melanie; Burrell, Marcus; Cerulli, Catherine; Bedell, Precious; White, Ann Marie

    2016-01-01

    Group model building (GMB) is an approach to building qualitative and quantitative models with stakeholders to learn about the interrelationships among multilevel factors causing complex public health problems over time. Scant literature exists on adapting this method to address public health issues that involve racial dynamics. This study's objectives are to (1) introduce GMB methods, (2) present a framework for adapting GMB to enhance cultural responsiveness, and (3) describe outcomes of adapting GMB to incorporate differences in racial socialization during a community project seeking to understand key determinants of community violence transmission. An academic-community partnership planned a 1-day session with diverse stakeholders to explore the issue of violence using GMB. We documented key questions inspired by critical race theory (CRT) and adaptations to established GMB "scripts" (i.e., published facilitation instructions). The theory's emphasis on experiential knowledge led to a narrative-based facilitation guide from which participants created causal loop diagrams. These early diagrams depict how violence is transmitted and how communities respond, based on participants' lived experiences and mental models of causation that grew to include factors associated with race. Participants found these methods useful for advancing difficult discussion. The resulting diagrams can be tested and expanded in future research, and will form the foundation for collaborative identification of solutions to build community resilience. GMB is a promising strategy that community partnerships should consider when addressing complex health issues; our experience adapting methods based on CRT is promising in its acceptability and early system insights.

  18. APPLYING ROBUST RANKING METHOD IN TWO PHASE FUZZY OPTIMIZATION LINEAR PROGRAMMING PROBLEMS (FOLPP)

    Directory of Open Access Journals (Sweden)

    Monalisha Pattnaik

    2014-12-01

    Full Text Available Background: This paper explores solutions to fuzzy optimization linear programming problems (FOLPP) in which some parameters are fuzzy numbers. In practice, there are many problems in which all decision parameters are fuzzy numbers, and such problems are usually solved by either probabilistic programming or multi-objective programming methods. Methods: In this paper, using the concept of comparison of fuzzy numbers, a very effective method is introduced for solving these problems. This paper extends a linear programming based problem into the fuzzy environment. Under the problem assumptions, the optimal solution can still be obtained with the two-phase simplex method in the fuzzy environment. The fuzzy decision variables are initially generated and then solved and improved sequentially using the fuzzy decision approach by introducing the robust ranking technique. Results and conclusions: The model is illustrated with an application, and a post-optimal analysis approach is presented. The proposed procedure was programmed in MATLAB (R2009a) to plot the four-dimensional slice diagram for the application. Finally, a numerical example is presented to illustrate the effectiveness of the theoretical results and to gain additional managerial insights.
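The robust ranking index used to defuzzify the coefficients has a simple closed form for triangular fuzzy numbers; a sketch with illustrative coefficients, not the paper's example:

```python
# Robust ranking index of a fuzzy number: the integral over alpha in [0, 1]
# of 0.5 * (lower alpha-cut + upper alpha-cut). For a triangular fuzzy
# number (a, b, c) this reduces to (a + 2*b + c) / 4.
def robust_rank(a, b, c):
    return (a + 2 * b + c) / 4.0

# Defuzzify fuzzy cost coefficients before running the two-phase simplex
# (illustrative triangular coefficients):
c1 = robust_rank(2.0, 3.0, 4.0)
c2 = robust_rank(1.0, 2.0, 5.0)
```

Once every fuzzy coefficient is replaced by its ranking index, the problem becomes a crisp linear program that the standard two-phase simplex method can solve.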

  19. Using Module Analysis for Multiple Choice Responses: A New Method Applied to Force Concept Inventory Data

    Science.gov (United States)

    Brewe, Eric; Bruun, Jesper; Bearden, Ian G.

    2016-01-01

    We describe "Module Analysis for Multiple Choice Responses" (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual…

  20. Time-dependent geminal method applied to laser-driven beryllium

    Science.gov (United States)

    Lötstedt, Erik; Kato, Tsuyoshi; Yamanouchi, Kaoru

    2018-01-01

    We introduce the time-dependent geminal method, in which the total wave function is written as an antisymmetrized product of time-dependent geminals. A geminal is a two-electron orbital depending on the coordinates of two electrons, and each geminal is expanded as a sum of products of time-dependent one-electron orbitals. The equation of motion for the geminal coefficients similar to the time-dependent Hartree-Fock equation is derived. The evaluation of the largest eigenvalues of the second-order reduced density matrix is proposed as a way to measure the extent of the intergeminal correlation in a time-dependent wave function. Using the time-dependent geminal method, we simulate the evolution of the time-dependent wave function of a beryllium atom exposed to an intense laser pulse at two different wavelengths, 400 and 10 nm. The results are compared to those obtained by the time-dependent Hartree-Fock method and by the multiconfiguration time-dependent Hartree-Fock method.

  1. Element diameter free stability parameters for stabilized methods applied to fluids

    International Nuclear Information System (INIS)

    Franca, L.P.; Madureira, A.L.

    1992-08-01

    Stability parameters for stabilized methods in fluids are suggested. The computation of the largest eigenvalue of a generalized eigenvalue problem replaces controversial definitions of element diameters and inverse estimate constants, used heretofore to compute these stability parameters. The design is employed in the advective-diffusive model, incompressible Navier-Stokes equations and the Stokes problem. (author)
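Computing a stability parameter from the largest eigenvalue of a generalized eigenvalue problem can be sketched on a toy matrix pair (illustrative matrices, not an actual element-level assembly):

```python
import numpy as np
from scipy.linalg import eigh

# Toy generalized eigenvalue problem A x = lambda B x, with B symmetric
# positive definite, standing in for the element-level problem whose
# largest eigenvalue replaces element diameters and inverse-estimate
# constants in the stability parameter design.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

eigenvalues = eigh(A, B, eigvals_only=True)  # ascending order
lam_max = eigenvalues[-1]                    # feeds the stability parameter
```

The appeal of this design is that lam_max is computed from the discrete operators themselves, so no mesh-dependent constant has to be estimated by hand.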

  2. Sustainability Assessment of Power Generation Systems by Applying Exergy Analysis and LCA Methods

    NARCIS (Netherlands)

    Stougie, L.; van der Kooi, H.J.; Valero Delgado, A.

    2015-01-01

    The selection of power generation systems is important when striving for a more sustainable society. However, the results of environmental, economic and social sustainability assessments are subject to new insights into the calculation methods and to changing needs, economic conditions and societal

  3. Morphological and chemical changes of dentin after applying different sterilization methods

    Directory of Open Access Journals (Sweden)

    Cláudio Antonio Talge Carvalho

    Full Text Available Aim: The present study evaluated the morphological and chemical changes of dentin produced by different sterilization methods, using scanning electron microscopy (SEM) and energy-dispersive X-ray spectrometry (EDS) analysis. Material and method: Five human teeth were sectioned into 4 samples, each divided into 3 specimens. The specimens were separated into sterilization groups as follows: wet heat under pressure; cobalt-60 gamma radiation; and control (without sterilization). After sterilization, the 60 specimens were analyzed by SEM under 3 magnifications: 1500X, 5000X, and 10000X. The images were analyzed by 3 calibrated examiners, who assigned scores according to the changes observed in the dentinal tubules: 0 = no morphological change; 1, 2 and 3 = slight, medium and complete obliteration of the dentinal tubules. The chemical composition of dentin was assessed by EDS, with 15 kV incidence and 1 μm penetration. Result: The data obtained were submitted to the Kruskal-Wallis and ANOVA statistical tests. It was observed that both sterilization methods, autoclave and cobalt-60 gamma radiation, produced no significant changes in the morphology of the dentinal tubules or in the chemical composition of dentin. Conclusion: Both methods may thus be used to sterilize teeth for in vitro research.

  4. Transfer matrix method applied to the parallel assembly of sound absorbing materials.

    Science.gov (United States)

    Verdière, Kévin; Panneton, Raymond; Elkoun, Saïd; Dupont, Thomas; Leclaire, Philippe

    2013-12-01

    The transfer matrix method (TMM) is used conventionally to predict the acoustic properties of laterally infinite homogeneous layers assembled in series to form a multilayer. In this work, a parallel assembly process of transfer matrices is used to model heterogeneous materials such as patchworks, acoustic mosaics, or a collection of acoustic elements in parallel. In this method, it is assumed that each parallel element can be modeled by a 2 × 2 transfer matrix, and no diffusion exists between elements. The resulting transfer matrix of the parallel assembly is also a 2 × 2 matrix that can be assembled in series with the classical TMM. The method is validated by comparison with finite element (FE) simulations and acoustical tube measurements on different parallel/series configurations at normal and oblique incidence. The comparisons are in terms of sound absorption coefficient and transmission loss on experimental and simulated data and published data, notably published data on a parallel array of resonators. From these comparisons, the limitations of the method are discussed. Finally, applications to three-dimensional geometries are studied, where the geometries are discretized as in a FE concept. Compared to FE simulations, the extended TMM yields similar results with a trivial computation time.
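The series part of the assembly is ordinary 2×2 matrix multiplication; a normal-incidence sketch for lossless fluid layers on a rigid backing (illustrative layer values; real porous layers would use complex equivalent-fluid wavenumbers and impedances):

```python
import numpy as np

def fluid_layer(thickness, k, Z):
    """2x2 normal-incidence transfer matrix of a fluid-equivalent layer:
    k = wavenumber in the layer, Z = characteristic impedance."""
    kd = k * thickness
    return np.array([[np.cos(kd), 1j * Z * np.sin(kd)],
                     [1j * np.sin(kd) / Z, np.cos(kd)]])

rho0, c0, f = 1.213, 342.2, 1000.0           # air properties, 1 kHz
k0, Z0 = 2 * np.pi * f / c0, rho0 * c0
# Series assembly: multiply the layer matrices front to back.
T = fluid_layer(0.02, k0, Z0) @ fluid_layer(0.01, 1.4 * k0, 2.0 * Z0)

# Rigid backing: surface impedance and normal-incidence absorption.
Zs = T[0, 0] / T[1, 0]
alpha = 1 - abs((Zs - Z0) / (Zs + Z0)) ** 2  # ~0 here: lossless layers absorb nothing
```

The parallel assembly described in the abstract builds one such 2×2 matrix per lateral element and combines them through their pressure-velocity relations (no diffusion between elements), yielding again a 2×2 matrix that can re-enter the series chain.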

  5. Estimating and interpreting latent variable interactions: A tutorial for applying the latent moderated structural equations method.

    Science.gov (United States)

    Maslowsky, Julie; Jager, Justin; Hemken, Douglas

    2015-01-01

    Latent variables are common in psychological research. Research questions involving the interaction of two variables are likewise quite common. Methods for estimating and interpreting interactions between latent variables within a structural equation modeling framework have recently become available. The latent moderated structural equations (LMS) method is one that is built into Mplus software. The potential utility of this method is limited by the fact that the models do not produce traditional model fit indices, standardized coefficients, or effect sizes for the latent interaction, which renders model fitting and interpretation of the latent variable interaction difficult. This article compiles state-of-the-science techniques for assessing LMS model fit, obtaining standardized coefficients, and determining the size of the latent interaction effect in order to create a tutorial for new users of LMS models. The recommended sequence of model estimation and interpretation is demonstrated via a substantive example and a Monte Carlo simulation. Finally, extensions of this method are discussed, such as estimating quadratic effects of latent factors and interactions between latent slope and intercept factors, which hold significant potential for testing and advancing developmental theories.

  6. Applying the decision moving window to risky choice: Comparison of eye-tracking and mousetracing methods

    Directory of Open Access Journals (Sweden)

    Ana M. Franco-Watkins

    2011-12-01

    Full Text Available Currently, a disparity exists between the process-level models decision researchers use to describe and predict decision behavior and the methods implemented and metrics collected to test these models. The current work seeks to remedy this disparity by combining the advantages of work in decision research (mouse-tracing paradigms with contingent information display and cognitive psychology (eye-tracking paradigms from reading and scene perception. In particular, we introduce a new decision moving-window paradigm that presents stimulus information contingent on eye fixations. We provide data from the first application of this method to risky decision making, and show how it compares to basic eye-tracking and mouse-tracing methods. We also enumerate the practical, theoretical, and analytic advantages this method offers above and beyond both mouse-tracing with occlusion and basic eye tracking of information without occlusion. We include the use of new metrics that offer more precision than those typically calculated on mouse-tracing data as well as those not possible or feasible within the mouse-tracing paradigm.

  7. Systems identification: a theoretical method applied to tracer kinetics in aquatic microcosms

    International Nuclear Information System (INIS)

    Halfon, E.; Georgia Univ., Athens

    1974-01-01

    A mathematical model of radionuclide kinetics in a laboratory microcosm was built and the transfer parameters estimated by multiple regression and system identification techniques. Insight into the functioning of the system was obtained from analysis of the model. Methods employed have allowed movements of radioisotopes not directly observable in the experimental systems to be distinguished. Results are generalized to whole ecosystems

  8. Applying Agile Design Sprint Methods in Action Design Research: Prototyping a Health and Wellbeing Platform

    NARCIS (Netherlands)

    Keijzer-Broers, W.J.W.; de Reuver, G.A.

    2016-01-01

    In Action Design Research projects, researchers often face severe constraints in terms of budget and time within the practical setting. Therefore, we argue that ADR researchers may adopt efficient methods to guide their design strategy. While agile and sprint oriented design approaches are becoming

  9. Developing digital technologies for university mathematics by applying participatory design methods

    DEFF Research Database (Denmark)

    Triantafyllou, Eva; Timcenko, Olga

    2013-01-01

    This paper presents our research efforts to develop digital technologies for undergraduate university mathematics. We employ participatory design methods in order to involve teachers and students in the design of such technologies. The results of the first round of our design are included...

  10. Model-based acoustic substitution source methods for assessing shielding measures applied to trains

    NARCIS (Netherlands)

    Geerlings, A.C.; Thompson, D.J.; Verheij, J.W.

    2001-01-01

    A promising means of reducing the rolling noise from trains is local shielding in the form of vehicle-mounted shrouds combined with low trackside barriers. This is much less visually intrusive than classic lineside noise barriers. Various experimental methods have been proposed that allow the

  11. Applying the chronicle workshop as a method for evaluating participatory interventions

    DEFF Research Database (Denmark)

    Poulsen, Signe; Ipsen, Christine; Gish, Liv

    2015-01-01

    in intervention studies. The method was tested in three small and medium-sized companies. Four to six employees participated in each chronicle workshop, which was the last activity of the participatory preventive intervention program PoWRS. The program aims at creating changes which have a positive effect on both...

  12. Applying Cognitive Behavioural Methods to Retrain Children's Attributions for Success and Failure in Learning

    Science.gov (United States)

    Toland, John; Boyle, Christopher

    2008-01-01

    This study involves the use of methods derived from cognitive behavioral therapy (CBT) to change the attributions for success and failure of school children with regard to learning. Children with learning difficulties and/or motivational and self-esteem difficulties (n = 29) were identified by their schools. The children then took part in twelve…

  13. A Method for Robust Strategic Railway Dispatch Applied to a Single Track Line

    DEFF Research Database (Denmark)

    Harrod, Steven

    2013-01-01

    A method is presented for global optimization of a dispatch plan assuming perfect information over a given time horizon. An example problem is solved for the North American case of a single dominant high-speed train sharing a network with a majority flow of slower trains. Initial dispatch priority...

  14. Automatic and efficient methods applied to the binarization of a subway map

    Science.gov (United States)

    Durand, Philippe; Ghorbanzadeh, Dariush; Jaupi, Luan

    2015-12-01

    The purpose of this paper is the study of efficient methods for image binarization, applied to metro maps. The goal is to binarize the maps while preventing noise from disturbing the reading of the subway stations. Several methods were tested; among them, the method given by Otsu yields particularly interesting results. The difficulty of binarization lies in choosing the threshold so that the reconstructed image stays as close as possible to reality. Vectorization is a step subsequent to binarization: the coordinates of the points carrying information are retrieved and stored in two matrices X and Y. These matrices can then be exported to a 'CSV' (Comma Separated Value) file, which can be processed in a variety of software, including Excel. The algorithm requires considerable computation time in Matlab because it consists of two nested "for" loops, which Matlab handles poorly, especially when nested within each other; this penalizes the computation time, but it seems the only way to perform this step.
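
    Otsu's method, which the record singles out, picks the threshold that maximizes the between-class variance of the gray-level histogram. A minimal NumPy sketch on synthetic bimodal data (not the paper's metro maps, which it processes in Matlab):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu's method).
    `gray` is an array of integer pixel values in [0, 255]."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # gray-level probabilities
    omega = np.cumsum(p)                   # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))     # first moment up to level t
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Bimodal test data: dark background (~30-50) and bright lines (~190-210).
rng = np.random.default_rng(0)
img = np.concatenate([rng.integers(30, 51, 5000), rng.integers(190, 211, 5000)])
t = otsu_threshold(img)
binary = img > t
```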

  15. A Guide on Spectral Methods Applied to Discrete Data in One Dimension

    Directory of Open Access Journals (Sweden)

    Martin Seilmayer

    2017-01-01

    Full Text Available This paper provides an overview of the usage of the Fourier transform and its related methods and focuses on the subtleties to which users must pay attention. Typical questions that are often addressed to the data will be discussed, such as the origin of frequency or band limitation of the signal, or the sources of artifacts when a Fourier transform is carried out. Another topic is the processing of fragmented data; here, the Lomb-Scargle method is explained with an illustrative example for dealing with this special type of signal. Furthermore, time-dependent spectral analysis, with which one can evaluate the point in time at which a certain frequency appears in the signal, is of interest. The goal of this paper is to collect the important information about the common methods and give the reader a guide on how to apply them to one-dimensional data. The introduced methods are supported by the spectral package, which has been published for the statistical environment R prior to this article.
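
    The Lomb-Scargle periodogram mentioned in this record handles irregularly sampled or fragmented data for which a plain FFT is not applicable. A minimal illustration using SciPy's implementation (the paper itself works in R with the spectral package; the signal here is hypothetical):

```python
import numpy as np
from scipy.signal import lombscargle

# Unevenly sampled signal: a 1.5 Hz sine observed at irregular times.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 20.0, 300))      # irregular sample times [s]
y = np.sin(2 * np.pi * 1.5 * t)

freqs = np.linspace(0.1, 5.0, 1000)           # trial frequencies [Hz]
pgram = lombscargle(t, y, 2 * np.pi * freqs)  # lombscargle expects rad/s

f_peak = freqs[np.argmax(pgram)]              # recovered dominant frequency
```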

  16. Energy Moment Method Applied to Nuclear Quadrupole Splitting of Nuclear Magnetic Resonance Lines

    DEFF Research Database (Denmark)

    Frank, V

    1962-01-01

    Expressions giving the sum of the energy values, raised to the second and third power, for a nucleus interacting with a static magnetic field and a static electric field gradient are derived. Several applications of this method for obtaining the values of the components of the electric field...

  17. Applying the expansion method in hierarchical functions to the solution of Navier-Stokes equations for incompressible fluids

    International Nuclear Information System (INIS)

    Sabundjian, Gaiane

    1999-01-01

    This work presents a novel numerical method, based on the finite element method, for the solution of the two-dimensional Navier-Stokes equations for incompressible fluids in laminar flow. The method is based on the expansion of the variables in almost hierarchical functions. The expansion functions used are based on Legendre polynomials, adjusted on the rectangular elements in such a way that corner, side and area functions are defined. The order of the expansion functions associated with the sides and with the area of the elements can be adjusted to the necessary or desired degree. This novel numerical method is denominated the Hierarchical Expansion Method. In order to validate the proposed method, three well-known two-dimensional problems from the literature are analyzed. The results show the method's capacity to supply precise results. From the results obtained in this thesis it is possible to conclude that the Hierarchical Expansion Method can be applied successfully to fluid dynamics problems involving incompressible fluids. (author)
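
    The corner/side/area shape functions the record describes can be illustrated in one dimension. Below is a common Legendre-based construction of hierarchical shape functions (linear end functions plus internal "bubble" modes from integrated Legendre polynomials); the thesis's exact definition and normalization are not given in the abstract, so this particular form is an assumption.

```python
import numpy as np
from numpy.polynomial import legendre as L

def hierarchical_shape(k, xi):
    """1-D hierarchical shape functions on [-1, 1].
    k = 0, 1 : linear 'corner' functions; k >= 2 : internal bubble modes
    built from P_k - P_{k-2}, which vanish at both element ends."""
    xi = np.asarray(xi, dtype=float)
    if k == 0:
        return 0.5 * (1.0 - xi)
    if k == 1:
        return 0.5 * (1.0 + xi)
    ck = np.zeros(k + 1); ck[k] = 1.0          # coefficients of P_k
    ckm2 = np.zeros(k + 1); ckm2[k - 2] = 1.0  # coefficients of P_{k-2}
    return (L.legval(xi, ck) - L.legval(xi, ckm2)) / np.sqrt(2.0 * (2.0 * k - 1.0))

xi = np.array([-1.0, 0.0, 1.0])
bubble = hierarchical_shape(4, xi)   # internal mode: zero at both element ends
```

    Raising the element order then just appends higher-k bubble modes without changing the functions already present, which is what makes the basis hierarchical.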

  18. Least-square NUFFT methods applied to 2-D and 3-D radially encoded MR image reconstruction.

    Science.gov (United States)

    Song, Jiayu; Liu, Yanhui; Gewalt, Sally L; Cofer, Gary; Johnson, G Allan; Liu, Qing Huo

    2009-04-01

    Radially encoded MRI has gained increasing attention due to its motion insensitivity and reduced artifacts. However, because its samples are collected nonuniformly in the k-space, multidimensional (especially 3-D) radially sampled MRI image reconstruction is challenging. The objective of this paper is to develop a reconstruction technique in high dimensions with on-the-fly kernel calculation. It implements general multidimensional nonuniform fast Fourier transform (NUFFT) algorithms and incorporates them into a k-space image reconstruction framework. The method is then applied to reconstruct from the radially encoded k-space data, although the method is applicable to any non-Cartesian patterns. Performance comparisons are made against the conventional Kaiser-Bessel (KB) gridding method for 2-D and 3-D radially encoded computer-simulated phantoms and physically scanned phantoms. The results show that the NUFFT reconstruction method has better accuracy-efficiency tradeoff than the KB gridding method when the kernel weights are calculated on the fly. It is found that for a particular conventional kernel function, using its corresponding deapodization function as a scaling factor in the NUFFT framework has the potential to improve accuracy. In particular, when a cosine scaling factor is used, the NUFFT method is faster than the KB gridding method since a closed-form solution is available and is less computationally expensive than the KB kernel (KB gridding requires computation of Bessel functions). The NUFFT method has been successfully applied to 2-D and 3-D in vivo studies on small animals.
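
    The sum that a NUFFT approximates can be written down directly. The sketch below evaluates the type-1 nonuniform DFT by brute force, O(N·M), and checks that it reduces to the ordinary FFT on uniform samples. This is purely a reference computation; it has none of the O(N log N) efficiency of NUFFT or KB gridding.

```python
import numpy as np

def nudft_type1(x, c, N):
    """Direct type-1 nonuniform DFT: f[k] = sum_j c[j] * exp(-i*k*x[j]),
    k = 0..N-1, for sample locations x in [0, 2*pi)."""
    k = np.arange(N)
    return np.exp(-1j * np.outer(k, x)) @ c

# Sanity check: on uniform samples x_j = 2*pi*j/N the nonuniform
# transform must reduce to the ordinary FFT.
N = 64
rng = np.random.default_rng(2)
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)
x_uniform = 2 * np.pi * np.arange(N) / N
f = nudft_type1(x_uniform, c, N)
```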

  19. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    Directory of Open Access Journals (Sweden)

    Nielsen Morten

    2009-07-01

    Full Text Available Abstract Background Estimation of the reliability of specific real value predictions is nontrivial and the efficacy of this is often questionable. It is important to know if you can trust a given prediction and therefore the best methods associate a prediction with a reliability score or index. For discrete qualitative predictions, the reliability is conventionally estimated as the difference between output scores of selected classes. Such an approach is not feasible for methods that predict a biological feature as a single real value rather than a classification. As a solution to this challenge, we have implemented a method that predicts the relative surface accessibility of an amino acid and simultaneously predicts the reliability for each prediction, in the form of a Z-score. Results An ensemble of artificial neural networks has been trained on a set of experimentally solved protein structures to predict the relative exposure of the amino acids. The method assigns a reliability score to each surface accessibility prediction as an inherent part of the training process. This is in contrast to the most commonly used procedures where reliabilities are obtained by post-processing the output. Conclusion The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best public available method, Real-SPINE. Both methods associate a reliability score with the individual predictions. However, our implementation of reliability scores in the form of a Z-score is shown to be the more informative measure for discriminating good predictions from bad ones in the entire range from completely buried to fully exposed amino acids. This is evident when comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. 
For this subset, values of 0

  20. Arctic deepwater development drilling design considerations

    Energy Technology Data Exchange (ETDEWEB)

    Kokkinis, Theodore; Brinkmann, Carl R.; Ding, John; Fenz, Daniel M. [ExxonMobil Upstream Research Company, Houston, Texas (United States)], email: ted.kokkinis@exxonmobil.com, email: carl.r.brinkmann@exxonmobil.com, email: john.ding@exxonmobil.com, email: daniel.m.fenz@exxonmobil.com

    2010-07-01

    A significant share of the world's oil and gas reserves lies north of the Arctic Circle, and a large part of it is located offshore in water depths over 100 meters. Accessing these deepwater areas presents major challenges: the environment is harsh, current methods are not viable, and year-round operations would be required to drill a large number of wells. The aim of this paper is to determine the design requirements for economic development of Arctic deepwater reservoirs and to highlight the new technologies needed to meet them. The paper shows that the overall system design should integrate a rapid disconnection capability and a caisson-shaped hull with an ice-breaking cone at the waterline. In addition, the disconnection, ice management and re-supply systems were found to be the key technical challenges, and the development of topsides drilling equipment and of a method for estimating ice loads were determined to be among the required technology developments.

  1. New methods to De-noise and Invert Lidar Observations Applied to HSRL and CATS Observations

    Science.gov (United States)

    Marais, W.; Holz, R.

    2017-12-01

    Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Our ultimate goal is to develop inversion algorithms for space-based lidar systems. Standard methods for solving the atmospheric lidar inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. We began by developing inversion algorithms for the UW-Madison High Spectral Resolution Lidar (HSRL) system, and we have since made progress on a denoising algorithm for Cloud-Aerosol Transport System (CATS) lidar data. In our talk we will describe novel methods that have been developed for solving lidar inverse problems with high-resolution, lower signal-to-noise ratio observations and that are effective in non-uniform scenes. The new methods are based on state-of-the-art signal processing tools that were originally developed for medical imaging and have been adapted for atmospheric lidar inverse problems. We will present inverted backscatter and extinction cross-section results of the new method, estimated from the UW-Madison HSRL observations, and we will juxtapose the results against the estimates obtained via the standard inversion method. We will also present denoising results of CATS observations from which the attenuated backscatter cross-section is obtained. We demonstrate the validity of the denoised CATS observations through simulations, and the validity of the HSRL observations is demonstrated through an uncertainty analysis using real data.

  2. Advances in Spectral Nodal Methods applied to SN Nuclear Reactor Global calculations in Cartesian Geometry

    International Nuclear Information System (INIS)

    Barros, R.C.; Filho, H.A.; Oliveira, F.B.S.; Silva, F.C. da

    2004-01-01

    Presented here are the advances in spectral nodal methods for discrete ordinates (SN) eigenvalue problems in Cartesian geometry. These coarse-mesh methods are based on three ingredients: (i) the use of the standard discretized spatial balance SN equations; (ii) the use of the non-standard spectral diamond (SD) auxiliary equations in the multiplying regions of the domain, e.g. fuel assemblies; and (iii) the use of the non-standard spectral Green's function (SGF) auxiliary equations in the non-multiplying regions of the domain, e.g., the reflector. In slab-geometry the hybrid SD-SGF method generates numerical results that are completely free of spatial truncation errors. In X,Y-geometry, we obtain a system of two 'slab-geometry' SN equations for the node-edge average angular fluxes by transverse-integrating the X,Y-geometry SN equations separately in the y- and then in the x-directions within an arbitrary node of the spatial grid set up on the domain. In this paper, we approximate the transverse leakage terms by constants. These are the only approximations considered in the SD-SGF-constant nodal method, as the source terms, that include scattering and eventually fission events, are treated exactly. Moreover, we describe in this paper the progress of the approximate SN albedo boundary conditions for substituting the non-multiplying regions around the nuclear reactor core. We show numerical results to typical model problems to illustrate the accuracy of spectral nodal methods for coarse-mesh SN criticality calculations. (Author)
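
    For readers unfamiliar with discrete-ordinates sweeps, the sketch below solves a fixed-source slab problem with plain diamond-difference cells and source iteration. This is a deliberately simple stand-in, not the SD-SGF spectral nodal scheme of the paper (which, unlike diamond difference, is free of spatial truncation error in slab geometry); cross-sections and source are illustrative.

```python
import numpy as np

def slab_sn(sig_t, sig_s, src, width, ncells, n_ang=8, tol=1e-8):
    """Fixed-source discrete ordinates (SN) solve in a slab with vacuum
    boundaries, using diamond-difference cells and source iteration."""
    h = width / ncells
    mu, w = np.polynomial.legendre.leggauss(n_ang)  # quadrature; weights sum to 2
    phi = np.zeros(ncells)
    for _ in range(1000):
        q = 0.5 * (sig_s * phi + src)               # isotropic emission density
        phi_new = np.zeros(ncells)
        for m, wm in zip(mu, w):
            am = abs(m)
            psi_in = 0.0                            # vacuum boundary condition
            cells = range(ncells) if m > 0 else range(ncells - 1, -1, -1)
            for i in cells:
                psi_out = (q[i] + (am / h - 0.5 * sig_t) * psi_in) \
                          / (am / h + 0.5 * sig_t)
                phi_new[i] += wm * 0.5 * (psi_in + psi_out)  # diamond average
                psi_in = psi_out
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new
        phi = phi_new
    return phi

# One-mean-free-path-thick slab, scattering ratio c = 0.5, unit uniform source.
phi = slab_sn(sig_t=1.0, sig_s=0.5, src=1.0, width=1.0, ncells=50)
```

    The scalar flux peaks at mid-slab and is symmetric, as the symmetric geometry requires; the spectral nodal methods of the record reach the same converged answer on far coarser meshes.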

  3. Applying a phenomenological method of analysis derived from Giorgi to a psychiatric nursing study.

    Science.gov (United States)

    Koivisto, Kaisa; Janhonen, Sirpa; Väisänen, Leena

    2002-08-01

    The experience of mental ill health is fundamentally disempowering. The processes of psychiatric hospital care and treatment may also add to the personal feeling of disempowerment. This disempowerment is partly due to the failure of others to afford a proper hearing to the person's story of his/her experiences and problems in life. Hence, there is a need to investigate patients' experiences of being mentally ill with psychosis and being helped in a psychiatric hospital. This paper describes the application of a phenomenological method of analysis derived from Amedeo Giorgi to an investigation of psychiatric patients' experiences of being mentally ill with psychosis and being helped in a psychiatric hospital ward in Northern Finland. This phenomenological study was conducted with nine voluntary adult patients recovering from psychosis. In 1998, patients were interviewed regarding their experiences of psychosis and being helped. The verbatim transcripts of these interviews were analysed using Giorgi's phenomenological method. Giorgi's method of analysis aims to uncover the meaning of a phenomenon as experienced by a human through the identification of essential themes. Patients' experiences of psychosis and being helped were clustered into a specific description of situated structure and a general description of situated structure. The Giorgian method of phenomenological analysis was a clear-cut process, which gave a structure to the analyses and justified the decisions made while analysing the data. A phenomenological study of this kind encourages psychiatric nurses to focus on patients' experiences. Phenomenological study and Giorgi's method of analysis are applicable for investigating psychiatric patients' experiences and give new knowledge of the experiences of patients and new views of how to meet patients' needs.

  4. The fitness for purpose of analytical methods applied to fluorimetric uranium determination in water matrix

    International Nuclear Information System (INIS)

    Grinman, Ana; Giustina, Daniel; Mondini, Julia; Diodat, Jorge

    2008-01-01

    Full text: This paper describes the steps a laboratory should follow in order to validate the fluorimetric method for natural uranium in a water matrix. The validation of an analytical method is a necessary requirement prior to the accreditation of a non-standardized method under the Standard ISO/IEC 17025. Different analytical techniques differ in the set of variables to be validated. Depending on the chemical process, measurement technique, matrix type, data fitting and measurement efficiency, a laboratory must set up experiments to verify the reliability of its data, through the application of several statistical tests and by participating in Quality Programs (QP) organized by reference laboratories such as the National Institute of Standards and Technology (NIST), the National Physical Laboratory (NPL), or the Environmental Measurements Laboratory (EML). However, participation in QP involves not only the international reference laboratories but also the national ones which are able to prove proficiency to the Argentinean Accreditation Board. The parameters that the ARN laboratory had to validate for the fluorimetric method, in accordance with the Eurachem guide and IUPAC definitions, are: Detection Limit, Quantification Limit, Precision, Intra-laboratory Precision, Reproducibility Limit, Repeatability Limit, Linear Range and Robustness. Assays for the above parameters were designed on the basis of statistical requirements, and a detailed data treatment is presented together with the respective tests in order to show the validated parameters. As a final conclusion, uranium determination by fluorimetry is a reliable method of direct measurement to meet radioprotection requirements in a water matrix, within its linear range, which is fixed every time a calibration is carried out at the beginning of the analysis.
    The detection limit (depending on blank standard deviation and slope) varies between 3 ug U and 5 ug U which yields minimum detectable concentrations (MDC) of
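
    The record ties the detection limit to the blank standard deviation and the calibration slope. A common convention, assumed here since the laboratory's exact formulation is not given, is LOD = 3·s_blank/slope and LOQ = 10·s_blank/slope; the readings and slope below are hypothetical.

```python
import numpy as np

def detection_limits(blank_readings, slope):
    """Detection and quantification limits from replicate blank readings
    and the calibration slope, using the common 3*sigma / 10*sigma rule."""
    s_blank = np.std(blank_readings, ddof=1)   # sample std of the blanks
    lod = 3.0 * s_blank / slope                # limit of detection
    loq = 10.0 * s_blank / slope               # limit of quantification
    return lod, loq

# Hypothetical fluorimeter blanks (arbitrary units) and a calibration slope
# in units of signal per microgram of uranium.
blanks = np.array([0.52, 0.48, 0.55, 0.50, 0.47, 0.53])
lod, loq = detection_limits(blanks, slope=0.02)
```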

  5. Applying Taguchi method for optimization of the synthesis condition of nano-porous alumina membrane by slip casting method

    Energy Technology Data Exchange (ETDEWEB)

    Barmala, Molood [Department of Chemical Engineering, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of); Moheb, Ahmad, E-mail: ahmad@cc.iut.ac.i [Department of Chemical Engineering, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of); Emadi, Rahmatollah [Department of Materials Engineering, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of)

    2009-10-19

    In this work, thin disc-type pure alumina membranes have been prepared by the slip casting technique. The colloidal stabilization of micro-sized alumina suspensions with different amounts of 1,2-dihydroxy-3,5-benzenedisulfonic acid disodium salt (Tiron) at various suspension concentrations was examined, and the suspension stability was characterized by measuring the sedimentation height. The ball milling time (used as a deflocculating process) necessary to prepare defect-free membranes was also investigated. A statistical experimental design method (the Taguchi method with an L9 orthogonal array design) was implemented to optimize the experimental conditions for the preparation of the Al2O3 nano-porous membrane. Sintering temperature, solid content and polyvinyl alcohol (PVA) content were recognized and selected as the important influencing parameters. Structural studies by means of isopropanol adsorption and scanning electron microscopy were also carried out on the membranes. According to the Taguchi analysis in this study, sintering temperature was the parameter most influencing the membrane porosity. Reasonable membrane characteristics were obtained at an optimum temperature of 1400 deg. C, 20% solid content and 20 cc of PVA solution per 100 g of alumina powder.
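
    A Taguchi L9 analysis of the kind described reduces to computing a signal-to-noise ratio per run and averaging it per factor level. The sketch below uses the first three columns of the standard L9(3^4) orthogonal array and hypothetical responses (a larger-is-better quality index, not the paper's measurements):

```python
import numpy as np

# First three columns of the standard L9(3^4) orthogonal array:
# levels (0, 1, 2) for sintering temperature, solid content and PVA content.
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])

# Hypothetical responses for the nine runs (purely illustrative numbers).
y = np.array([60., 68., 66., 70., 78., 70., 68., 70., 68.])

# Larger-the-better signal-to-noise ratio for each run.
sn = -10.0 * np.log10(1.0 / y**2)

# Main effect of each factor: mean S/N at each of its three levels.
effects = np.array([[sn[L9[:, f] == lev].mean() for lev in range(3)]
                    for f in range(3)])
best_levels = effects.argmax(axis=1)   # highest mean S/N per factor
```

    The factor with the largest spread between its level means is the most influential one, which is how the paper singles out sintering temperature.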

  6. Quasistatic field simulations based on finite elements and spectral methods applied to superconducting magnets

    International Nuclear Information System (INIS)

    Koch, Stephan

    2009-01-01

    This thesis is concerned with the numerical simulation of electromagnetic fields in the quasi-static approximation which is applicable in many practical cases. Main emphasis is put on higher-order finite element methods. Quasi-static applications can be found, e.g., in accelerator physics in terms of the design of magnets required for beam guidance, in power engineering as well as in high-voltage engineering. Especially during the first design and optimization phase of respective devices, numerical models offer a cheap alternative to the often costly assembly of prototypes. However, large differences in the magnitude of the material parameters and the geometric dimensions as well as in the time-scales of the electromagnetic phenomena involved lead to an unacceptably long simulation time or to an inadequately large memory requirement. Under certain circumstances, the simulation itself and, in turn, the desired design improvement becomes even impossible. In the context of this thesis, two strategies aiming at the extension of the range of application for numerical simulations based on the finite element method are pursued. The first strategy consists in parallelizing existing methods such that the computation can be distributed over several computers or cores of a processor. As a consequence, it becomes feasible to simulate a larger range of devices featuring more degrees of freedom in the numerical model than before. This is illustrated for the calculation of the electromagnetic fields, in particular of the eddy-current losses, inside a superconducting dipole magnet developed at the GSI Helmholtzzentrum fuer Schwerionenforschung as a part of the FAIR project. As the second strategy to improve the efficiency of numerical simulations, a hybrid discretization scheme exploiting certain geometrical symmetries is established. Using this method, a significant reduction of the numerical effort in terms of required degrees of freedom for a given accuracy is achieved. The

  7. The boundary element method applied to 3D magneto-electro-elastic dynamic problems

    Science.gov (United States)

    Igumnov, L. A.; Markov, I. P.; Kuznetsov, Iu A.

    2017-11-01

    Due to their coupling properties, magneto-electro-elastic materials possess a wide range of applications. They exhibit general anisotropic behaviour. Three-dimensional transient analyses of magneto-electro-elastic solids can hardly be found in the literature. A 3D direct boundary element formulation based on the weakly-singular boundary integral equations in the Laplace domain is presented in this work for solving dynamic linear magneto-electro-elastic problems. Integral expressions of the three-dimensional fundamental solutions are employed. Spatial discretization is based on a collocation method with mixed boundary elements. The convolution quadrature method is used as a numerical inverse Laplace transform scheme to obtain time domain solutions. Numerical examples are provided to illustrate the capability of the proposed approach to treat highly dynamic problems.

  8. An inter-observer Ki67 reproducibility study applying two different assessment methods

    DEFF Research Database (Denmark)

    Laenkholm, Anne-Vibeke; Grabau, Dorthe; Møller Talman, Maj-Lis

    2018-01-01

    INTRODUCTION: In 2011, the St. Gallen Consensus Conference introduced the use of pathology to define the intrinsic breast cancer subtypes by application of immunohistochemical (IHC) surrogate markers ER, PR, HER2 and Ki67 with a specified Ki67 cutoff (>14%) for luminal B-like definition. Reports...... concerning impaired reproducibility of Ki67 estimation and threshold inconsistency led to the initiation of this quality assurance study (2013-2015). The aim of the study was to investigate inter-observer variation for Ki67 estimation in malignant breast tumors by two different quantification methods.... 0.84 (95% CI: 0.80-0.87) by the count method. CONCLUSION: Although the study in general showed a moderate to good inter-observer agreement according to both the ICC and Light's kappa, major discrepancies were still identified, especially in the mid-range of observations. Consequently, for now Ki67...
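
    Inter-observer agreement of the kind reported here is often quantified with an intraclass correlation coefficient. The sketch below computes the two-way random, single-measure ICC(2,1); the study's exact ICC variant and its data are not given in the record, so both the formula choice and the scores are assumptions.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random, single-measure intraclass correlation ICC(2,1)
    for an (n targets x k raters) rating matrix, from ANOVA mean squares."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    m = x.mean()
    row = x.mean(axis=1, keepdims=True)       # per-target means
    col = x.mean(axis=0, keepdims=True)       # per-rater means
    msr = k * ((row - m) ** 2).sum() / (n - 1)            # between targets
    msc = n * ((col - m) ** 2).sum() / (k - 1)            # between raters
    mse = ((x - row - col + m) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Three observers scoring Ki67 (%) on six hypothetical tumors.
scores = np.array([[10, 12, 11],
                   [25, 28, 24],
                   [ 5,  6,  5],
                   [40, 38, 42],
                   [15, 18, 16],
                   [30, 29, 33]])
icc = icc_2_1(scores)
```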

  9. Thermal constitutive matrix applied to asynchronous electrical machine using the cell method

    Science.gov (United States)

    Domínguez, Pablo Ignacio González; Monzón-Verona, José Miguel; Rodríguez, Leopoldo Simón; Sánchez, Adrián de Pablo

    2018-03-01

    This work demonstrates the equivalence of two constitutive equations: one used in Fourier's law of heat conduction, the other in the electric conduction equation; both are formulated with the numerical Cell Method using the Finite Formulation (FF-CM). A 3-D pure heat conduction model is proposed, with steady-state temperatures and no internal heat sources. The obtained results are compared with an equivalent model developed using the Finite Element Method (FEM); the particular 2-D case was also studied. The errors produced are not significant, at less than 0.2%. The number of nodes equals the number of unknowns and equations to solve, and there is no significant gain in precision with increasing mesh density.

  10. Fuzzy Comprehensive Evaluation Method Applied in the Real Estate Investment Risks Research

    Science.gov (United States)

    ML(Zhang Minli), Zhang; Wp(Yang Wenpo), Yang

    Real estate investment is a high-risk, high-return economic activity, and the key to real estate analysis is the identification of the types of investment risk and the effective prevention of each of them. As the financial crisis sweeps the world, the real estate industry also faces enormous risks, and how to evaluate real estate investment risks effectively and correctly has become a concern of many scholars[1]. In this paper, real estate investment risks are summarized and analyzed, comparative analysis methods are discussed, and finally a fuzzy comprehensive evaluation method is presented which is not only scientifically sound in theory but also reliable in application, providing investors with an effective means of real estate investment risk assessment and with guidance on risk factors and forecasts.
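
    The core computation of fuzzy comprehensive evaluation is a weight vector composed with a membership matrix, followed by a maximum-membership decision. A minimal sketch with a weighted-average composition operator and hypothetical risk factors and numbers (the paper's factor set and weights are not given in the record):

```python
import numpy as np

# W holds the weights of four risk factors; R holds each factor's membership
# degrees over the evaluation grades (low, medium, high risk).
W = np.array([0.30, 0.25, 0.25, 0.20])          # factor weights, sum to 1
R = np.array([[0.2, 0.5, 0.3],                  # market risk
              [0.1, 0.6, 0.3],                  # policy risk
              [0.4, 0.4, 0.2],                  # financial risk
              [0.3, 0.5, 0.2]])                 # operational risk

B = W @ R                                       # composed evaluation vector
B = B / B.sum()                                 # normalize memberships
grade = ["low", "medium", "high"][int(np.argmax(B))]  # maximum-membership rule
```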

  11. Protective coating applied to nuclear facilities and testing methods of paints

    International Nuclear Information System (INIS)

    Fukuda, S.; Tsuchiya, Y.

    1974-01-01

    A large amount of paint is used for the coating of RI-handling facilities such as nuclear power stations. The primary purpose of the painting work is to obtain a high degree of anti-contamination and decontamination properties against radioactive substances. There are, of course, also the requirements of rust prevention and decoration, as in ordinary buildings. However, due consideration must be given to the durability under radiation and the resistance to high temperature and high pressure, assuming the occurrence of accidents. The following matters are described: the practice of painting in nuclear power stations; the specifications of the paintings for buildings, machinery and equipment; problems in painting work; the methods of choosing paints, including tests of anti-contamination and decontamination properties, the kinds of paints and the evaluation of their decontamination property; and problems in choosing methods. (Mori, K.)

  12. Sink strength simulations using the Monte Carlo method: Applied to spherical traps

    Science.gov (United States)

    Ahlgren, T.; Bukonte, L.

    2017-12-01

    The sink strength is an important parameter for the mean-field rate equations to simulate temporal changes in the micro-structure of materials. However, there are noteworthy discrepancies between sink strengths obtained by the Monte Carlo and analytical methods. In this study, we show the reasons for these differences. We present the equations to estimate the statistical error for sink strength calculations and show the way to determine the sink strengths for multiple traps. We develop a novel, very fast Monte Carlo method to obtain sink strengths. The results show that, in addition to the well-known sink strength dependence on the trap concentration, trap radius and the total sink strength, the sink strength also depends on the defect diffusion jump length and the total trap volume fraction. Taking these factors into account allows us to obtain a very accurate analytic expression for the sink strength of spherical traps.
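
    The kind of Monte Carlo sink-strength estimate the record discusses can be sketched as a random walk to absorption: the sink strength follows from the mean particle lifetime via k² = 1/(D·τ̄), and, as the authors note, the result depends on the finite jump length. The parameters below are illustrative, and this brute-force walk is not the fast algorithm of the paper.

```python
import numpy as np

def mc_sink_strength(R=1.0, a=10.0, lam=0.8, n_part=100, max_steps=20000, seed=3):
    """Monte Carlo sink strength of one spherical trap (radius R) at the
    origin of a periodic box of side a: particles random-walk with jump
    length lam until absorbed; k^2 = 1 / (D * mean lifetime), D = lam^2/6."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, a, size=(n_part, 3))
    steps = np.zeros(n_part)
    active = np.ones(n_part, dtype=bool)
    for step in range(1, max_steps + 1):
        n = int(active.sum())
        if n == 0:
            break
        v = rng.normal(size=(n, 3))                     # isotropic directions
        v *= lam / np.linalg.norm(v, axis=1, keepdims=True)
        pos[active] = (pos[active] + v) % a
        # minimum-image distance to the trap at the origin
        d = pos[active] - a * np.round(pos[active] / a)
        captured = np.linalg.norm(d, axis=1) < R
        idx = np.where(active)[0]
        steps[idx[captured]] = step
        active[idx[captured]] = False
    steps[active] = max_steps                           # censored walkers
    D = lam**2 / 6.0                                    # diffusivity (dt = 1)
    return 1.0 / (D * steps.mean())

k2 = mc_sink_strength()
k2_analytic = 4.0 * np.pi * 1.0 / 10.0**3               # dilute limit, 4*pi*R*C
```

    The gap between `k2` and `k2_analytic` shrinks as the jump length decreases, which is exactly the jump-length dependence the record highlights.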

  13. Method of preparing and applying single stranded DNA probes to double stranded target DNAs in situ

    Science.gov (United States)

    Gray, J.W.; Pinkel, D.

    1991-07-02

    A method is provided for producing single stranded non-self-complementary nucleic acid probes, and for treating target DNA for use therewith. The probe is constructed by treating DNA with a restriction enzyme and an exonuclease to form template/primers for a DNA polymerase. The digested strand is resynthesized in the presence of labeled nucleoside triphosphate precursor. Labeled single stranded fragments are separated from the resynthesized fragments to form the probe. Target DNA is treated with the same restriction enzyme used to construct the probe, and is treated with an exonuclease before application of the probe. The method significantly increases the efficiency and specificity of hybridization mixtures by increasing effective probe concentration by eliminating self-hybridization between both probe and target DNAs, and by reducing the amount of target DNA available for mismatched hybridizations.

  14. Applied methods of political research in the structure of public monitoring of the electoral process

    Directory of Open Access Journals (Sweden)

    D. Y. Arabadjyiev

    2015-05-01

    The following conclusions are drawn. Mass and expert surveys focus on diagnosing public and expert opinion on an issue, as determined by the characteristics of the monitoring field at a given stage, and on constructing forecasts of further changes in this field, of the election results and so on. Focus groups aim at involving members of the groups identified as research subjects, together with experts, in discussing issues related to monitoring activities, making recommendations for government and local authorities, preparing joint documents and resolutions, and developing a strategy for monitoring activities. Exit polls and quick counts are used on election day for a parallel vote count and for comparison with the official results. Case studies are a method to demonstrate the intensity of administrative resource usage and its types, forms, methods and technologies.

  15. Applying Hotspot Detection Methods in Forestry: A Case Study of Chestnut Oak Regeneration

    Directory of Open Access Journals (Sweden)

    Songlin Fei

    2010-01-01

Hotspot detection has been widely adopted in the health sciences for disease surveillance, but rarely in natural resource disciplines. In this paper, two spatial scan statistics (SaTScan and ClusterSeer) and a nonspatial classification and regression trees method were evaluated as techniques for identifying chestnut oak (Quercus montana) regeneration hotspots among 50 mixed-oak stands in the central Appalachian region of the eastern United States. Hotspots defined by the three methods had a moderate level of conformity and revealed similar chestnut oak regeneration site affinity. Chestnut oak regeneration hotspots were positively associated with the abundance of chestnut oak trees in the overstory and a moderate cover of heather species (Vaccinium and Gaylussacia spp.) but were negatively associated with the abundance of hayscented fern (Dennstaedtia punctilobula) and mountain laurel (Kalmia latifolia). In general, hotspot detection is a viable tool for assisting natural resource managers with identifying areas possessing significantly high or low tree regeneration.
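The spatial scan statistics the paper evaluates test circular windows against a baseline rate. A minimal Kulldorff-style Poisson circular scan can be sketched as follows (an illustrative sketch only; SaTScan adds Monte Carlo significance testing, flexible window shapes and covariate adjustment):

```python
import numpy as np

def poisson_llr(c_in, e_in, c_tot):
    """Kulldorff log-likelihood ratio for one candidate window (high-rate clusters only)."""
    if c_in <= e_in or c_in == 0:
        return 0.0
    c_out = c_tot - c_in
    e_out = c_tot - e_in
    llr = c_in * np.log(c_in / e_in)
    if c_out > 0:
        llr += c_out * np.log(c_out / e_out)
    return llr

def scan_hotspot(xy, cases, baseline, radii):
    """Exhaustive circular scan: centre a window of each radius on every data point
    and keep the window with the largest likelihood ratio."""
    c_tot, b_tot = cases.sum(), baseline.sum()
    best = (0.0, None, None)  # (llr, centre index, radius)
    for i, centre in enumerate(xy):
        d = np.linalg.norm(xy - centre, axis=1)
        for r in radii:
            inside = d <= r
            c_in = cases[inside].sum()
            e_in = c_tot * baseline[inside].sum() / b_tot  # expected under the null
            llr = poisson_llr(c_in, e_in, c_tot)
            if llr > best[0]:
                best = (llr, i, r)
    return best
```

On a grid of stands with a planted high-regeneration corner, the scan recovers a window centred inside that corner.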

  16. Applying Hotspot Detection Methods in Forestry: A Case Study of Chestnut Oak Regeneration

    International Nuclear Information System (INIS)

    Fei, S.

    2010-01-01

Hotspot detection has been widely adopted in health sciences for disease surveillance, but rarely in natural resource disciplines. In this paper, two spatial scan statistics (SaTScan and ClusterSeer) and a nonspatial classification and regression trees method were evaluated as techniques for identifying chestnut oak (Quercus montana) regeneration hotspots among 50 mixed-oak stands in the central Appalachian region of the eastern United States. Hotspots defined by the three methods had a moderate level of conformity and revealed similar chestnut oak regeneration site affinity. Chestnut oak regeneration hotspots were positively associated with the abundance of chestnut oak trees in the overstory and a moderate cover of heather species (Vaccinium and Gaylussacia spp.) but were negatively associated with the abundance of hayscented fern (Dennstaedtia punctilobula) and mountain laurel (Kalmia latifolia). In general, hotspot detection is a viable tool for assisting natural resource managers with identifying areas possessing significantly high or low tree regeneration.

  17. An automatic image-based modelling method applied to forensic infography.

    Directory of Open Access Journals (Sweden)

    Sandra Zancajo-Blazquez

This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative for modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives, as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention; and (iii) high-quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model.

  18. Indirect scaling methods applied to the identification and quantification of auditory attributes

    DEFF Research Database (Denmark)

    Wickelmaier, Florian

Auditory attributes, such as loudness, pitch, sharpness, or tonal prominence, reflect how human listeners perceive their acoustical environment. The identification of relevant auditory attributes and their quantification are therefore of major concern for different applications...... or the representation of the attributes are derived from modeling the listeners' judgments. The applicability of the developed methods was investigated in a series of experiments which aimed at identifying and quantifying auditory attributes of home-audio reproduction formats (mono, stereo, and multichannel formats...

  19. Applying a mixed-methods evaluation to Healthy Kids, Healthy Communities.

    Science.gov (United States)

    Brownson, Ross C; Kemner, Allison L; Brennan, Laura K

    2015-01-01

    From 2008 to 2014, the Healthy Kids, Healthy Communities (HKHC) national program funded 49 communities across the United States and Puerto Rico to implement healthy eating and active living policy, system, and environmental changes to support healthier communities for children and families, with special emphasis on reaching children at highest risk for obesity on the basis of race, ethnicity, income, or geographic location. Evaluators designed a mixed-methods evaluation to capture the complexity of the HKHC projects, understand implementation, and document perceived and actual impacts of these efforts. Eight complementary evaluation methods addressed 4 primary aims seeking to (1) coordinate data collection for the evaluation through the web-based project management system (HKHC Community Dashboard) and provide training and technical assistance for use of this system; (2) guide data collection and analysis through use of the Assessment and Evaluation Toolkit; (3) conduct a quantitative cross-site impact evaluation among a subset of community partnership sites; and (4) conduct a qualitative cross-site process and impact evaluation among all 49 community partnership sites. Evaluators identified successes and challenges in relation to the following methods: an online performance-monitoring HKHC Community Dashboard system, environmental audits, direct observations, individual and group interviews, partnership and community capacity surveys, group model building, photographs and videos, and secondary data sources (surveillance data and record review). Several themes emerged, including the value of systems approaches, the need for capacity building for evaluation, the value of focusing on upstream and downstream outcomes, and the importance of practical approaches for dissemination. The mixed-methods evaluation of HKHC advances evaluation science related to community-based efforts for addressing childhood obesity in complex community settings. 
The findings are likely to

  20. Preparation of Biological Samples Containing Metoprolol and Bisoprolol for Applying Methods for Quantitative Analysis

    OpenAIRE

    Corina Mahu Ştefania; Monica Hăncianu; Luminiţa Agoroaei; Anda Cristina Coman Băbuşanu; Elena Butnaru

    2015-01-01

    Arterial hypertension is a complex disease with many serious complications, representing a leading cause of mortality. Selective beta-blockers such as metoprolol and bisoprolol are frequently used in the management of hypertension. Numerous analytical methods have been developed for the determination of these substances in biological fluids, such as liquid chromatography coupled with mass spectrometry, gas chromatography coupled with mass spectrometry, high performance liquid chromatography. ...

  1. SELLING SHARES OF STATE OWNED ENTERPRISES – METHODS APPLIED IN PRIVATIZATION PROCESS AND OPEN ISSUES

    Directory of Open Access Journals (Sweden)

    Edita Čulinović Herc

    2015-01-01

Recent trends in the EU Member States, as well as in the Republic of Croatia, show an increase in the sale of shares of state owned enterprises. In this paper the authors investigate the permissible and available techniques for selling those shares. The dilemma is how to achieve the best selling price by choosing one of the prescribed methods without being in breach of the rules concerning state aid and the free movement of capital. Moreover, since many state owned enterprises play a vital role in the Croatian economy, the question arises how to protect national interests and at the same time stay in line with the principles of free movement of capital and the state aid rules. Croatian law provides seven methods of selling the shares. When selecting the method, in order to comply with state aid rules, it is necessary to take into account the Market Economy Vendor Principle as spelled out in the Guidance Paper adopted by the Commission. Although this Guidance Paper does not represent an official position of the Commission on the issue, its importance is highlighted in the Commission's decisions and in court practice in the European Union. As far as the principle of the free movement of capital is concerned, the question arises whether some permissible exceptions to it could serve to protect national interests in companies of strategic importance. The paper gives an overview of the recent acquisition of Alstom by General Electric, in which France protected its national strategic interests by issuing a special decree invoking the permissible exceptions to the principle of the free movement of capital. Although the case is still being monitored by the Commission, the authors explore the viability of that method in the Republic of Croatia.

  2. Applying multi-perspective spatial method to designing tree and shrub composition

    Directory of Open Access Journals (Sweden)

    Stoycheva M. S.

    2016-05-01

The process of designing a harmonious tree and shrub composition requires multi-perspective painting. Not all of a landscape designer's artistic conceptions can be expressed by placing the tallest trees in the center of a room. We suggest a few simple steps, such as using a spatial method of painting and the multi-perspective system, in order to design a good-looking composition from two perspectives at an angle of 180° relative to each other.

  3. Fast and accurate denoising method applied to very high resolution optical remote sensing images

    Science.gov (United States)

    Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon

    2017-10-01

Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSI) is critical and leads to the problem of removing instrumental noise while keeping the integrity of relevant information. Improving denoising in an image processing chain means increasing image quality and improving the performance of all subsequent tasks operated by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In a context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, with conservation of relevant information and saliency, and rapidity, due to the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non-Local Bayes (NLB), which we propose to adapt and optimize for VHR RSIs. This method is well suited for mass production thanks to its best trade-off between accuracy and computational complexity compared to other state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have similar noise distributions and thus can be denoised with the same noise estimation. In this paper, we describe the algorithm's operation and performance in detail, and analyze parameter sensitivities on various typical real areas observed in VHR RSIs.
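NLB itself estimates a Gaussian prior per group of similar patches and applies Bayesian shrinkage; the shared principle, denoising each pixel from pixels whose surrounding patches look alike, can be illustrated with the simpler non-local means filter (a sketch only, with hypothetical patch, search-window and filtering parameters):

```python
import numpy as np

def nl_means(img, patch=3, search=5, h=0.5):
    """Simplified non-local means: each output pixel is a weighted average of
    nearby pixels, weighted by the similarity of their surrounding patches."""
    p, s = patch // 2, search // 2
    pad = p + s
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    cand = padded[ci + di - p:ci + di + p + 1,
                                  cj + dj - p:cj + dj + p + 1]
                    w = np.exp(-((ref - cand) ** 2).sum() / (h * h))
                    wsum += w
                    acc += w * padded[ci + di, cj + dj]
            out[i, j] = acc / wsum
    return out
```

On a noisy step image this reduces the error against the clean image while leaving the edge largely intact, which is the property that makes patch-based methods attractive for VHR imagery.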

  4. An Advanced Method to Apply Multiple Rainfall Thresholds for Urban Flood Warnings

    Directory of Open Access Journals (Sweden)

    Jiun-Huei Jang

    2015-11-01

Issuing warning information to the public when rainfall exceeds given thresholds is a simple and widely used method to minimize flood risk; however, this method lacks sophistication compared with hydrodynamic simulation. In this study, an advanced methodology is proposed to improve the warning effectiveness of the rainfall threshold method for urban areas through deterministic-stochastic modeling, without sacrificing simplicity and efficiency. With regard to flooding mechanisms, rainfall thresholds of different durations are divided into two groups, accounting for flooding caused by drainage overload and by disastrous runoff, which helps grade the warning level in terms of emergency and severity when the two are observed together. A flood warning is then classified into four levels distinguished by green, yellow, orange, and red lights, in ascending order of priority, indicating the required measures: standby, flood defense, evacuation and rescue, respectively. The proposed methodology is tested against 22 historical events of the last 10 years for 252 urbanized townships in Taiwan. The results show satisfactory accuracy in predicting the occurrence and timing of flooding, with a logical warning time series for taking progressive measures. For systems with multiple rainfall thresholds already in place, the methodology can be used to ensure better application of rainfall thresholds in urban flood warnings.
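The four-light grading can be sketched as a rule that escalates the warning when both duration groups exceed their thresholds together (the threshold values below are hypothetical placeholders, not those calibrated in the study):

```python
LEVELS = ["green", "yellow", "orange", "red"]

def warning_level(rain_short, rain_long,
                  short_thr=(20.0, 40.0, 60.0),    # mm, drainage-overload group
                  long_thr=(80.0, 130.0, 200.0)):  # mm, disastrous-runoff group
    """Count exceeded thresholds in each duration group, take the worse grade,
    and escalate one step when both groups are active at the same time."""
    s = sum(rain_short > t for t in short_thr)
    l = sum(rain_long > t for t in long_thr)
    level = max(s, l)
    if s > 0 and l > 0:
        level = min(level + 1, 3)
    return LEVELS[level]
```

Each returned colour maps to a measure in ascending priority: standby, flood defense, evacuation, rescue.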

  5. Generalized ensemble method applied to study systems with strong first order transitions

    Science.gov (United States)

    Małolepsza, E.; Kim, J.; Keyes, T.

    2015-09-01

    At strong first-order phase transitions, the entropy versus energy or, at constant pressure, enthalpy, exhibits convex behavior, and the statistical temperature curve correspondingly exhibits an S-loop or back-bending. In the canonical and isothermal-isobaric ensembles, with temperature as the control variable, the probability density functions become bimodal with peaks localized outside of the S-loop region. Inside, states are unstable, and as a result simulation of equilibrium phase coexistence becomes impossible. To overcome this problem, a method was proposed by Kim, Keyes and Straub [1], where optimally designed generalized ensemble sampling was combined with replica exchange, and denoted generalized replica exchange method (gREM). This new technique uses parametrized effective sampling weights that lead to a unimodal energy distribution, transforming unstable states into stable ones. In the present study, the gREM, originally developed as a Monte Carlo algorithm, was implemented to work with molecular dynamics in an isobaric ensemble and coded into LAMMPS, a highly optimized open source molecular simulation package. The method is illustrated in a study of the very strong solid/liquid transition in water.
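The replica-exchange backbone that gREM builds on can be sketched with ordinary canonical ensembles on a toy double-well energy (a sketch under stated assumptions: gREM replaces the fixed inverse temperatures below with parametrized effective sampling weights chosen to unfold the S-loop, and the gREM implementation in the paper uses molecular dynamics rather than this Metropolis Monte Carlo):

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    """Toy double-well energy standing in for a first-order transition."""
    return (x * x - 1.0) ** 2

def mc_sweep(x, beta, step=0.5, n_steps=50):
    """Metropolis sweep at fixed inverse temperature beta."""
    for _ in range(n_steps):
        xn = x + rng.normal(0.0, step)
        d_e = energy(xn) - energy(x)
        if d_e <= 0.0 or rng.random() < np.exp(-beta * d_e):
            x = xn
    return x

def replica_exchange(betas, n_rounds=100):
    """Sweep every replica, then attempt neighbour swaps with the standard
    acceptance min(1, exp((beta_i - beta_j) * (E_i - E_j)))."""
    xs = [0.0 for _ in betas]
    for _ in range(n_rounds):
        xs = [mc_sweep(x, b) for x, b in zip(xs, betas)]
        for i in range(len(betas) - 1):
            d = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
            if rng.random() < np.exp(min(0.0, d)):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return np.array(xs)
```

Hot replicas cross the barrier between wells and hand low-energy configurations down to the cold replica through the swap moves.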

  6. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-10-01

Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process with the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals depending on the process noise. Improvement of LRPF forecasts over SIR is found particularly for rapidly varying high flows, due to the preservation of sample diversity by the kernel, even when particle impoverishment takes place.
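The resampling core shared by SIR and LRPF can be sketched as a plain SIR filter on a toy random-walk state (a minimal sketch with illustrative parameters; the paper's LRPF additionally lags the update over each process's response time and adds an MCMC regularization move, both omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_filter(obs, n_part=500, q=0.3, r=0.5):
    """Plain sequential importance resampling (SIR) on a random-walk state
    observed with Gaussian noise: propagate, weight by the likelihood,
    record the posterior mean, then resample."""
    parts = rng.normal(0.0, 1.0, n_part)
    means = []
    for y in obs:
        parts = parts + rng.normal(0.0, q, n_part)       # process noise
        w = np.exp(-0.5 * ((y - parts) / r) ** 2)        # observation likelihood
        w /= w.sum()
        means.append(float(np.dot(w, parts)))
        parts = parts[rng.choice(n_part, n_part, p=w)]   # resample (SIR)
    return np.array(means)
```

Repeated resampling from the same weights is exactly what causes the particle impoverishment the LRPF's kernel regularization is designed to counteract.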

  7. Preparation of Biological Samples Containing Metoprolol and Bisoprolol for Applying Methods for Quantitative Analysis

    Directory of Open Access Journals (Sweden)

    Corina Mahu Ştefania

    2015-12-01

Arterial hypertension is a complex disease with many serious complications, representing a leading cause of mortality. Selective beta-blockers such as metoprolol and bisoprolol are frequently used in the management of hypertension. Numerous analytical methods have been developed for the determination of these substances in biological fluids, such as liquid chromatography coupled with mass spectrometry, gas chromatography coupled with mass spectrometry, and high performance liquid chromatography. Due to the complex composition of biological fluids, sample pre-treatment is required before quantitative determination in order to remove proteins and potential interferences. The methods most commonly used for processing biological samples containing metoprolol and bisoprolol were identified through a thorough literature search using the PubMed, ScienceDirect, and Wiley Journals databases. Articles published between 2005 and 2015 were reviewed. Protein precipitation, liquid-liquid extraction and solid phase extraction are the main techniques for the extraction of these drugs from plasma, serum, whole blood and urine samples. In addition, numerous other techniques have been developed for the preparation of biological samples, such as dispersive liquid-liquid microextraction, carrier-mediated liquid phase microextraction, hollow fiber-protected liquid phase microextraction, and on-line molecularly imprinted solid phase extraction. The analysis of metoprolol and bisoprolol in human plasma, urine and other biological fluids provides important information in clinical and toxicological trials, thus requiring the application of appropriate extraction techniques for the detection of these antihypertensive substances at nanogram and picogram levels.

  8. Analytical methods applied to the study of lattice gauge and spin theories

    International Nuclear Information System (INIS)

    Moreo, Adriana.

    1985-01-01

A study of interactions between quarks and gluons is presented. Certain difficulties of quantum chromodynamics in explaining the behaviour of quarks gave rise to the technique of lattice gauge theories. First the phase diagrams of the discrete space-time theories are studied. The analysis of the phase diagrams is made by numerical and analytical methods. The following items were investigated and studied: a) a variational technique was proposed to obtain very accurate values for the ground and first excited state energies of the analyzed theory; b) a mean-field-like approximation for lattice spin models in the link formulation, which is a generalization of the mean-plaquette technique, was developed; c) a new method to study lattice gauge theories at finite temperature was proposed. For the first time, a non-abelian model was studied with analytical methods; d) an abelian lattice gauge theory with fermionic matter in the strong coupling limit was analyzed. Interesting results applicable to non-abelian gauge theories were obtained. (M.E.L.)

  9. Differential magnetometer method applied to measurement of geomagnetically induced currents in Southern African power networks

    Science.gov (United States)

    Matandirotya, Electdom; Cilliers, Pierre. J.; Van Zyl, Robert R.; Oyedokun, David T.; de Villiers, Jean

    2016-03-01

Geomagnetically induced currents (GICs) in conductors connected to the Earth are driven by an electric field produced by a time-varying magnetic field linked to magnetospheric-ionospheric current perturbations during geomagnetic storms. GIC measurements are traditionally made on the neutral-to-ground connections of power transformers. A method of inferring the characteristics of GIC in power lines using differential magnetic field measurements is presented. Measurements of the GIC in the power lines connected to a particular power transformer are valuable for verifying the modeling of GIC in the power transmission network. The differential magnetometer method (DMM) is an indirect method used to estimate the GIC in a power line. With the DMM, the low-frequency GIC in the power line is estimated from the difference between magnetic field recordings made directly underneath the power line and at some distance away, where the magnetic field of the GIC in the transmission line has a negligible effect. Results of the first application of the DMM to two selected sites of the Southern African power transmission network are presented. The results show that good quality GIC measurements are achieved through the DMM using commercial off-the-shelf magnetometers.
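Treating the span beneath the magnetometer as an infinitely long straight conductor, the Biot-Savart field at ground level is B = μ0 I / (2π h), so the field difference between the under-line and remote stations inverts directly to a line-current estimate (a simplified single-conductor sketch; practical DMM processing must handle multi-conductor geometry, sensor orientation and filtering):

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # vacuum permeability, T*m/A

def line_field(i_gic, height):
    """Horizontal field at ground level directly under an (assumed infinitely
    long) straight power line of the given height carrying a quasi-DC GIC."""
    return MU0 * i_gic / (2.0 * np.pi * height)

def gic_from_dmm(b_under, b_remote, height):
    """DMM inversion: the remote station records only the geomagnetic
    background, so the station difference is attributed to the line current."""
    return 2.0 * np.pi * height * (b_under - b_remote) / MU0
```

A round trip through the forward model recovers the injected current exactly, which is the consistency check one would run before deploying the method in the field.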

  10. A Review of Multi-Criteria Decision-Making Methods Applied to the Sustainable Bridge Design

    Directory of Open Access Journals (Sweden)

    Vicent Penadés-Plà

    2016-12-01

The construction of bridges has been necessary for societies since ancient times, when communication between and within towns, cities or communities was established. Until recently, the economic factor was the only one considered in the decision-making for any type of bridge construction process. However, nowadays the objective should not be just the construction of bridges, but of sustainable bridges. Economic, social and environmental factors, which form the three pillars of sustainability, have recently been added. These three factors usually have conflicting perspectives. The decision-making process allows the conversion of a judgment into a rational procedure to reach a compromise solution. The aim of this paper is to review the different methods and sustainability criteria used for decision-making at each life-cycle phase of a bridge, from design to recycling or demolition. The paper examines 77 journal articles in which different methods have been used. The most used methods are briefly described. In addition, a statistical study was carried out on the multiple attribute decision-making papers reviewed.

  11. Experimental evaluation of the strut-and-tie method applied to low-rise concrete walls

    Directory of Open Access Journals (Sweden)

    Julian Carrillo León

    2010-01-01

The strut-and-tie (S-T) method is a practical tool for the seismic design of reinforced concrete elements. Experimental and analytical research on low-rise concrete walls was carried out to assess the S-T method proposed by the current ACI-318 building code. Four specimens designed to fail in shear during shaking table tests were included in the experimental programme. The variables studied were the type of concrete (normal-weight and cellular), the amount of web steel (0.125% and 0.25%) and the type of web shear reinforcement (corrugated bars and welded wire mesh). Wall properties were typical of low-rise housing in Mexico. When the calculated shear strength was compared with the measured one, it was found that the S-T method proposed by the ACI-318 building code suitably estimated the shear capacity of the models studied. However, the wall's shear failure mode, the loading rate, the number of cycles and the cumulative dissipated energy noticeably affect the strength degradation of low-rise reinforced concrete walls.

  12. Single determinant N-representability and the kernel energy method applied to water clusters.

    Science.gov (United States)

    Polkosnik, Walter; Massa, Lou

    2017-10-24

The kernel energy method (KEM) is a quantum chemical calculation method that has been shown to provide accurate energies for large molecules. KEM performs calculations on subsets of a molecule (called kernels), and so the computational difficulty of KEM calculations scales more softly than full-molecule methods. Although KEM provides accurate energies, those energies are not required to satisfy the variational theorem. In this article, KEM is extended to provide a full-molecule single-determinant N-representable one-body density matrix. A kernel expansion for the one-body density matrix, analogous to the kernel expansion for energy, is defined. This matrix is converted to a normalized projector by an algorithm due to Clinton. The resulting single-determinant N-representable density matrix maps to a quantum mechanically valid wavefunction which satisfies the variational theorem. The process is demonstrated on clusters of three to twenty water molecules. The resulting energies are more accurate than the straightforward KEM energy results, and all violations of the variational theorem are resolved. The N-representability studied in this article is applicable to the study of quantum crystallography. © 2017 Wiley Periodicals, Inc.
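The step that turns the kernel-expanded matrix into a normalized projector can be sketched as a McWeeny purification iteration with a trace (electron-count) correction, which drives a nearly idempotent symmetric matrix to an exact projector (an illustrative sketch of the iteration's structure, not the paper's full constrained Clinton algorithm):

```python
import numpy as np

def clinton_purify(p, n_elec, tol=1e-10, max_iter=200):
    """Drive a symmetric trial density matrix toward an idempotent projector
    with Tr P = n_elec: apply the McWeeny step 3P^2 - 2P^3, then add a
    uniform shift restoring the electron count, until self-consistent."""
    p = 0.5 * (p + p.T)  # enforce symmetry
    m = len(p)
    for _ in range(max_iter):
        p2 = p @ p
        new = 3.0 * p2 - 2.0 * p2 @ p               # purification step
        new += (n_elec - np.trace(new)) / m * np.eye(m)  # trace constraint
        if np.linalg.norm(new - p) < tol:
            return new
        p = new
    return p
```

Since the shift commutes with everything, the iteration acts independently on the eigenvalues of P, pushing those near 1 to exactly 1 and those near 0 to exactly 0, which is what single-determinant N-representability of the one-body density matrix requires.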

  13. Experimental Methods Applied in a Study of Stall Flutter in an Axial Flow Fan

    Directory of Open Access Journals (Sweden)

    John D. Gill

    2004-01-01

Flutter testing is an integral part of aircraft gas turbine engine development. In typical flutter testing, blade-mounted sensors in the form of strain gages and casing-mounted sensors in the form of light probes (NSMS) are used. Casing-mounted sensors have the advantage of being non-intrusive and can detect the vibratory response of each rotating blade. Other types of casing-mounted sensors can also be used to detect flutter of rotating blades. In this investigation, casing-mounted high-frequency-response pressure transducers are used to characterize the part-speed stall flutter response of a single-stage unshrouded axial-flow fan. These dynamic pressure transducers are evenly spaced around the circumference at a constant axial location upstream of the fan blade leading edge plane. The pre-recorded experimental data at 70% corrected speed are analyzed for the case where the fan is back-pressured into the stall flutter zone. The experimental data are analyzed using two-probe and multi-probe techniques. The analysis techniques for each method are presented. Results from these two analysis methods indicate that flutter occurred at a frequency of 411 Hz with a dominant nodal diameter of 2. The multi-probe analysis technique is a valuable method that can be used to investigate the initiation of flutter in turbomachines.
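The multi-probe idea, one FFT per circumferential transducer for the flutter frequency and the inter-probe phase slope for the nodal diameter, can be sketched on a synthetic signal (an illustrative sketch; real analyses add windowing, spectral averaging and engine-order bookkeeping):

```python
import numpy as np

def flutter_id(pressures, fs, thetas):
    """pressures: array (n_probes, n_samples) from evenly spaced casing
    transducers; fs: sample rate in Hz; thetas: probe angles in radians.
    Returns the dominant frequency and a nodal-diameter estimate."""
    spec = np.fft.rfft(pressures, axis=1)
    freqs = np.fft.rfftfreq(pressures.shape[1], 1.0 / fs)
    k = int(np.abs(spec).mean(axis=0)[1:].argmax()) + 1  # skip the DC bin
    phases = np.unwrap(np.angle(spec[:, k]))
    # a travelling wave cos(ND*theta - w*t) leaves a phase slope of -ND vs theta
    nd = abs(round(float(np.polyfit(thetas, phases, 1)[0])))
    return float(freqs[k]), nd
```

Feeding it a synthetic two-nodal-diameter wave at 411 Hz, the values reported in the paper, recovers both quantities.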

  14. Fixed Nadir Focus Concentrated Solar Power Applying Reflective Array Tracking Method

    Science.gov (United States)

Setiawan, B.; Damayanti, A. M.; Murdani, A.; Habibi, I. I. A.; Wakidah, R. N.

    2018-04-01

The Sun is one of the most promising renewable energy sources to be utilized; one of its applications is solar thermal concentration, CSP (Concentrated Solar Power). In CSP energy conversion, the concentrator is moved to track the sunlight and hold it at the focal point. This approach consumes considerable energy, because the concentrator unit has considerable weight, and with a large CSP installation the assembly becomes wider and heavier still. The added weight and width increase the torque needed to drive the concentrator and to withstand wind gusts. One method to reduce this energy consumption is to direct the sunlight with a reflective array to nadir, through a CSP with a reflective Fresnel lens concentrator. The focus then lies below, in the nadir direction, and the concentrator remains in a fixed position even as the Sun's elevation angle changes from morning to afternoon. The energy is thus concentrated maximally, the assembly is protected from wind gusts, and damage or changes to the focusing structure are avoided. The research study and simulation of the reflective array (mechanical method) show the required movement of the reflector angles; the spacing of the reflectors and their angles are controlled by mechatronics. In the simulation using a 1 m² Fresnel lens, assuming a peak tropical sunlight intensity of 1 kW from 6 AM until 6 PM, the solar energy efficiency is 60.88%.
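For each reflector in the array, the fixed-nadir geometry reduces to orienting the mirror normal along the bisector of the incoming sun ray and the desired downward ray; a 2D sketch of that geometry (illustrative only, not the paper's control scheme):

```python
import numpy as np

def mirror_normal(sun_elev_deg):
    """Unit normal that reflects sunlight arriving at the given elevation
    angle into a ray pointing straight down (to nadir). For unit incident
    and outgoing rays, the normal lies along d_out - d_in."""
    a = np.radians(sun_elev_deg)
    d_in = np.array([np.cos(a), -np.sin(a)])  # ray travelling toward the mirror
    d_out = np.array([0.0, -1.0])             # reflected ray to nadir
    n = d_out - d_in
    return n / np.linalg.norm(n)

def reflect(d, n):
    """Specular reflection of ray direction d in a mirror with unit normal n."""
    return d - 2.0 * np.dot(d, n) * n
```

As the Sun's elevation changes through the day, only this normal has to be re-aimed; the concentrator and its focus stay fixed.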

  15. AO–MW–PLS method applied to rapid quantification of teicoplanin with near-infrared spectroscopy

    Directory of Open Access Journals (Sweden)

    Jiemei Chen

    2017-01-01

Teicoplanin (TCP) is an important lipoglycopeptide antibiotic produced by fermenting Actinoplanes teichomyceticus. The change in TCP concentration is important to measure during the fermentation process. In this study, a reagent-free and rapid quantification method for TCP in TCP–Tris–HCl mixture samples was developed using near-infrared (NIR) spectroscopy, focusing on the fermentation process for TCP. Absorbance optimization (AO) partial least squares (PLS) was proposed and integrated with moving window (MW) PLS, called the AO–MW–PLS method, to select appropriate wavebands. A model set that includes various wavebands equivalent to the optimal AO–MW–PLS waveband was proposed based on statistical considerations. The public region of all equivalent wavebands was itself one of the equivalent wavebands. The obtained public regions were 1540–1868 nm for TCP and 1114–1310 nm for Tris. The root-mean-square error and correlation coefficient for leave-one-out cross validation were 0.046 mg mL−1 and 0.9998 for TCP, and 0.235 mg mL−1 and 0.9986 for Tris, respectively. All the models achieved highly accurate predictions, and the selected wavebands provide valuable references for designing specialized spectrometers. This study provides a valuable reference for further application of the proposed methods to TCP fermentation broth and to other fields of spectroscopic analysis.
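The moving-window part of the waveband search can be sketched by sliding a fixed-width window across synthetic spectra and keeping the window whose local model cross-validates best (a simplified sketch that fits ordinary least squares in each window instead of PLS; all sizes and noise levels below are hypothetical):

```python
import numpy as np

def loo_rmse(X, y):
    """Leave-one-out RMSE of an ordinary least-squares fit, standing in
    for the cross-validated PLS models of MW-PLS."""
    errs = []
    for i in range(len(y)):
        m = np.ones(len(y), dtype=bool)
        m[i] = False
        coef, *_ = np.linalg.lstsq(X[m], y[m], rcond=None)
        errs.append(y[i] - X[i] @ coef)
    return float(np.sqrt(np.mean(np.square(errs))))

def moving_window_select(spectra, conc, width):
    """Slide a fixed-width waveband over the spectrum; return the window
    (start, end) that minimises the local leave-one-out RMSE."""
    n_wl = spectra.shape[1]
    ones = np.ones((len(conc), 1))  # intercept column
    scores = [loo_rmse(np.hstack([spectra[:, s:s + width], ones]), conc)
              for s in range(n_wl - width + 1)]
    best = int(np.argmin(scores))
    return best, best + width, scores[best]
```

On synthetic spectra whose analyte signal lives in a known band, the selected window lands on (or overlaps) that band, which is the behaviour the AO step then refines.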

  16. Phosphate fertilizers with varying water-solubility applied to Amazonian soils: II. Soil P extraction methods

    International Nuclear Information System (INIS)

    Muraoka, T.; Brasil, E.C.; Scivittaro, W.B.

    2002-01-01

A pot experiment was carried out under greenhouse conditions at the Centro de Energia Nuclear na Agricultura, Piracicaba (SP, Brazil), to evaluate the phosphorus availability of different phosphate sources in five Amazonian soils. The soils were: medium-texture Yellow Latosol, clayey Yellow Latosol, very clayey Yellow Latosol, clayey Red-Yellow Podzolic and very clayey Red-Yellow Podzolic. Four phosphate sources were applied: triple superphosphate, ordinary Yoorin thermophosphate, coarse Yoorin thermophosphate and North Carolina phosphate rock, at P rates of 0, 40, 80 and 120 mg kg−1 soil. The dry matter yield and the amount of P taken up by cowpea and rice were correlated with the P extractable by anion exchange resin, Mehlich-1, Mehlich-3 and Bray-I. The results showed that extractable P by Mehlich-1 was higher in the soils amended with North Carolina rock phosphate. Irrespective of the phosphorus source used, the Mehlich-3 extractant showed close correlation with plant response. The Mehlich-3 and Bray-I extractants were more sensitive to soil variations. The Mehlich-3 extractant was the most suitable for predicting P availability to plants across the different soils and phosphorus sources studied. (author)

  17. Error-free pathology: applying lean production methods to anatomic pathology.

    Science.gov (United States)

    Condel, Jennifer L; Sharbaugh, David T; Raab, Stephen S

    2004-12-01

    The current state of our health care system calls for dramatic changes. In their pathology department, the authors believe these changes may be accomplished by accepting the long-term commitment of applying a lean production system. The ideal state of zero pathology errors is one that should be pursued by consistently asking, "Why can't we?" The philosophy of lean production systems began in the manufacturing industry: "All we are doing is looking at the time line from the moment the customer gives us an order to the point when we collect the cash. And we are reducing that time line by removing the non-value-added wastes". The ultimate goals in pathology, and in health care overall, are not so different: the authors' intention is to provide the patient (customer) with the most accurate diagnostic information in a timely and efficient manner. Their lead histotechnologist recently summarized this philosophy, indicating that she could sleep better at night knowing she had truly done the best job she could: her chances of making an error (in cutting or labeling) were dramatically decreased in the one-by-one continuous-flow work process compared with previous practices. By designing a system that enables employees to succeed in meeting customer demand, and by empowering frontline staff in the development and problem-solving processes, one can meet the challenges of eliminating waste and build an improved, efficient system.

  18. A Quality Function Deployment Method Applied to Highly Reusable Space Transportation

    Science.gov (United States)

    Zapata, Edgar

    2016-01-01

    This paper describes a Quality Function Deployment (QFD) currently in work, the goal of which is to add definition and insight to the development of long-term Highly Reusable Space Transportation (HRST). The objective is twofold: first, to describe the process — the actual QFD experience as applied to the HRST study; second, to describe the preliminary results of that process, in particular the assessment of possible directions for future pursuit, such as promising candidate technologies or approaches that may finally open the space frontier. The iterative and synergistic nature of QFD provides opportunities in the process to discover what is key insofar as it is useful, what is not, and what is merely true. Key observations on the QFD process will be presented. The importance of a customer definition, and the similarity between developing a technology portfolio and product development, will be shown. The relation of identified cost and operating drivers to future space vehicle designs that are robust to an uncertain future will also be discussed. The results of this HRST evaluation are preliminary, given the somewhat long-term (or perhaps not?) nature of the task being considered.
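At its core, a QFD pass like the one described comes down to house-of-quality arithmetic: customer-need weights multiplied through a needs-versus-characteristics relationship matrix yield priority scores for the technical characteristics. The sketch below illustrates that arithmetic only; the needs, characteristics, weights, and relationship strengths are invented, not taken from the HRST study.

```python
import numpy as np

needs = ["low cost per flight", "fast turnaround", "high reliability"]
weights = np.array([5.0, 3.0, 4.0])           # customer importance ratings

characteristics = ["reusable engines", "automated checkout", "robust TPS"]
# Relationship strengths (rows: needs, cols: characteristics),
# on the common 0/1/3/9 QFD scale.
R = np.array([
    [9, 3, 3],
    [3, 9, 1],
    [3, 3, 9],
])

scores = weights @ R                           # technical priority scores
for name, s in sorted(zip(characteristics, scores), key=lambda p: -p[1]):
    print(f"{name}: {s:.0f}")
```

Ranking the characteristics by these scores is what surfaces the cost and operating drivers; the iterative part of QFD is revisiting the weights and relationships as the customer definition sharpens.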

  19. Non invasive methods for genetic analysis applied to ecological and behavioral studies in Latino-America

    Directory of Open Access Journals (Sweden)

    Susana González

    2007-07-01

    Full Text Available Documenting the presence and abundance of Neotropical mammals is the first step toward understanding their population ecology, behavior, and genetic dynamics when designing conservation plans. Combining field research with molecular genetic techniques provides valuable biological information while avoiding disturbance to ecosystems, minimizing the human impact of data collection. The objective of this paper is to review the non-invasive sampling techniques that have been used in Neotropical mammal studies to determine presence and abundance, population structure, and sex ratio; to make taxonomic diagnoses using mitochondrial markers; and to assess genetic variability using nuclear markers. A wide range of non-invasive sampling techniques is available for identifying the species that inhabit an area, such as searching for tracks, feces, and carcasses. Camera traps are another useful tool, generating image banks from which species presence and abundance can be assessed by morphology. With recent advances in molecular biology, it is now possible to amplify the trace amounts of DNA in feces to analyze the species diversity of an area and the genetic variability at the intraspecific level. This is particularly helpful for sympatric and cryptic species whose taxonomic status morphology fails to resolve, as with several species of brocket deer of the genus Mazama.

  20. Ultrasound method applied to characterize healthy femoral diaphysis of Wistar rats in vivo

    Energy Technology Data Exchange (ETDEWEB)

    Fontes-Pereira, A.; Matusin, D.P.; Rosa, P. [Programa de Engenharia Biomédica, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ (Brazil)]; Schanaider, A. [Departamento de Cirurgia, Escola de Medicina, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ (Brazil)]; Krüger, M.A. von; Pereira, W.C.A. [Programa de Engenharia Biomédica, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ (Brazil)]

    2014-04-04

    A simple experimental protocol applying a quantitative ultrasound (QUS) pulse-echo technique was used to measure the acoustic parameters of healthy femoral diaphyses of Wistar rats in vivo. Five quantitative parameters [apparent integrated backscatter (AIB), frequency slope of apparent backscatter (FSAB), time slope of apparent backscatter (TSAB), integrated reflection coefficient (IRC), and frequency slope of integrated reflection (FSIR)] were calculated from the echoes of cortical and trabecular bone in the femurs of 14 Wistar rats. Signal acquisition was performed three times in each rat, with the ultrasound signal acquired along the femur's central region at three positions 1 mm apart; the parameters estimated at the three positions were averaged to represent the femoral diaphysis. The results showed that the AIB, FSAB, TSAB, and IRC values were statistically similar, but the FSIR values from Experiments 1 and 3 differed. Furthermore, Pearson's correlation coefficient showed, in general, strong correlations among the parameters. The proposed protocol and calculated parameters demonstrated the potential to characterize the femoral diaphysis of rats in vivo. The results are relevant because rats have a bone structure very similar to that of humans, making this an important step toward preclinical trials and the subsequent application of QUS in humans.
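Of the parameters listed above, apparent integrated backscatter (AIB) is the simplest to illustrate: it is the mean of the backscattered power spectrum, in dB relative to a flat-reflector reference, over the analysis bandwidth. The sketch below uses synthetic stand-in signals; the pulse, sampling rate, scatterer model, and 3–7 MHz band are assumptions for illustration, not the study's acquisition settings.

```python
import numpy as np

fs = 50e6                      # sampling rate, Hz (hypothetical)
n = 1024
t = np.arange(n) / fs
rng = np.random.default_rng(2)

# Reference echo: a 5 MHz Gaussian-windowed pulse from a flat reflector.
ref = np.exp(-((t - 5e-6) / 0.4e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
# Simulated bone echo: the same pulse convolved with random scatterers.
scatterers = rng.normal(0, 0.2, n)
echo = np.convolve(scatterers, ref, mode="same")

def aib_db(echo, ref, fs, band=(3e6, 7e6)):
    """Apparent integrated backscatter: mean dB difference between the
    echo and reference power spectra inside the analysis band."""
    f = np.fft.rfftfreq(len(echo), 1 / fs)
    p_echo = np.abs(np.fft.rfft(echo)) ** 2
    p_ref = np.abs(np.fft.rfft(ref)) ** 2
    m = (f >= band[0]) & (f <= band[1]) & (p_ref > 0)
    return float(np.mean(10 * np.log10(p_echo[m] / p_ref[m])))

print(f"AIB = {aib_db(echo, ref, fs):.1f} dB")
```

Dividing by the reference spectrum is what makes the measure "apparent": it normalizes out the transducer response, so repeated acquisitions at nearby positions (as in the protocol above) can be averaged and compared.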