WorldWideScience

Sample records for density analysis applied

  1. High-density EEG coherence analysis using functional units applied to mental fatigue

    NARCIS (Netherlands)

    Caat, Michael ten; Lorist, Monicque M.; Bezdan, Eniko; Roerdink, Jos B.T.M.; Maurits, Natasha M.

    2008-01-01

    Electroencephalography (EEG) coherence provides a quantitative measure of functional brain connectivity which is calculated between pairs of signals as a function of frequency. Without hypotheses, traditional coherence analysis would be cumbersome for high-density EEG which employs a large number of…
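
    The pairwise, frequency-resolved coherence described above is routinely computed with Welch-style estimators; the sketch below illustrates it for one synthetic channel pair (the sampling rate, window length and signals are assumptions for illustration, not the authors' high-density EEG pipeline):

    ```python
    import numpy as np
    from scipy.signal import coherence

    fs = 256                                     # assumed sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    shared = np.sin(2 * np.pi * 10 * t)          # common 10 Hz component
    rng = np.random.default_rng(0)
    x = shared + 0.5 * rng.standard_normal(t.size)        # "channel 1"
    y = 0.8 * shared + 0.5 * rng.standard_normal(t.size)  # "channel 2"

    # Magnitude-squared coherence as a function of frequency for one pair;
    # a high-density montage repeats this over very many channel pairs,
    # which is why hypothesis-free analysis becomes cumbersome.
    f, Cxy = coherence(x, y, fs=fs, nperseg=512)
    print(f"peak coherence {Cxy.max():.2f} near {f[np.argmax(Cxy)]:.1f} Hz")
    ```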

  2. Analysis of multi-layered films. [determining dye densities by applying a regression analysis to the spectral response of the composite transparency

    Science.gov (United States)

    Scarpace, F. L.; Voss, A. W.

    1973-01-01

    Dye densities of multi-layered films are determined by applying a regression analysis to the spectral response of the composite transparency. The amount of dye in each layer is determined by fitting the sum of the individual dye layer densities to the measured dye densities. From this, dye content constants are calculated. Methods of calculating equivalent exposures are discussed. Equivalent exposures are a constant amount of energy over a limited band-width that will give the same dye content constants as the real incident energy. Methods of using these equivalent exposures for analysis of photographic data are presented.
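
    The fitting step described here is ordinary linear least squares: the composite density at each wavelength is modelled as a weighted sum of the individual layer spectra, and the weights are the dye content constants. A minimal sketch with invented spectra (not the paper's film data):

    ```python
    import numpy as np

    # Unit spectral densities of three dye layers at five wavelengths
    # (illustrative numbers only)
    layers = np.array([[0.90, 0.10, 0.02],
                       [0.55, 0.30, 0.08],
                       [0.20, 0.70, 0.20],
                       [0.08, 0.40, 0.60],
                       [0.02, 0.10, 0.90]])
    true_content = np.array([0.8, 1.2, 0.5])     # dye content constants

    rng = np.random.default_rng(1)
    measured = layers @ true_content + 0.01 * rng.standard_normal(5)

    # Fit the sum of the layer densities to the measured composite density
    content, *_ = np.linalg.lstsq(layers, measured, rcond=None)
    print(content)                               # close to true_content
    ```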

  3. Handbook of Applied Analysis

    CERN Document Server

    Papageorgiou, Nikolaos S

    2009-01-01

    Offers an examination of important theoretical methods and procedures in applied analysis. This book details the important theoretical trends in nonlinear analysis and applications to different fields. It is suitable for those working on nonlinear analysis.

  4. Applied longitudinal analysis

    CERN Document Server

    Fitzmaurice, Garrett M; Ware, James H

    2012-01-01

    Praise for the First Edition: ". . . [this book] should be on the shelf of everyone interested in . . . longitudinal data analysis." -Journal of the American Statistical Association. Features newly developed topics and applications of the analysis of longitudinal data. Applied Longitudinal Analysis, Second Edition presents modern methods for analyzing data from longitudinal studies and now features the latest state-of-the-art techniques. The book emphasizes practical, rather than theoretical, aspects of methods for the analysis of diverse types of longitudinal data…

  5. Applied Behavior Analysis

    Science.gov (United States)

    Szapacs, Cindy

    2006-01-01

    Teaching strategies that work for typically developing children often do not work for those diagnosed with an autism spectrum disorder. However, teaching strategies that work for children with autism do work for typically developing children. In this article, the author explains how the principles and concepts of Applied Behavior Analysis can be…

  6. Applied multivariate statistical analysis

    CERN Document Server

    Härdle, Wolfgang Karl

    2015-01-01

    Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners.  It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added.  All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior.  All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...

  7. Online traffic flow model applying dynamic flow-density relation

    International Nuclear Information System (INIS)

    Kim, Y.

    2002-01-01

    This dissertation describes a new approach to online traffic flow modelling based on the hydrodynamic traffic flow model and an online process to adapt the flow-density relation dynamically. The new modelling approach was tested on real traffic situations in various homogeneous motorway sections and in a motorway section with ramps, and gave encouraging simulation results. This work is composed of two parts: first, the analysis of traffic flow characteristics, and second, the development of a new online traffic flow model applying these characteristics. For homogeneous motorway sections, traffic flow is classified into six different traffic states with different characteristics. Delimitation criteria were developed to separate these states. The hysteresis phenomena were analysed during the transitions between these traffic states. The traffic states and the transitions are represented on a states diagram with the flow axis and the density axis. For motorway sections with ramps, the complicated traffic flow is simplified and classified into three traffic states depending on the propagation of congestion. The traffic states are represented on a phase diagram with the upstream demand axis and the interaction strength axis, which was defined in this research. The states diagram and the phase diagram provide a basis for the development of the dynamic flow-density relation. The first-order hydrodynamic traffic flow model was programmed according to the cell-transmission scheme, extended by the modification of flow-dependent sending/receiving functions, the classification of cells and the determination strategy for the flow-density relation in the cells. The unreasonable results of macroscopic traffic flow models, which may occur in the first and last cells under certain conditions, are alleviated by applying buffer cells between the traffic data and the model. The sending/receiving functions of the cells are determined dynamically based on the classification of the…
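
    The cell-transmission scheme with sending/receiving functions that the dissertation extends can be reduced to a few lines. The toy step below uses a fixed triangular flow-density relation and fixed first/last cells standing in for buffer cells; the dissertation's contribution, adapting the flow-density relation online per cell, is deliberately omitted:

    ```python
    import numpy as np

    def ctm_step(rho, dt=0.005, dx=0.5, v=100.0, w=20.0,
                 rho_jam=180.0, q_max=2000.0):
        """One first-order cell-transmission update.
        rho in veh/km per cell; v, w in km/h; q_max in veh/h; dt in h, dx in km."""
        sending = np.minimum(v * rho, q_max)                # demand of each cell
        receiving = np.minimum(q_max, w * (rho_jam - rho))  # supply of each cell
        flow = np.minimum(sending[:-1], receiving[1:])      # interface flows
        new = rho.copy()                                    # end cells stay fixed
        new[1:-1] += dt / dx * (flow[:-1] - flow[1:])       # vehicle conservation
        return new

    rho = np.full(40, 20.0)
    rho[15:20] = 150.0                # a dense platoon that spills back upstream
    for _ in range(200):
        rho = ctm_step(rho)
    ```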

  8. Applied functional analysis

    CERN Document Server

    Oden, J Tinsley

    2010-01-01

    The textbook is designed to serve as a crash course for beginning graduate students majoring in something besides mathematics, introducing mathematical foundations that lead to classical results in functional analysis. More specifically, Oden and Demkowicz want to prepare students to learn the variational theory of partial differential equations, distributions, and Sobolev spaces and numerical analysis with an emphasis on finite element methods. The 1996 first edition has been used in a rather intensive two-semester course. -Book News, June 2010

  9. Applied functional analysis

    CERN Document Server

    Griffel, DH

    2002-01-01

    A stimulating introductory text, this volume examines many important applications of functional analysis to mechanics, fluid mechanics, diffusive growth, and approximation. Detailed enough to impart a thorough understanding, the text is also sufficiently straightforward for those unfamiliar with abstract analysis. Its four-part treatment begins with distribution theory and discussions of Green's functions. Essentially independent of the preceding material, the second and third parts deal with Banach spaces, Hilbert space, spectral theory, and variational techniques. The final part outlines the…

  10. The relationship between local liquid density and force applied on a tip of atomic force microscope: a theoretical analysis for simple liquids.

    Science.gov (United States)

    Amano, Ken-ichi; Suzuki, Kazuhiro; Fukuma, Takeshi; Takahashi, Ohgi; Onishi, Hiroshi

    2013-12-14

    The density of a liquid is not uniform when placed on a solid. The structured liquid pushes or pulls a probe employed in atomic force microscopy, as demonstrated in a number of experimental studies. In the present study, the relation between the force on a probe and the local density of a liquid is derived based on the statistical mechanics of simple liquids. When the probe is identical to a solvent molecule, the strength of the force is shown to be proportional to the vertical gradient of ln(ρ_DS), with the local liquid's density on a solid surface being ρ_DS. The intrinsic liquid's density on a solid is numerically calculated and compared with the density reconstructed from the force on a probe that is identical or not identical to the solvent molecule.
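
    The stated proportionality can be checked numerically for a model density profile. The sketch below assumes an oscillatory solvation profile and a kT prefactor; both the profile and the sign/prefactor convention are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    z = np.linspace(0.3, 3.0, 600)           # tip-surface separation (nm)
    # Assumed solvation structure decaying toward bulk density 1.0
    rho_DS = 1.0 + 0.5 * np.exp(-z) * np.cos(2 * np.pi * z / 0.3)

    kT = 1.0                                 # energy unit
    force = kT * np.gradient(np.log(rho_DS), z)   # proportional to d/dz ln(rho_DS)
    print(force[:5])
    ```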

  11. The relationship between local liquid density and force applied on a tip of atomic force microscope: A theoretical analysis for simple liquids

    Energy Technology Data Exchange (ETDEWEB)

    Amano, Ken-ichi, E-mail: aman@tohoku-pharm.ac.jp; Takahashi, Ohgi [Faculty of Pharmaceutical Sciences, Tohoku Pharmaceutical University, 4-4-1 Komatsushima, Aoba-ku, Sendai 981-8558 (Japan); Suzuki, Kazuhiro [Department of Electronic Science and Engineering, Kyoto University, Katsura, Nishikyo, Kyoto 615-8510 (Japan); Fukuma, Takeshi [Bio-AFM Frontier Research Center, Kanazawa University, Kakuma-machi, Kanazawa 920-1192 (Japan); Onishi, Hiroshi [Department of Chemistry, Faculty of Science, Kobe University, Nada-ku, Kobe 657-8501 (Japan)

    2013-12-14

    The density of a liquid is not uniform when placed on a solid. The structured liquid pushes or pulls a probe employed in atomic force microscopy, as demonstrated in a number of experimental studies. In the present study, the relation between the force on a probe and the local density of a liquid is derived based on the statistical mechanics of simple liquids. When the probe is identical to a solvent molecule, the strength of the force is shown to be proportional to the vertical gradient of ln(ρ_DS), with the local liquid's density on a solid surface being ρ_DS. The intrinsic liquid's density on a solid is numerically calculated and compared with the density reconstructed from the force on a probe that is identical or not identical to the solvent molecule.

  12. The relationship between local liquid density and force applied on a tip of atomic force microscope: A theoretical analysis for simple liquids

    International Nuclear Information System (INIS)

    Amano, Ken-ichi; Takahashi, Ohgi; Suzuki, Kazuhiro; Fukuma, Takeshi; Onishi, Hiroshi

    2013-01-01

    The density of a liquid is not uniform when placed on a solid. The structured liquid pushes or pulls a probe employed in atomic force microscopy, as demonstrated in a number of experimental studies. In the present study, the relation between the force on a probe and the local density of a liquid is derived based on the statistical mechanics of simple liquids. When the probe is identical to a solvent molecule, the strength of the force is shown to be proportional to the vertical gradient of ln(ρ_DS), with the local liquid's density on a solid surface being ρ_DS. The intrinsic liquid's density on a solid is numerically calculated and compared with the density reconstructed from the force on a probe that is identical or not identical to the solvent molecule.

  13. Isopiestic density law of actinide nitrates applied to criticality calculations

    International Nuclear Information System (INIS)

    Leclaire, Nicolas; Anno, Jacques; Courtois, Gerard; Poullot, Gilles; Rouyer, Veronique

    2003-01-01

    Up to now, criticality safety experts have used density laws fitted to experimental data and applied them in and outside the measurement range. Depending on the case, such an approach could be wrong for nitrate solutions. Seven components are concerned: UO₂(NO₃)₂, U(NO₃)₄, Pu(NO₃)₄, Pu(NO₃)₃, Th(NO₃)₄, Am(NO₃)₃ and HNO₃. To get rid of this problem, a new methodology for calculating the density of nitrate solutions has been developed by IRSN, based on the thermodynamic concept of mixtures of binary electrolyte solutions at constant water activity, so-called 'isopiestic' solutions. This article briefly presents the theoretical aspects of the method, its qualification using benchmarks and its implementation in the IRSN graphical user interface. (author)

  14. Conversation Analysis in Applied Linguistics

    DEFF Research Database (Denmark)

    Kasper, Gabriele; Wagner, Johannes

    2014-01-01

    For the last decade, conversation analysis (CA) has increasingly contributed to several established fields in applied linguistics. In this article, we will discuss its methodological contributions. The article distinguishes between basic and applied CA. Basic CA is a sociological endeavor concerned… on applied CA, the application of basic CA's principles, methods, and findings to the study of social domains and practices that are interactionally constituted. We consider three strands—foundational, social problem oriented, and institutional applied CA—before turning to recent developments in CA research on learning and development. In conclusion, we address some emerging themes in the relationship of CA and applied linguistics, including the role of multilingualism, standard social science methods as research objects, CA's potential for direct social intervention, and increasing efforts to complement CA…

  15. Modern charge-density analysis

    CERN Document Server

    Gatti, Carlo

    2012-01-01

    Focusing on developments from the past 10-15 years, this volume presents an objective overview of the research in charge density analysis. The most promising methodologies are included, in addition to powerful interpretative tools and a survey of important areas of research.

  16. Applied analysis and differential equations

    CERN Document Server

    Cârj, Ovidiu

    2007-01-01

    This volume contains refereed research articles written by experts in the field of applied analysis, differential equations and related topics. Well-known leading mathematicians worldwide and prominent young scientists cover a diverse range of topics, including the most exciting recent developments. A broad range of topics of recent interest are treated: existence, uniqueness, viability, asymptotic stability, viscosity solutions, controllability and numerical analysis for ODE, PDE and stochastic equations. The scope of the book is wide, ranging from pure mathematics to various applied fields such as classical mechanics, biomedicine, and population dynamics.

  17. Applied survival analysis using R

    CERN Document Server

    Moore, Dirk F

    2016-01-01

    Applied Survival Analysis Using R covers the main principles of survival analysis, gives examples of how it is applied, and teaches how to put those principles to use to analyze data using R as a vehicle. Survival data, where the primary outcome is time to a specific event, arise in many areas of biomedical research, including clinical trials, epidemiological studies, and studies of animals. Many survival methods are extensions of techniques used in linear regression and categorical data, while other aspects of this field are unique to survival data. This text employs numerous actual examples to illustrate survival curve estimation, comparison of survivals of different groups, proper accounting for censoring and truncation, model variable selection, and residual analysis. Because explaining survival analysis requires more advanced mathematics than many other statistical topics, this book is organized with basic concepts and most frequently used procedures covered in earlier chapters, with more advanced topics...

  18. Modern problems in applied analysis

    CERN Document Server

    Rogosin, Sergei

    2018-01-01

    This book features a collection of recent findings in Applied Real and Complex Analysis that were presented at the 3rd International Conference “Boundary Value Problems, Functional Equations and Applications” (BAF-3), held in Rzeszow, Poland on 20-23 April 2016. The contributions presented here develop a technique related to the scope of the workshop and touching on the fields of differential and functional equations, complex and real analysis, with a special emphasis on topics related to boundary value problems. Further, the papers discuss various applications of the technique, mainly in solid mechanics (crack propagation, conductivity of composite materials), biomechanics (viscoelastic behavior of the periodontal ligament, modeling of swarms) and fluid dynamics (Stokes and Brinkman type flows, Hele-Shaw type flows). The book is addressed to all readers who are interested in the development and application of innovative research results that can help solve theoretical and real-world problems.

  19. Dietary energy density: Applying behavioural science to weight management.

    Science.gov (United States)

    Rolls, B J

    2017-09-01

    Studies conducted by behavioural scientists show that energy density (kcal/g) provides effective guidance for healthy food choices to control intake and promote satiety. Energy density depends upon a number of dietary components, especially water (0 kcal/g) and fat (9 kcal/g). Increasing the proportion of water or water-rich ingredients, such as vegetables or fruit, lowers a food's energy density. A number of studies show that when the energy density of the diet is reduced, both adults and children spontaneously decrease their ad libitum energy intake. Other studies show that consuming a large volume of a low-energy-dense food such as soup, salad, or fruit as a first course preload can enhance satiety and reduce overall energy intake at a meal. Current evidence suggests that energy density influences intake through a complex interplay of cognitive, sensory, gastrointestinal, hormonal and neural influences. Other studies that focus on practical applications show how the strategic incorporation of foods lower in energy density into the diet allows people to eat satisfying portions while improving dietary patterns. This review discusses studies that have led to greater understanding of the importance of energy density for food intake regulation and weight management.
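
    The arithmetic behind the review's central quantity is simple: energy density is total energy over total mass, so water adds mass but no energy while fat adds 9 kcal per gram. A minimal illustration using the standard Atwater factors (the food composition numbers are invented):

    ```python
    def energy_density(water_g, fat_g, carb_g, protein_g):
        """kcal per gram of food; water contributes mass but 0 kcal."""
        kcal = 9 * fat_g + 4 * carb_g + 4 * protein_g
        grams = water_g + fat_g + carb_g + protein_g
        return kcal / grams

    # Adding water-rich ingredients lowers the energy density of the same dish:
    print(energy_density(water_g=50, fat_g=10, carb_g=30, protein_g=10))   # 2.50
    print(energy_density(water_g=150, fat_g=10, carb_g=30, protein_g=10))  # 1.25
    ```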

  20. Conversation Analysis and Applied Linguistics.

    Science.gov (United States)

    Schegloff, Emanuel A.; Koshik, Irene; Jacoby, Sally; Olsher, David

    2002-01-01

    Offers bibliographical guidance on several major areas of conversation-analytic work--turn-taking, repair, and word selection--and indicates past or potential points of contact with applied linguistics. Also discusses areas of applied linguistic work. (Author/VWL)

  1. How Thin Is Foil? Applying Density to Find the Thickness of Aluminum Foil

    Science.gov (United States)

    Concannon, James P.

    2011-01-01

    In this activity, I show how high school students apply their knowledge of density to solve an unknown variable, such as thickness. Students leave this activity with a better understanding of density, the knowledge that density is a characteristic property of a given substance, and the ways density can be measured. (Contains 4 figures and 1 table.)
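
    The calculation the students perform rearranges the density definition ρ = m/V with V = A·t, so t = m/(ρ·A). A quick sketch with made-up measurements (aluminium density 2.70 g/cm³ is standard):

    ```python
    # thickness = mass / (density * area)
    mass_g, length_cm, width_cm, rho_g_cm3 = 0.65, 15.0, 10.0, 2.70
    thickness_cm = mass_g / (rho_g_cm3 * length_cm * width_cm)
    print(f"{thickness_cm * 1e4:.0f} micrometres")   # about 16 micrometres
    ```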

  2. Applying critical analysis - main methods

    Directory of Open Access Journals (Sweden)

    Miguel Araujo Alonso

    2012-02-01

    What is the usefulness of critical appraisal of literature? Critical analysis is a fundamental condition for the correct interpretation of any study that is subject to review. In epidemiology, in order to learn how to read a publication, we must be able to analyze it critically. Critical analysis allows us to check whether a study fulfills certain previously established methodological inclusion and exclusion criteria. This is frequently used in conducting systematic reviews, although eligibility criteria are generally limited to the study design. Critical analysis of literature can be done implicitly while reading an article, as in reading for personal interest, or can be conducted in a structured manner, using explicit and previously established criteria. The latter is done when formally reviewing a topic.

  3. Temperature-dependent spectral density analysis applied to monitoring backbone dynamics of major urinary protein-I complexed with the pheromone 2-sec-butyl-4,5-dihydrothiazole

    International Nuclear Information System (INIS)

    Krizova, Hana; Zidek, Lukas; Stone, Martin J.; Novotny, Milos V.; Sklenar, Vladimir

    2004-01-01

    Backbone dynamics of mouse major urinary protein I (MUP-I) were studied by ¹⁵N NMR relaxation. Data were collected at multiple temperatures for a complex of MUP-I with its natural pheromonal ligand, 2-sec-butyl-4,5-dihydrothiazole, and for the free protein. The measured relaxation rates were analyzed using reduced spectral density mapping. Graphical analysis of the spectral density values provided an unbiased qualitative picture of the internal motions. Varying the temperature greatly increased the range of analyzed spectral density values and therefore improved the reliability of the analysis. Quantitative parameters describing the dynamics on the picosecond to nanosecond time scale were obtained using a novel method of simultaneous data fitting at multiple temperatures. Both methods showed that backbone flexibility on the fast time scale is slightly increased upon pheromone binding, in accordance with previously reported results. Zero-frequency spectral density values revealed conformational changes on the microsecond to millisecond time scale. Measurements at different temperatures made it possible to monitor the temperature dependence of the motional parameters.

  4. Essentials of applied dynamic analysis

    CERN Document Server

    Jia, Junbo

    2014-01-01

    This book presents up-to-date knowledge of dynamic analysis in the engineering world. To facilitate the understanding of the topics by readers with various backgrounds, general principles are linked to their applications from different angles. Special topics of interest, such as statistics of motions and loading, damping modeling and measurement, nonlinear dynamics, fatigue assessment, vibration and buckling under axial loading, structural health monitoring, human body vibrations, and vehicle-structure interactions, are also presented. The target readers include industry professionals in civil, marine and mechanical engineering, as well as researchers and students in this area.

  5. Applied systems analysis. No. 22

    International Nuclear Information System (INIS)

    1980-12-01

    Based on a detailed analysis of demand in the Cologne/Frankfurt area, the quantities of system products for this region were ascertained which, taking into account technical conditions and entrepreneurial aspects, appeared to be marketable at cost parity with competing energy supplies. Based on these data, the technical components of the system, the location and the piping were fixed, and capital and operating costs were determined. To judge the economics, the key figures net present value, internal rate of return and cost recovery rate were determined from the difference in costs between the nuclear long-distance energy system and alternative facilities. Furthermore, specific production costs, associated prices and contribution margins are presented for each product. (orig.) [de]

  6. Applied Behavior Analysis and Statistical Process Control?

    Science.gov (United States)

    Hopkins, B. L.

    1995-01-01

    Incorporating statistical process control (SPC) methods into applied behavior analysis is discussed. It is claimed that SPC methods would likely reduce applied behavior analysts' intimate contacts with problems and would likely yield poor treatment and research decisions. Cases and data presented by Pfadt and Wheeler (1995) are cited as examples.…

  7. Central depression in nucleonic densities: Trend analysis in the nuclear density functional theory approach

    Science.gov (United States)

    Schuetrumpf, B.; Nazarewicz, W.; Reinhard, P.-G.

    2017-08-01

    Background: The central depression of nucleonic density, i.e., a reduction of density in the nuclear interior, has been attributed to many factors. For instance, bubble structures in superheavy nuclei are believed to be due to the electrostatic repulsion. In light nuclei, the mechanism behind the density reduction in the interior has been discussed in terms of shell effects associated with occupations of s orbits. Purpose: The main objective of this work is to reveal mechanisms behind the formation of central depression in nucleonic densities in light and heavy nuclei. To this end, we introduce several measures of the internal nucleonic density. Through the statistical analysis, we study the information content of these measures with respect to nuclear matter properties. Method: We apply nuclear density functional theory with Skyrme functionals. Using the statistical tools of linear least-squares regression, we inspect correlations between various measures of central depression and model parameters, including nuclear matter properties. We study bivariate correlations with selected quantities as well as multiple correlations with groups of parameters. Detailed correlation analysis is carried out for ³⁴Si, for which a bubble structure has been reported recently, ⁴⁸Ca, and the N = 82, 126, and 184 isotonic chains. Results: We show that the central depression in medium-mass nuclei is very sensitive to shell effects, whereas for superheavy systems it is firmly driven by the electrostatic repulsion. An appreciable semibubble structure in proton density is predicted for ²⁹⁴Og, which is currently the heaviest nucleus known experimentally. Conclusion: Our correlation analysis reveals that the central density indicators in nuclei below ²⁰⁸Pb carry little information on parameters of nuclear matter; they are predominantly driven by shell structure. On the other hand, in the superheavy nuclei there exists a clear relationship between the central nucleonic density and symmetry energy.
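
    The bivariate part of the correlation analysis described under "Method" amounts to correlating a depression measure with a nuclear-matter parameter across a set of models; a toy sketch with fabricated numbers standing in for the functional ensemble:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    J_sym = rng.uniform(28.0, 36.0, 30)      # symmetry energy (MeV), toy spread
    depression = 0.02 * (J_sym - 32.0) + 0.01 * rng.standard_normal(30)

    r = np.corrcoef(J_sym, depression)[0, 1]             # bivariate correlation
    slope, intercept = np.polyfit(J_sym, depression, 1)  # least-squares trend
    print(f"r = {r:.2f}, slope = {slope:.3f} per MeV")
    ```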

  8. Concept analysis of culture applied to nursing.

    Science.gov (United States)

    Marzilli, Colleen

    2014-01-01

    Culture is an important concept, especially when applied to nursing. A concept analysis of culture is essential to understanding the meaning of the word. This article applies Rodgers' (2000) concept analysis template and provides a definition of the word culture as it applies to nursing practice. This article supplies examples of the concept of culture to aid the reader in understanding its application to nursing and includes a case study demonstrating components of culture that must be respected and included when providing health care.

  9. Caldwell University's Department of Applied Behavior Analysis.

    Science.gov (United States)

    Reeve, Kenneth F; Reeve, Sharon A

    2016-05-01

    Since 2004, faculty members at Caldwell University have developed three successful graduate programs in Applied Behavior Analysis (i.e., PhD, MA, non-degree programs), increased program faculty from two to six members, developed and operated an on-campus autism center, and begun a stand-alone Applied Behavior Analysis Department. This paper outlines a number of strategies used to advance these initiatives, including those associated with an extensive public relations campaign. We also outline challenges that have limited our programs' growth. These strategies, along with a consideration of potential challenges, might prove useful in guiding academicians who are interested in starting their own programs in behavior analysis.

  10. Applied regression analysis a research tool

    CERN Document Server

    Pantula, Sastry; Dickey, David

    1998-01-01

    Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. Applied Regression Analysis is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. Applied Regression Analysis serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to...

  11. Building an applied activation analysis centre

    International Nuclear Information System (INIS)

    Bartosek, J.; Kasparec, I.; Masek, J.

    1972-01-01

    Requirements for building up a centre of applied activation analysis in Czechoslovakia are defined, and all available background material is reported and discussed. A detailed analysis of potential users and the centre's envisaged availability is also presented as part of the submitted study. A brief economic analysis is annexed. The study covers the situation up to the end of 1972. (J.K.)

  12. Lessons learned in applying function analysis

    International Nuclear Information System (INIS)

    Mitchel, G.R.; Davey, E.; Basso, R.

    2001-01-01

    This paper summarizes the lessons learned in undertaking and applying function analysis, based on the recent experience of utility, AECL and international design and assessment projects. Function analysis is an analytical technique that can be used to characterize and assess the functions of a system and is widely recognized as an essential component of a 'systematic' approach to design, one that integrates operational and user requirements into the standard design process. (author)

  13. Positive Behavior Support and Applied Behavior Analysis

    Science.gov (United States)

    Johnston, J. M.; Foxx, R. M.; Jacobson, J. W.; Green, G.; Mulick, J. A.

    2006-01-01

    This article reviews the origins and characteristics of the positive behavior support (PBS) movement and examines those features in the context of the field of applied behavior analysis (ABA). We raise a number of concerns about PBS as an approach to delivery of behavioral services and its impact on how ABA is viewed by those in human services. We…

  14. Applied Behavior Analysis: Beyond Discrete Trial Teaching

    Science.gov (United States)

    Steege, Mark W.; Mace, F. Charles; Perry, Lora; Longenecker, Harold

    2007-01-01

    We discuss the problem of autism-specific special education programs representing themselves as Applied Behavior Analysis (ABA) programs when the only ABA intervention employed is Discrete Trial Teaching (DTT), and often for limited portions of the school day. Although DTT has many advantages to recommend its use, it is not well suited to teach…

  15. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

    NARCIS (Netherlands)

    Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob

    2018-01-01

    Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a…
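
    A minimal PyTorch sketch of the architecture class named in the abstract: a bidirectional LSTM whose final step feeds a mixture density head emitting mixture weights, means and log-scales for a 2-D court position. Layer sizes, the number of mixture components and the input features are assumptions, not the authors' configuration:

    ```python
    import torch
    import torch.nn as nn

    class BLSTMMDN(nn.Module):
        def __init__(self, in_dim=3, hidden=64, n_comp=5, out_dim=2):
            super().__init__()
            self.n_comp, self.out_dim = n_comp, out_dim
            self.lstm = nn.LSTM(in_dim, hidden, batch_first=True,
                                bidirectional=True)
            # per component: one weight logit, plus mean and log-scale per axis
            self.head = nn.Linear(2 * hidden, n_comp * (1 + 2 * out_dim))

        def forward(self, x):                   # x: (batch, time, in_dim)
            h, _ = self.lstm(x)
            p = self.head(h[:, -1])             # summary at the last time step
            k, d = self.n_comp, self.out_dim
            logits = p[:, :k]                   # mixture weights (pre-softmax)
            mu = p[:, k:k + k * d].view(-1, k, d)
            log_sigma = p[:, k + k * d:].view(-1, k, d)
            return logits, mu, log_sigma

    model = BLSTMMDN()
    logits, mu, log_sigma = model(torch.randn(8, 20, 3))  # 8 tracks, 20 steps
    ```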

  16. Applied decision analysis and risk evaluation

    International Nuclear Information System (INIS)

    Ferse, W.; Kruber, S.

    1995-01-01

    During 1994 the workgroup 'Applied Decision Analysis and Risk Evaluation' continued the work on the knowledge-based decision support system XUMA-GEFA for the evaluation of the hazard potential of contaminated sites. Additionally, a new research direction was started which aims at the support of a later stage of the treatment of contaminated sites: the clean-up decision. For the support of decisions arising at this stage, the methods of decision analysis will be used. Computational aids for evaluation and decision support were implemented, and a case study at a waste disposal site in Saxony, which turns out to be a danger for the surrounding groundwater resource, was initiated. (orig.)

  17. Applied research in uncertainty modeling and analysis

    CERN Document Server

    Ayyub, Bilal

    2005-01-01

    Uncertainty has been a concern to engineers, managers, and scientists for many years. For a long time uncertainty has been considered synonymous with random, stochastic, statistic, or probabilistic. Since the early sixties views on uncertainty have become more heterogeneous. In the past forty years numerous tools that model uncertainty, above and beyond statistics, have been proposed by several engineers and scientists. The tool/method to model uncertainty in a specific context should really be chosen by considering the features of the phenomenon under consideration, not independent of what is known about the system and what causes uncertainty. In this fascinating overview of the field, the authors provide broad coverage of uncertainty analysis/modeling and its application. Applied Research in Uncertainty Modeling and Analysis presents the perspectives of various researchers and practitioners on uncertainty analysis and modeling outside their own fields and domain expertise. Rather than focusing explicitly on...

  18. Strategic decision analysis applied to borehole seismology

    International Nuclear Information System (INIS)

    Menke, M.M.; Paulsson, B.N.P.

    1994-01-01

    Strategic Decision Analysis (SDA) is the evolving body of knowledge on how to achieve high quality in the decisions that shape an organization's future. SDA comprises philosophy, process concepts, methodology, and tools for making good decisions. It specifically incorporates many concepts and tools from economic evaluation and risk analysis. Chevron Petroleum Technology Company (CPTC) has applied SDA to evaluate and prioritize a number of its most important and most uncertain R and D projects, including borehole seismology. Before SDA, there were significant issues and concerns about the value to CPTC of continuing to work on borehole seismology. The SDA process created a cross-functional team of experts to structure and evaluate this project. A credible economic model was developed, discrete risks and continuous uncertainties were assessed, and an extensive sensitivity analysis was performed. The results, even applied to a very restricted drilling program for a few years, were good enough to demonstrate the value of continuing the project. This paper explains the SDA philosophy, concepts, and process, and demonstrates the methodology and tools using the borehole seismology project example. SDA is useful in the upstream industry not just in R and D/technology decisions, but also in major exploration and production decisions. Since a major challenge for upstream companies today is to create and realize value, the SDA approach should have very broad applicability.

  19. Density functional theory and Raman spectroscopy applied to structure and vibrational mode analysis of 1,1',3,3'-tetraethyl-5,5',6,6'-tetrachloro- benzimidazolocarbocyanine iodide and its aggregate.

    Science.gov (United States)

    Aydin, Metin; Dede, Özge; Akins, Daniel L

    2011-02-14

    We have measured electronic and Raman scattering spectra of 1,1',3,3'-tetraethyl-5,5',6,6'-tetrachloro-benzimidazolocarbocyanine iodide (TTBC) in various environments, and we have calculated the ground state geometric and spectroscopic properties of the TTBC cation in the gas and solution phases (e.g., bond distances, bond angles, charge distributions, and Raman vibrational frequencies) using density functional theory. Our structure calculations have shown that the ground state equilibrium structure of a cis-conformer lies ∼200 cm⁻¹ above that of a trans-conformer and both conformers have C₂ symmetry. Calculated electronic transitions indicate that the difference between the first transitions of the two conformers is about 130 cm⁻¹. Raman spectral assignments of monomeric- and aggregated-TTBC cations have been aided by density functional calculations at the same level of theory. Vibrational mode analyses of the calculated Raman spectra reveal that the observed Raman bands above 700 cm⁻¹ are mainly associated with the in-plane deformation of the benzimidazolo moieties, while bands below 700 cm⁻¹ are associated with out-of-plane deformations of the benzimidazolo moieties. We have also found that for the nonresonance excited experimental Raman spectrum of the aggregated-TTBC cation, the Raman bands in the higher-frequency region are enhanced compared with those in the nonresonance spectrum of the monomeric cation. For the experimental Raman spectrum of the aggregate under resonance excitation, however, we find new Raman features below 600 cm⁻¹, in addition to a significantly enhanced Raman peak at 671 cm⁻¹, that are associated with out-of-plane distortions. Also, time-dependent density functional theory calculations suggest that the experimentally observed electronic transition at ∼515 nm (i.e., 2.41 eV) in the absorption spectrum of the monomeric-TTBC cation predominantly results from the π → π* transition. Calculations are further interpreted…

  20. Functional Data Analysis Applied in Chemometrics

    DEFF Research Database (Denmark)

    Muller, Martha

    …nutritional status and metabolic phenotype. We want to understand how metabolomic spectra can be analysed using functional data analysis to detect the influence of different factors on specific metabolites. These factors can include, for example, gender, diet culture or dietary intervention. In Paper I we apply… representation of each spectrum. Subset selection of wavelet coefficients generates the input to mixed models. Mixed-model methodology enables us to take the study design into account while modelling covariates. Bootstrap-based inference preserves the correlation structure between curves and enables the estimation…
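
    The wavelet representation and subset selection mentioned here can be sketched with PyWavelets; the wavelet family (db4) and the keep-the-largest-5% rule below are placeholders for whatever Paper I actually uses:

    ```python
    import numpy as np
    import pywt

    x = np.linspace(0.0, 10.0, 1024)
    spectrum = np.exp(-(x - 3.0) ** 2 / 0.05) + 0.5 * np.exp(-(x - 7.0) ** 2 / 0.1)

    coeffs = pywt.wavedec(spectrum, "db4", level=5)
    flat, slices = pywt.coeffs_to_array(coeffs)

    keep = np.abs(flat) >= np.quantile(np.abs(flat), 0.95)  # subset selection
    flat[~keep] = 0.0          # sparse coefficients become inputs to mixed models

    recon = pywt.waverec(
        pywt.array_to_coeffs(flat, slices, output_format="wavedec"), "db4")
    print(np.max(np.abs(recon[:1024] - spectrum)))  # error from keeping 5%
    ```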

  1. Effect of Applied Current Density on Cavitation-Erosion Characteristics for Anodized Al Alloy.

    Science.gov (United States)

    Lee, Seung-Jun; Kim, Seong-Jong

    2018-02-01

    Surface finishing is as important as material selection for achieving durability. Surface finishing is a process that provides a surface with the desired performance and features by applying external forces such as thermal energy or stress. This study investigated the optimum applied current density for preventing cavitation damage using an anodizing technique, which artificially forms on the surface an oxide coating with excellent mechanical characteristics such as hardness and wear resistance. Hardness testing showed that greater hardness was associated with greater brittleness, resulting in deleterious characteristics. Consequently, an electrolyte concentration of 10 vol.%, a processing time of 40 min, an electrolyte temperature of 10 °C, and a current density of 20 mA/cm² were considered the optimum anodizing conditions for improving durability in seawater.

  2. Computerized image analysis: estimation of breast density on mammograms

    Science.gov (United States)

    Zhou, Chuan; Chan, Heang-Ping; Petrick, Nicholas; Sahiner, Berkman; Helvie, Mark A.; Roubidoux, Marilyn A.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.

    2000-06-01

    An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm × 0.8 mm. Breast density analysis is performed in three stages. First, the breast region is segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic range compression technique is applied to the breast image to reduce the range of the gray level distribution in the low frequency background and to enhance the differences in the characteristic features of the gray level histogram for breasts of different densities. Third, rule-based classification is used to classify the breast images into several classes according to the characteristic features of their gray level histogram. For each image, a gray level threshold is automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area is then estimated. In this preliminary study, we analyzed the interobserver variation of breast density estimation by two experienced radiologists using BI-RADS lexicon. The radiologists' visually estimated percent breast densities were compared with the computer's calculation. The results demonstrate the feasibility of estimating mammographic breast density using computer vision techniques and its potential to improve the accuracy and reproducibility in comparison with the subjective visual assessment by radiologists.
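
    The final stage reduces to a masked threshold: percent density is the fraction of breast pixels whose gray level exceeds the dense-tissue cutoff. A stripped-down sketch on a synthetic image, with the paper's boundary tracking, range compression and rule-based threshold selection replaced by given inputs:

    ```python
    import numpy as np

    def percent_density(image, breast_mask, threshold):
        """Dense-tissue area as a percentage of the segmented breast area."""
        breast_pixels = image[breast_mask]
        return 100.0 * np.mean(breast_pixels > threshold)

    rng = np.random.default_rng(3)
    image = rng.integers(0, 256, size=(512, 512))
    breast_mask = np.zeros(image.shape, dtype=bool)
    breast_mask[50:450, 50:300] = True       # stand-in for boundary tracking

    print(f"{percent_density(image, breast_mask, threshold=180):.1f}% dense")
    ```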

  3. Applied spectrophotometry: analysis of a biochemical mixture.

    Science.gov (United States)

    Trumbo, Toni A; Schultz, Emeric; Borland, Michael G; Pugh, Michael Eugene

    2013-01-01

    Spectrophotometric analysis is essential for determining biomolecule concentration of a solution and is employed ubiquitously in biochemistry and molecular biology. The application of the Beer-Lambert-Bouguer Law is routinely used to determine the concentration of DNA, RNA or protein. There is, however, a significant difference in determining the concentration of a given species (RNA, DNA, protein) in isolation (a contrived circumstance) as opposed to determining that concentration in the presence of other species (a more realistic situation). To present the student with a more realistic laboratory experience and also to fill a hole that we believe exists in student experience prior to reaching a biochemistry course, we have devised a three week laboratory experience designed so that students learn to: connect laboratory practice with theory, apply the Beer-Lambert-Bouguer Law to biochemical analyses, demonstrate the utility and limitations of example quantitative colorimetric assays, demonstrate the utility and limitations of UV analyses for biomolecules, develop strategies for analysis of a solution of unknown biomolecular composition, use digital micropipettors to make accurate and precise measurements, and apply graphing software. Copyright © 2013 Wiley Periodicals, Inc.
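
    For non-interacting absorbers, determining each species in a mixture is a linear problem: with absorbances measured at as many wavelengths as there are components, Beer's law gives A = E·c, solvable for the concentration vector c. A sketch with invented molar absorptivities (path length 1 cm):

    ```python
    import numpy as np

    # Rows: wavelengths; columns: species. Molar absorptivities in M^-1 cm^-1
    # (illustrative values only).
    E = np.array([[6600.0,  300.0],
                  [ 900.0, 5400.0]])
    A = np.array([0.75, 0.42])      # measured absorbances at the two wavelengths

    c = np.linalg.solve(E, A)       # concentrations of the two species (M)
    print(c)
    ```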

  4. Tissue Microarray Analysis Applied to Bone Diagenesis.

    Science.gov (United States)

    Mello, Rafael Barrios; Silva, Maria Regina Regis; Alves, Maria Teresa Seixas; Evison, Martin Paul; Guimarães, Marco Aurelio; Francisco, Rafaella Arrabaca; Astolphi, Rafael Dias; Iwamura, Edna Sadayo Miazato

    2017-01-04

    Taphonomic processes affecting bone post mortem are important in forensic, archaeological and palaeontological investigations. In this study, the application of tissue microarray (TMA) analysis to a sample of femoral bone specimens from 20 exhumed individuals of known period of burial and age at death is described. TMA allows multiplexing of subsamples, permitting standardized comparative analysis of adjacent sections in 3-D and of representative cross-sections of a large number of specimens. Standard hematoxylin and eosin, periodic acid-Schiff and silver methenamine, and picrosirius red staining, and CD31 and CD34 immunohistochemistry were applied to TMA sections. Osteocyte and osteocyte lacuna counts, percent bone matrix loss, and fungal spheroid element counts could be measured and collagen fibre bundles observed in all specimens. Decalcification with 7% nitric acid proceeded more rapidly than with 0.5 M EDTA and may offer better preservation of histological and cellular structure. No endothelial cells could be detected using CD31 and CD34 immunohistochemistry. Correlation between osteocytes per lacuna and age at death may reflect reported age-related responses to microdamage. Methodological limitations and caveats, and results of the TMA analysis of post mortem diagenesis in bone are discussed, and implications for DNA survival and recovery considered.

  5. Applied linear algebra and matrix analysis

    CERN Document Server

    Shores, Thomas S

    2018-01-01

    In its second edition, this textbook offers a fresh approach to matrix and linear algebra. Its blend of theory, computational exercises, and analytical writing projects is designed to highlight the interplay between these aspects of an application. This approach places special emphasis on linear algebra as an experimental science that provides tools for solving concrete problems. The second edition’s revised text discusses applications of linear algebra like graph theory and network modeling methods used in Google’s PageRank algorithm. Other new materials include modeling examples of diffusive processes, linear programming, image processing, digital signal processing, and Fourier analysis. These topics are woven into the core material of Gaussian elimination and other matrix operations; eigenvalues, eigenvectors, and discrete dynamical systems; and the geometrical aspects of vector spaces. Intended for a one-semester undergraduate course without a strict calculus prerequisite, Applied Linear Algebra and M...

  6. Epithermal neutron activation analysis in applied microbiology

    International Nuclear Information System (INIS)

    Marina Frontasyeva

    2012-01-01

    Some results from applying epithermal neutron activation analysis at FLNP JINR, Dubna, Russia, in medical biotechnology, environmental biotechnology and industrial biotechnology are reviewed. In the biomedical experiments biomass from the blue-green alga Spirulina platensis (S. platensis) has been used as a matrix for the development of pharmaceutical substances containing such essential trace elements as selenium, chromium and iodine. The feasibility of target-oriented introduction of these elements into S. platensis biocomplexes retaining its protein composition and natural beneficial properties was shown. The absorption of mercury on growth dynamics of S. platensis and other bacterial strains was observed. Detoxification of Cr and Hg by Arthrobacter globiformis 151B was demonstrated. Microbial synthesis of technologically important silver nanoparticles by the novel actinomycete strain Streptomyces glaucus 71 MD and blue-green alga S. platensis were characterized by a combined use of transmission electron microscopy, scanning electron microscopy and energy-dispersive analysis of X-rays. It was established that the tested actinomycete S. glaucus 71 MD produces silver nanoparticles extracellularly when acted upon by the silver nitrate solution, which offers a great advantage over an intracellular process of synthesis from the point of view of applications. The synthesis of silver nanoparticles by S. platensis proceeded differently under the short-term and long-term silver action. (author)

  7. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Science.gov (United States)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation in the mean of neutrality of Tunisian Berber populations.
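
    A crude version of the iteration this paper accelerates: start from a rule-of-thumb bandwidth, estimate J(f) (the integrated squared second derivative of the pdf) from the current kernel estimate, then update h from the AMISE expression for a Gaussian kernel. This generic sketch is not the authors' analytical approximation:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def plugin_bandwidth(x, n_iter=3):
        n = x.size
        h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)   # Silverman starting value
        grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, 2048)
        step = grid[1] - grid[0]
        for _ in range(n_iter):
            kde = gaussian_kde(x, bw_method=h / x.std(ddof=1))
            f = kde(grid)
            curvature = np.gradient(np.gradient(f, grid), grid)
            J = np.sum(curvature ** 2) * step      # estimate of J(f)
            h = (1.0 / (2.0 * np.sqrt(np.pi) * n * J)) ** (1 / 5)  # AMISE update
        return h

    x = np.random.default_rng(7).standard_normal(500)
    print(plugin_bandwidth(x))    # close to the textbook value ~0.3 here
    ```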

  8. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Directory of Open Access Journals (Sweden)

    Samir Saoudi

    2008-07-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation in the mean of neutrality of Tunisian Berber populations.

  9. Analysis of the interaction between experimental and applied behavior analysis.

    Science.gov (United States)

    Virues-Ortega, Javier; Hurtado-Parrado, Camilo; Cox, Alison D; Pear, Joseph J

    2014-01-01

    To study the influences between basic and applied research in behavior analysis, we analyzed the coauthorship interactions of authors who published in JABA and JEAB from 1980 to 2010. We paid particular attention to authors who published in both JABA and JEAB (dual authors) as potential agents of cross-field interactions. We present a comprehensive analysis of dual authors' coauthorship interactions using social networks methodology and key word analysis. The number of dual authors more than doubled (26 to 67) and their productivity tripled (7% to 26% of JABA and JEAB articles) between 1980 and 2010. Dual authors stood out in terms of number of collaborators, number of publications, and ability to interact with multiple groups within the field. The steady increase in JEAB and JABA interactions through coauthors and the increasing range of topics covered by dual authors provide a basis for optimism regarding the progressive integration of basic and applied behavior analysis. © Society for the Experimental Analysis of Behavior.

  10. Renormalization techniques applied to the study of density of states in disordered systems

    International Nuclear Information System (INIS)

    Ramirez Ibanez, J.

    1985-01-01

    A general scheme for real space renormalization of formal scattering theory is presented and applied to the calculation of density of states (DOS) in some finite width systems. This technique is extended, in a self-consistent way, to the treatment of disordered and partially ordered chains. Numerical results of moments and DOS are presented in comparison with previous calculations. In addition, a self-consistent theory for the magnetic order problem in a Hubbard chain is derived and a parametric transition is observed. Properties of localization of the electronic states in disordered chains are studied through various decimation averaging techniques and using numerical simulations. (author) [pt]

  11. Telemedicine - a scientometric and density equalizing analysis.

    Science.gov (United States)

    Groneberg, David A; Rahimian, Shaghayegh; Bundschuh, Matthias; Schwarzer, Mario; Gerber, Alexander; Kloft, Beatrix

    2015-01-01

    As a result of the various telemedicine projects in the past years a large number of studies were recently published in this field. However, a precise bibliometric analysis of telemedicine publications does not exist so far. The present study was conducted to establish a data base of the existing approaches. Density-equalizing algorithms were used and data was retrieved from the Thomson Reuters database Web of Science. During the period from 1900 to 2006 a number of 3290 filed items were connected to telemedicine, with the first being published in 1964. The studies originate from 101 countries, with the USA, Great Britain and Canada being the most productive suppliers participating in 56.08 % of all published items. Analyzing the average citation per item for countries with more than 10 publications, Ireland ranked first (10.19/item), New Zealand ranked second (9.5/item) followed by Finland (9.04/item). The citation rate can be assumed as an indicator for research quality. The ten most productive journals include three journals with the main focus telemedicine and another five with the main focus "Information/Informatics". In all subject categories examined for published items related to telemedicine, "Health Care Sciences & Services" ranked first by far. More than 36 % of all publications are assigned to this category, followed by "Medical Informatics" with 9.72 % and "Medicine, General & Internal" with 8.84 % of all publications. In summary it can be concluded that the data shows clearly a strong increase in research productivity. Using science citation analysis it can be assumed that there is a large rise in the interest in telemedicine studies.

  12. Social network analysis applied to team sports analysis

    CERN Document Server

    Clemente, Filipe Manuel; Mendes, Rui Sousa

    2016-01-01

    Explaining how graph theory and social network analysis can be applied to team sports analysis, this book presents useful approaches, models and methods that can be used to characterise the overall properties of team networks and identify the prominence of each team player. Exploring the different possible network metrics that can be utilised in sports analysis, their possible applications and variances from situation to situation, the respective chapters present an array of illustrative case studies. Identifying the general concepts of social network analysis and network centrality metrics, readers are shown how to generate a methodological protocol for data collection. As such, the book provides a valuable resource for students of the sport sciences, sports engineering, applied computation and the social sciences.
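
    The book's core objects are easy to set up with networkx: players as nodes, pass counts as weighted edges, and a centrality measure to flag prominent players. The roster and counts below are invented:

    ```python
    import networkx as nx

    passes = [("GK", "DF1", 12), ("DF1", "MF1", 18), ("MF1", "FW1", 9),
              ("DF1", "MF2", 7), ("MF2", "FW1", 11), ("MF1", "MF2", 14)]
    G = nx.Graph()
    G.add_weighted_edges_from(passes)

    # Weighted degree ("strength"): total passes each player is involved in
    strength = dict(G.degree(weight="weight"))
    print(max(strength, key=strength.get))   # most involved player: MF1
    ```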

  13. Colilert® applied to food analysis

    Directory of Open Access Journals (Sweden)

    Maria José Rodrigues

    2014-06-01

    Colilert® (IDEXX) was originally developed for the simultaneous enumeration of coliforms and E. coli in water samples and has been used for the quality control routine of drinking, swimming-pool, fresh, coastal and waste waters (Grossi et al., 2013). The Colilert® culture medium contains the indicator nutrient 4-Methylumbelliferyl-β-D-Glucuronide (MUG). MUG acts as a substrate for the E. coli enzyme β-glucuronidase, from which a fluorescent compound is produced. A positive MUG result produces fluorescence when viewed under an ultraviolet lamp. If the test fluorescence is equal to or greater than that of the control, the presence of E. coli has been confirmed (Lopez-Roldan et al., 2013). The present work aimed to apply Colilert® to the enumeration of E. coli in different foods, through the comparison of results against the reference method (ISO 16649-2, 2001) for E. coli food analysis. The study was divided in two stages. During the first stage ten different types of foods were analyzed with Colilert®; these included pastry, raw meat, ready-to-eat meals, yogurt, raw seabream and salmon, and cooked shrimp. From these, the following were approved: pastry with custard; raw minced pork; soup "caldo-verde"; raw vegetable salad (lettuce and carrots); and solid yogurt. The approved foods presented a better insertion in the tray, the colour of the wells was lighter and the UV reading was easier. In the second stage the foods were artificially contaminated with 2 log/g of E. coli (ATCC 25922) and analyzed. Colilert® proved to be an accurate method and the counts were similar to the ones obtained with the reference method. In the present study, the Colilert® method revealed neither false-positive nor false-negative results; however, sometimes the results were difficult to read due to the presence of green fluorescence in some wells. Generally Colilert® was an easy and rapid method, but less objective and more expensive than the reference method.

  14. Moving Forward: Positive Behavior Support and Applied Behavior Analysis

    Science.gov (United States)

    Tincani, Matt

    2007-01-01

    A controversy has emerged about the relationship between positive behavior support and applied behavior analysis. Some behavior analysts suggest that positive behavior support and applied behavior analysis are the same (e.g., Carr & Sidener, 2002). Others argue that positive behavior support is harmful to applied behavior analysis (e.g., Johnston,…

  15. Introduction: Conversation Analysis in Applied Linguistics

    Science.gov (United States)

    Sert, Olcay; Seedhouse, Paul

    2011-01-01

    This short, introductory paper presents an up-to-date account of works within the field of Applied Linguistics which have been influenced by a Conversation Analytic paradigm. The article reviews recent studies in classroom interaction, materials development, proficiency assessment and language teacher education. We believe that the publication of…

  16. Bouguer correction density determination from fractal analysis using ...

    African Journals Online (AJOL)

    In this work, Bouguer density is determined using the fractal approach. This technique was applied to the gravity data of the Kwello area of the Basement Complex, north-western Nigeria. The density obtained using the fractal approach is 2500 kg/m³, which is lower than the conventional value of 2670 kg/m³ used for average ...

  17. Density functional and neural network analysis

    DEFF Research Database (Denmark)

    Jalkanen, K. J.; Suhai, S.; Bohr, Henrik

    1997-01-01

    Density functional theory (DFT) calculations have been carried out for hydrated L-alanine, L-alanyl-L-alanine and N-acetyl L-alanine N'-methylamide and examined with respect to the effect of water on the structure, the vibrational frequencies, vibrational absorption (VA) and vibrational circular...

  18. Applied time series analysis and innovative computing

    CERN Document Server

    Ao, Sio-Iong

    2010-01-01

    This text is a systematic, state-of-the-art introduction to the use of innovative computing paradigms as an investigative tool for applications in time series analysis. It includes frontier case studies based on recent research.

  19. Tissue Microarray Analysis Applied to Bone Diagenesis

    OpenAIRE

    Barrios Mello, Rafael; Regis Silva, Maria Regina; Seixas Alves, Maria Teresa; Evison, Martin; Guimarães, Marco Aurélio; Francisco, Rafaella Arrabaça; Dias Astolphi, Rafael; Miazato Iwamura, Edna Sadayo

    2017-01-01

    Taphonomic processes affecting bone post mortem are important in forensic, archaeological and palaeontological investigations. In this study, the application of tissue microarray (TMA) analysis to a sample of femoral bone specimens from 20 exhumed individuals of known period of burial and age at death is described. TMA allows multiplexing of subsamples, permitting standardized comparative analysis of adjacent sections in 3-D and of representative cross-sections of a large number of specimens....

  20. Meta-analysis in applied ecology.

    Science.gov (United States)

    Stewart, Gavin

    2010-02-23

    This overview examines research synthesis in applied ecology and conservation. Vote counting and pooling unweighted averages are widespread despite the superiority of syntheses based on weighted combination of effects. Such analyses allow exploration of methodological uncertainty in addition to consistency of effects across species, space and time, but exploring heterogeneity remains controversial. Meta-analyses are required to generalize in ecology, and to inform evidence-based decision-making, but the more sophisticated statistical techniques and registers of research used in other disciplines must be employed in ecology to fully realize their benefits.
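
    As a concrete example of the weighted combination of effects the review advocates over vote counting, here is a minimal fixed-effect, inverse-variance meta-analysis in Python; the effect sizes and variances are invented:

      import numpy as np

      # Hypothetical per-study effect sizes (e.g. log response ratios) and variances
      effects = np.array([0.42, 0.10, 0.33, -0.05])
      variances = np.array([0.04, 0.09, 0.02, 0.12])

      weights = 1.0 / variances                    # inverse-variance weights
      pooled = np.sum(weights * effects) / np.sum(weights)
      se = np.sqrt(1.0 / np.sum(weights))          # standard error of the pooled effect
      print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")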

  1. Applied surface analysis of metal materials

    International Nuclear Information System (INIS)

    Weiss, Z.

    1987-01-01

    The applications of surface analytical techniques to the solution of technological problems in metallurgy and engineering are reviewed. Some important application areas such as corrosion, grain boundary segregation and metallurgical coatings are presented, together with the specific requirements for the type of information needed to solve particular problems. The techniques discussed include: electron spectroscopies (Auger Electron Spectroscopy, Electron Spectroscopy for Chemical Analysis), ion spectroscopies (Secondary Ion Mass Spectrometry, Ion Scattering Spectroscopy), Rutherford Back-Scattering, nuclear reaction analysis, optical methods (Glow Discharge Optical Emission Spectrometry), ellipsometry, infrared and Raman spectroscopy, Moessbauer spectroscopy and methods of consumptive depth profile analysis. Principles and analytical features of these methods are demonstrated and examples of their applications to metallurgy are taken from recent literature. (author). 4 figs., 2 tabs., 112 refs

  2. Applied modal analysis of wind turbine blades

    DEFF Research Database (Denmark)

    Pedersen, H.B.; Kristensen, O.J.D.

    2003-01-01

    In this project modal analysis has been used to determine the natural frequencies, damping and the mode shapes for wind turbine blades. Different methods to measure the position and adjust the direction of the measuring points are discussed. Different equipment for mounting the accelerometers ... is investigated by repeated measurement on the same wind turbine blade. Furthermore the flexibility of the test set-up is investigated, by use of accelerometers mounted on the flexible adapter plate during the measurement campaign. One experimental campaign investigated the results obtained from a loaded ... and unloaded wind turbine blade. During this campaign the modal analysis is performed on a blade mounted in a horizontal and a vertical position respectively. Finally the results obtained from modal analysis carried out on a wind turbine blade are compared with results obtained from the Stig Øye's blade_EV1...

  3. Applied quantitative analysis in the social sciences

    CERN Document Server

    Petscher, Yaacov; Compton, Donald L

    2013-01-01

    To say that complex data analyses are ubiquitous in the education and social sciences might be an understatement. Funding agencies and peer-review journals alike require that researchers use the most appropriate models and methods for explaining phenomena. Univariate and multivariate data structures often require the application of more rigorous methods than basic correlational or analysis of variance models. Additionally, though a vast set of resources may exist on how to run analysis, difficulties may be encountered when explicit direction is not provided as to how one should run a model

  4. Applied Spectrophotometry: Analysis of a Biochemical Mixture

    Science.gov (United States)

    Trumbo, Toni A.; Schultz, Emeric; Borland, Michael G.; Pugh, Michael Eugene

    2013-01-01

    Spectrophotometric analysis is essential for determining the biomolecule concentration of a solution and is employed ubiquitously in biochemistry and molecular biology. The Beer-Lambert-Bouguer Law is routinely applied to determine the concentration of DNA, RNA or protein. There is however a significant difference in determining the…

  5. Thermal analysis applied to irradiated propolis

    Energy Technology Data Exchange (ETDEWEB)

    Matsuda, Andrea Harumi; Machado, Luci Brocardo; Mastro, N.L. del E-mail: nelida@usp.br

    2002-03-01

    Propolis is a resinous hive product collected by bees. Raw propolis requires a decontamination procedure, and irradiation appears to be a promising technique for this purpose. The valuable properties of propolis for the food and pharmaceutical industries have led to increasing interest in its technological behavior. Thermal analysis is a chemical analysis that gives information about changes on heating, which is of great importance for technological applications. Ground propolis samples were ⁶⁰Co gamma irradiated with 0 and 10 kGy. Thermogravimetry curves showed a similar multi-stage decomposition pattern for both irradiated and unirradiated samples up to 600 deg. C. Similarly, differential scanning calorimetry showed coincident melting points for irradiated and unirradiated samples. The results suggest that the irradiation process does not interfere with the thermal properties of propolis when irradiated up to 10 kGy.

  6. Applied modal analysis of wind turbine blades

    Energy Technology Data Exchange (ETDEWEB)

    Broen Pedersen, H.; Dahl Kristensen, O.J.

    2003-02-01

    In this project modal analysis has been used to determine the natural frequencies, damping and the mode shapes for wind turbine blades. Different methods to measure the position and adjust the direction of the measuring points are discussed. Different equipment for mounting the accelerometers is investigated and the most suitable is chosen. Different excitation techniques are tried during experimental campaigns. After a discussion the pendulum hammer was chosen, and a new improved hammer was manufactured. Some measurement errors are investigated. The ability to repeat the measured results is investigated by repeated measurement on the same wind turbine blade. Furthermore the flexibility of the test set-up is investigated, by use of accelerometers mounted on the flexible adapter plate during the measurement campaign. One experimental campaign investigated the results obtained from a loaded and unloaded wind turbine blade. During this campaign the modal analysis is performed on a blade mounted in a horizontal and a vertical position respectively. Finally the results obtained from modal analysis carried out on a wind turbine blade are compared with results obtained from the Stig Øye's blade_EV1 program. (au)

  7. Reliability analysis applied to structural tests

    Science.gov (United States)

    Diamond, P.; Payne, A. O.

    1972-01-01

    The application of reliability theory to predict, from structural fatigue test data, the risk of failure of a structure in service as its load-carrying capability is progressively reduced by the extension of a fatigue crack is considered. The procedure is applicable to both safe-life and fail-safe structures and, for a prescribed safety level, it will enable an inspection procedure to be planned or, if inspection is not feasible, it will evaluate the life to replacement. The theory has been further developed to cope with the case of structures with initial cracks, such as can occur in modern high-strength materials which are susceptible to the formation of small flaws during the production process. The method has been applied to a structure of high-strength steel and the results are compared with those obtained by the current life estimation procedures. This has shown that the conventional methods can be unconservative in certain cases, depending on the characteristics of the structure and the design operating conditions. The suitability of the probabilistic approach to the interpretation of the results from full-scale fatigue testing of aircraft structures is discussed and the assumptions involved are examined.

  8. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

    Science.gov (United States)

    Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob

    2018-04-01

    Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory based on real data, but it also can generate new trajectory samples. It is an excellent application to help coaches and players decide when and where to shoot. Its structure is particularly suitable for dealing with time series problems. BLSTM receives forward and backward information at the same time, while stacking multiple BLSTMs further increases the learning ability of the model. Combined with BLSTMs, MDN is used to generate a multi-modal distribution of outputs. Thus, the proposed model can, in principle, represent arbitrary conditional probability distributions of output variables. We tested our model with two experiments on three-pointer datasets from NBA SportVu data. In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of the convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.
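
    The following PyTorch sketch shows the kind of architecture described (stacked bidirectional LSTMs feeding a mixture density head), but it is not the authors' implementation; the input dimensionality, layer sizes and number of mixture components are illustrative assumptions:

      import torch
      import torch.nn as nn

      class BLSTM_MDN(nn.Module):
          """Stacked bidirectional LSTM with a mixture density output layer."""
          def __init__(self, in_dim=3, hidden=64, layers=2, n_mix=5, out_dim=3):
              super().__init__()
              self.lstm = nn.LSTM(in_dim, hidden, num_layers=layers,
                                  batch_first=True, bidirectional=True)
              # Per mixture component: one weight, a mean vector and a diagonal scale vector
              self.head = nn.Linear(2 * hidden, n_mix * (1 + 2 * out_dim))
              self.n_mix, self.out_dim = n_mix, out_dim

          def forward(self, x):
              h, _ = self.lstm(x)                  # (batch, time, 2 * hidden)
              p = self.head(h[:, -1])              # predict from the last time step
              pi, mu, sigma = torch.split(
                  p, [self.n_mix, self.n_mix * self.out_dim,
                      self.n_mix * self.out_dim], dim=-1)
              pi = torch.softmax(pi, dim=-1)       # mixture weights
              sigma = torch.exp(sigma)             # positive scales
              return (pi, mu.view(-1, self.n_mix, self.out_dim),
                      sigma.view(-1, self.n_mix, self.out_dim))

      # 8 trajectories, 20 time steps of (x, y, z); training would minimize the
      # negative log-likelihood of the next position under the returned mixture.
      pi, mu, sigma = BLSTM_MDN()(torch.randn(8, 20, 3))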

  9. Artificial intelligence applied to process signal analysis

    Science.gov (United States)

    Corsberg, Dan

    1988-01-01

    Many space station processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect of the human/machine interface is the analysis and display of process information. Human operators can be overwhelmed by large clusters of alarms that inhibit their ability to diagnose and respond to a disturbance. Using artificial intelligence techniques and a knowledge base approach to this problem, the power of the computer can be used to filter and analyze plant sensor data. This will provide operators with a better description of the process state. Once a process state is recognized, automatic action could be initiated and proper system response monitored.

  10. Thermal transient analysis applied to horizontal wells

    Energy Technology Data Exchange (ETDEWEB)

    Duong, A.N. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[ConocoPhillips Canada Resources Corp., Calgary, AB (Canada)

    2008-10-15

    Steam assisted gravity drainage (SAGD) is a thermal recovery process used to recover bitumen and heavy oil. This paper presents a newly developed model to estimate cooling time and formation thermal diffusivity by using a thermal transient analysis along the horizontal wellbore under a steam heating process. This radial conduction heating model provides information on the heat influx distribution along a horizontal wellbore or elongated steam chamber, and is therefore important for determining the effectiveness of the heating process in the start-up phase of SAGD. The net heat flux in the target formation during start-up can be difficult to estimate because of uncertainties regarding heat loss in the vertical section; steam quality along the horizontal segment; distribution of steam along the wellbore; operational conditions; and additional effects of convection heating. The newly presented model can be considered analogous to a pressure transient analysis of a buildup after a constant pressure drawdown. The model is based on an assumption of an infinite-acting system. This paper also proposes a new concept of a heating ring to measure the heat stored in the heated bitumen at the time of testing. Field observations were used to demonstrate how the model can be used to save heat energy, conserve steam and enhance bitumen recovery. 18 refs., 14 figs., 2 appendices.

  11. Photometric analysis applied in determining facial type

    Directory of Open Access Journals (Sweden)

    Luciana Flaquer Martins

    2012-10-01

    INTRODUCTION: In orthodontics, determining the facial type is a key element in the prescription of a correct diagnosis. In the early days of the specialty, observation and measurement of craniofacial structures were done directly on the face, in photographs or on plaster casts. With the development of radiographic methods, cephalometric analysis replaced direct facial analysis. Seeking to validate the analysis of facial soft tissues, this work compares two different methods used to determine the facial type: the anthropometric and the cephalometric method. METHODS: The sample consisted of sixty-four Brazilian individuals, adults, Caucasian, of both genders, who agreed to participate in this research. Lateral cephalograms and frontal facial photographs were obtained for all individuals. The facial types were determined by the Vert Index (cephalometric) and the Facial Index (photographs). RESULTS: The agreement analysis (Kappa), made for both types of analysis, found an agreement of 76.5%. CONCLUSIONS: We conclude that the Facial Index can be used as an adjunct to orthodontic diagnosis, or as an alternative method for the pre-selection of a sample, avoiding that research subjects undergo unnecessary tests.

  12. Gradient pattern analysis applied to galaxy morphology

    Science.gov (United States)

    Rosa, R. R.; de Carvalho, R. R.; Sautter, R. A.; Barchi, P. H.; Stalder, D. H.; Moura, T. C.; Rembold, S. B.; Morell, D. R. F.; Ferreira, N. C.

    2018-06-01

    Gradient pattern analysis (GPA) is a well-established technique for measuring gradient bilateral asymmetries of a square numerical lattice. This paper introduces an improved version of GPA designed for galaxy morphometry. We show the performance of the new method on a selected sample of 54 896 objects from the SDSS-DR7 in common with the Galaxy Zoo 1 catalogue. The results suggest that the second gradient moment, G2, has the potential to dramatically improve over more conventional morphometric parameters. It separates early- from late-type galaxies better (~90 per cent) than the CAS system (C ~ 79 per cent, A ~ 50 per cent, S ~ 43 per cent) and a benchmark test shows that it is applicable to hundreds of thousands of galaxies using typical processing systems.
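
    As a toy illustration of the general GPA idea, quantifying the gradient asymmetry that remains after locally symmetric vector pairs are removed, consider the Python sketch below; the exact definition and normalization of G2 in the paper may differ:

      import numpy as np

      def g2(img, tol=0.03):
          """Toy second gradient moment: asymmetry of the gradient field.
          A vector is taken as 'symmetric' if the vector at the mirrored
          lattice position is equal and opposite within a tolerance."""
          gy, gx = np.gradient(img.astype(float))
          mgx, mgy = -gx[::-1, ::-1], -gy[::-1, ::-1]
          sym = np.hypot(gx - mgx, gy - mgy) < tol * np.hypot(gx, gy).max()
          ax, ay = gx[~sym], gy[~sym]          # keep asymmetric vectors only
          norms = np.hypot(ax, ay).sum()
          if norms == 0:
              return 0.0
          frac = (~sym).sum() / sym.size       # fraction of asymmetric vectors
          return frac * (2.0 - np.hypot(ax.sum(), ay.sum()) / norms)

      print(g2(np.random.rand(64, 64)))        # high asymmetry for random noise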

  13. Energy analysis applied to uranium resource estimation

    International Nuclear Information System (INIS)

    Mortimer, N.D.

    1980-01-01

    It is pointed out that fuel prices and ore costs are interdependent, and that in estimating ore costs (involving the cost of fuels used to mine and process the uranium) it is necessary to take into account the total use of energy by the entire fuel system, through the technique of energy analysis. The subject is discussed, and illustrated with diagrams, under the following heads: estimate of how total workable resources would depend on production costs; sensitivity of nuclear electricity prices to ore costs; variation of net energy requirement with ore grade for a typical PWR reactor design; variation of average fundamental cost of nuclear electricity with ore grade; variation of cumulative uranium resources with current maximum ore costs. (U.K.)

  14. Toward applied behavior analysis of life aloft

    Science.gov (United States)

    Brady, J. V.

    1990-01-01

    This article deals with systems at multiple levels, at least from cell to organization. It also deals with learning, decision making, and other behavior at multiple levels. Technological development of a human behavioral ecosystem appropriate to space environments requires an analytic and synthetic orientation, explicitly experimental in nature, dictated by scientific and pragmatic considerations, and closely approximating procedures of established effectiveness in other areas of natural science. The conceptual basis of such an approach has its roots in environmentalism, which has two main features: (1) knowledge comes from experience rather than from innate ideas, divine revelation, or other obscure sources; and (2) action is governed by consequences rather than by instinct, reason, will, beliefs, attitudes or even the currently fashionable cognitions. Without an experimentally derived data base founded upon such a functional analysis of human behavior, the overgenerality of "ecological systems" approaches renders them incapable of ensuring the successful establishment of enduring space habitats. Without an experimentally derived functional account of individual behavioral variability, a natural science of behavior cannot exist. And without a natural science of behavior, the social sciences will necessarily remain in their current status as disciplines of less than optimal precision or utility. Such a functional analysis of human performance should provide an operational account of behavior change in a manner similar to the way in which Darwin's approach to natural selection accounted for the evolution of phylogenetic lines (i.e., in descriptive, nonteleological terms). Similarly, as Darwin's account has subsequently been shown to be consonant with information obtained at the cellular level, so too should behavior principles ultimately prove to be in accord with an account of ontogenetic adaptation at a biochemical level. It would thus seem obvious that the most

  15. Probabilistic safety analysis applied to RBMK reactors

    International Nuclear Information System (INIS)

    Gerez Martin, L.; Fernandez Ramos, P.

    1995-01-01

    The project financed by the European Union, 'Revision of RBMK Reactor Safety', was divided into nine Topic Groups dealing with different aspects of safety. The area covered by Topic Group 9 (TG9) was Probabilistic Safety Analysis. TG9 touched on some of the problems discussed by other groups, although in terms of the systematic quantification of the impact of design characteristics and RBMK reactor operating practices on the risk of core damage. On account of the reduced time scale and the resources available for the project, the analysis was made using a simplified method based on the results of PSAs conducted in Western countries and on the judgement of the group members. The simplified method is based on the concepts of Qualification, Redundancy and Automatic Actuation of the systems considered. PSA experience shows that systems complying with the above-mentioned concepts have a failure probability of 1.0E-3 when redundancy is simple, i.e. two similar equipment items capable of carrying out the same function. In general terms, this value can be considered to be dominated by potential common cause failures. The value considered above changes according to factors that have a positive effect upon it, such as an additional redundancy with a different equipment item (e.g. a turbo pump and a motor pump), individual trains with good separation, etc., or a negative effect, such as the absence of suitable periodical tests, the need for operators to perform manual operations, etc. Similarly, possible actions required by the operator during accident sequences are assigned failure probability values between 1 and 1.0E-4, according to the complexity of the action (including local actions to be performed outside the control room) and the time available

  16. Exercise and Bone Density: Meta-Analysis

    National Research Council Canada - National Science Library

    Kelley, George A; Sharpe-Kelley, Kristi

    2007-01-01

    .... Since no meta-analysis had existed using individual patient data (IPD) to examine the effects of exercise on BMD, our second two-year period of funding was devoted to examining the feasibility...

  17. Digital photoelastic analysis applied to implant dentistry

    Science.gov (United States)

    Ramesh, K.; Hariprasad, M. P.; Bhuvanewari, S.

    2016-12-01

    Development of improved designs of implant systems in dentistry has necessitated the study of stress fields in the implant regions of the mandible/maxilla for a better understanding of the biomechanics involved. Photoelasticity has been used for various studies related to dental implants in view of its whole-field visualization of maximum shear stress in the form of isochromatic contours. The potential of digital photoelasticity has not been fully exploited in the field of implant dentistry. In this paper, the fringe field in the vicinity of connected implants (All-On-Four® concept) is analyzed using recent advances in digital photoelasticity. Initially, a novel 3-D photoelastic model making procedure, designed to closely mimic all the anatomical features of the human mandible, is proposed. By choosing an appropriate orientation of the model with respect to the light path, the essential regions of interest could be analysed while keeping the model under live loading conditions. The need for a sophisticated software module to carefully identify the model domain is brought out. For data extraction, the five-step method is used and isochromatics are evaluated by twelve fringe photoelasticity. In addition to the isochromatic fringe field, whole field isoclinic data is also obtained for the first time in implant dentistry, which could yield important information for improving the structural stability of implant systems. Analysis is carried out for implants in the molar as well as the incisor region. In addition, the interaction effects of a loaded molar implant on the incisor area are also studied.

  18. Contact resistance problems applying ERT on low bulk density forested stony soils. Is there a solution?

    Science.gov (United States)

    Deraedt, Deborah; Touzé, Camille; Robert, Tanguy; Colinet, Gilles; Degré, Aurore; Garré, Sarah

    2015-04-01

    Electrical resistivity tomography (ERT) has often been put forward as a promising tool to quantify soil water and solute fluxes in a non-invasive way. In our experiment, we wanted to determine preferential flow processes along a forested hillslope using a saline tracer with ERT. The experiment was conducted in the Houille watershed, a subcatchment of the Meuse located in the north of the Belgian Ardennes (50°1′52.6″N, 4°53′22.5″E). The climate is continental, but the soil under the spruce (Picea abies (L.) Karst.) and Douglas fir (Pseudotsuga menziesii (Mirb.) Franco) stand remains quite dry (19% WVC on average) during the whole year. The soil is a Cambisol and the parent rock is Devonian schist covered with a variable thickness of silty loam soil. The soil density ranges from 1.13 to 1.87 g/cm³ on average. The stone content varies from 20 to 89% and the soil depth fluctuates between 70 and 130 cm. The ERT tests took place on June 1st 2012, April 1st, 2nd and 3rd 2014 and May 12th 2014. We used the Terrameter LS 12 channels (ABEM, Sweden) in the 2012 test and the DAS-1 (Multi-Phase Technologies, United States) in 2014. Different electrode configurations and arrays were adopted on different dates (transect and grid arrays; Wenner-Schlumberger, Wenner alpha and dipole-dipole configurations). During all tests, we systematically faced technical problems, mainly related to bad electrode contact. The recorded data show contact resistance values above 14873 Ω (our target value would be below 3000 Ω). Subsequently, we tried to improve the contact by predrilling the soil and pouring water into the electrode holes. The contact resistance improved to a minimum of 14040 Ω. The same procedure with liquid mud was then tested to prevent quick percolation of the water from the electrode location. As a result, the lowest contact resistance dropped to 11745 Ω. Finally, we applied about 25 litres of saline solution (CaCl2, 0.75 g/L) homogeneously on the electrode grid. The minimum value of

  19. A national and international analysis of changing forest density.

    Directory of Open Access Journals (Sweden)

    Aapo Rautiainen

    Like cities, forests grow by spreading out or by growing denser. Both inventories taken steadily by a single nation and other inventories gathered recently from many nations by the United Nations confirm the asynchronous effects of changing area and of density or volume per hectare. United States forests spread little after 1953, while growing density per hectare increased national volume and thus sequestered carbon. The 2010 United Nations appraisal of global forests during the briefer span of two decades after 1990 reveals a similar pattern: a slowing decline of area with growing volume means growing density in 68 nations encompassing 72% of reported global forest land and 68% of reported global carbon mass. To summarize, the nations were placed in 5 regions named for continents. During 1990-2010 national density grew unevenly, but nevertheless grew in all regions. Growing density was responsible for substantially increasing sequestered carbon in the European and North American regions, despite smaller changes in area. Density nudged upward in the African and South American regions as area loss outstripped the loss of carbon. For the Asian region, density grew in the first decade and fell slightly in the second as forest area expanded. The different courses of area and density disqualify area as a proxy for volume and carbon. Applying forestry methods traditionally used to measure timber volumes still offers a necessary route to measuring carbon stocks. With little expansion of forest area, managing for timber growth and density offered a way to increase carbon stocks.

  20. IAEA advisory group meeting on basic and applied problems of nuclear level densities

    International Nuclear Information System (INIS)

    Bhat, M.R.

    1983-06-01

    Separate entries were made in the database for 17 of the 19 papers included. Two papers were previously included in the database. Workshop reports are included on (1) nuclear level density theories and nuclear model reaction cross-section calculations and (2) extraction of nuclear level density information from experimental data

  1. Applied research of environmental monitoring using instrumental neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Sam; Moon, Jong Hwa; Chung, Young Ju

    1997-08-01

    This technical report is written as a guidebook for applied research in environmental monitoring using Instrumental Neutron Activation Analysis. The contents are as follows: sampling and sample preparation of airborne particulate matter; analytical methodologies; data evaluation and interpretation; and basic statistical methods of data analysis applied in environmental pollution studies. (author). 23 refs., 7 tabs., 9 figs.

  2. Bouguer density analysis using nettleton method at Banten NPP site

    International Nuclear Information System (INIS)

    Yuliastuti; Hadi Suntoko; Yarianto SBS

    2017-01-01

    Sub-surface information is crucial in determining a feasible NPP site that is safe from external hazards. A gravity survey, which yields density information, is essential to understand the sub-surface structure. Nevertheless, an overcorrected or undercorrected density will lead to a false interpretation. Therefore, a density correction in terms of the near-surface average density, or Bouguer density, is necessary. The objective of this paper is to estimate and analyze the Bouguer density using the Nettleton method at the Banten NPP site. The methodology used in this paper is the Nettleton method, applied to three different slices (A-B, A-C and A-D) with assumed densities ranging between 1700 and 3300 kg/m³. The Nettleton method determines the density correction from the minimum correlation between the gravity anomaly and topography. The results show that for slice A-B, which covers rough topography differences, the Nettleton method failed. For the other two slices, the Nettleton method yielded different density values: 2700 kg/m³ for A-C and 2300 kg/m³ for A-D. A-C provides the lowest correlation value, representing the Upper Banten tuff and Gede Mt. volcanic rocks, in accordance with the Quaternary rocks existing in the studied area. (author)
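
    A minimal Python sketch of the Nettleton criterion (choose the trial density that minimizes the correlation between the reduced gravity and topography); the profile values are invented, and the reduction uses the standard free-air (0.3086 mGal/m) and Bouguer slab (2*pi*G*rho) constants:

      import numpy as np

      # Hypothetical profile: observed gravity (mGal) and elevation (m)
      g_obs = np.array([978123.4, 978122.9, 978121.8, 978122.5, 978123.1])
      elev = np.array([210.0, 235.0, 290.0, 260.0, 225.0])

      best = None
      for rho in np.arange(1700, 3301, 100):   # trial densities, kg/m3
          # Free-air plus Bouguer slab reduction of the observed gravity
          g_red = g_obs + 0.3086 * elev - 0.0419e-3 * rho * elev
          r = abs(np.corrcoef(g_red, elev)[0, 1])
          if best is None or r < best[1]:
              best = (rho, r)
      print("Nettleton density: %d kg/m3 (|corr| = %.3f)" % best)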

  3. Helical variation of density profiles and fluctuations in the tokamak pedestal with applied 3D fields and implications for confinement

    Science.gov (United States)

    Wilcox, R. S.; Rhodes, T. L.; Shafer, M. W.; Sugiyama, L. E.; Ferraro, N. M.; Lyons, B. C.; McKee, G. R.; Paz-Soldan, C.; Wingen, A.; Zeng, L.

    2018-05-01

    Small 3D perturbations to the magnetic field in DIII-D (δB/B ~ 2×10⁻⁴) result in large modulations of density fluctuation amplitudes in the pedestal, which are shown using Doppler backscattering measurements to vary by a factor of 2. Helical perturbations of equilibrium density within flux surfaces have previously been observed in the pedestal of DIII-D plasmas when 3D fields are applied and were correlated with density fluctuation asymmetries in the pedestal. These intra-surface density and pressure variations are shown through two fluid MHD modeling studies using the M3D-C1 code to be due to the misalignment of the density and temperature equilibrium iso-surfaces in the pedestal region. This modeling demonstrates that the phase shift between the two iso-surfaces corresponds to the diamagnetic direction of the two species, with the mass density surfaces shifted in the ion diamagnetic direction relative to the temperature and magnetic flux iso-surfaces. The resulting pedestal density, potential, and turbulence asymmetries within flux surfaces near the separatrix may be at least partially responsible for several poorly understood phenomena that occur with the application of 3D fields in tokamaks, including density pump out and the increase in power required to transition from L- to H-mode.

  4. Applying homotopy analysis method for solving differential-difference equation

    International Nuclear Information System (INIS)

    Wang Zhen; Zou Li; Zhang Hongqing

    2007-01-01

    In this Letter, we apply the homotopy analysis method to solve differential-difference equations. A simple but typical example is used to illustrate the validity and the great potential of the generalized homotopy analysis method for differential-difference equations. Comparisons are made between the results of the proposed method and exact solutions. The results show that the homotopy analysis method is an attractive method for solving differential-difference equations

  5. Determination of photocarrier density under continuous photoirradiation using spectroscopic techniques as applied to polymer: Fullerene blend films

    Energy Technology Data Exchange (ETDEWEB)

    Kanemoto, Katsuichi, E-mail: kkane@sci.osaka-cu.ac.jp; Nakatani, Hitomi; Domoto, Shinya [Department of Physics, Osaka City University, 3-3-138 Sugimoto, Sumiyoshi-ku, Osaka 558-8585 (Japan)

    2014-10-28

    We propose a method to determine the density of photocarriers under continuous photoirradiation in conjugated polymers using spectroscopic signals obtained by photoinduced absorption (PIA) measurements. The bleaching signals in the PIA measurements of polymer films and the steady-state absorption signals of oxidized polymer solution are employed to determine the photocarrier density. The method is applied to photocarriers of poly(3-hexylthiophene) (P3HT) in a blended film consisting of P3HT and [6,6]-phenyl C61 butyric acid methyl ester (PCBM). The photocarrier density under continuous photoirradiation of 580 mW/cm² is determined to be 3.5 × 10¹⁶ cm⁻³. Using the trend that the carrier density increases in proportion to the square root of the photo-excitation intensity, we provide a general formula to estimate the photocarrier density under simulated 1 sun solar irradiation for a P3HT:PCBM film of arbitrary thickness. We emphasize that the method proposed in this study enables an estimate of carrier density without measuring a current and can be applied to films with no electrodes as well as to devices.
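
    The square-root scaling reported here can be made concrete with a short sketch anchored to the quoted point (3.5 × 10¹⁶ cm⁻³ at 580 mW/cm²); note that the record's general formula also accounts for film thickness, which is omitted below:

      # Assumed scaling from the record: carrier density grows as sqrt(intensity)
      n_ref, i_ref = 3.5e16, 580.0             # cm^-3 at mW/cm^2

      def photocarrier_density(i_mw_cm2):
          """Photocarrier density in cm^-3 under the square-root assumption."""
          return n_ref * (i_mw_cm2 / i_ref) ** 0.5

      print(photocarrier_density(100.0))       # approx. 1 sun: ~1.5e16 cm^-3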

  6. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    Directory of Open Access Journals (Sweden)

    Darren Kidney

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will

  7. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights into how robust biological responses are with respect to changes in biological parameters and which model inputs are the key factors affecting the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. Global sensitivity analysis approaches, on the other hand, are applied to understand how the model outputs are affected by large variations in the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
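
    A minimal Python sketch of the local sensitivity analysis described (normalized response to small parameter perturbations); the two-parameter birth-death model is an invented stand-in for a biological system:

      import numpy as np

      def model(p, t=10.0):
          """Toy biological response: x(t) for dx/dt = k_syn - k_deg * x, x(0) = 0."""
          k_syn, k_deg = p
          return (k_syn / k_deg) * (1.0 - np.exp(-k_deg * t))

      def local_sensitivity(f, p0, rel_step=1e-3):
          """Normalized local sensitivities S_i = (p_i / f) * df/dp_i."""
          p0 = np.asarray(p0, dtype=float)
          y0 = f(p0)
          S = np.empty_like(p0)
          for i in range(p0.size):
              p = p0.copy()
              h = rel_step * p0[i]
              p[i] += h
              S[i] = (f(p) - y0) / h * p0[i] / y0
          return S

      print(local_sensitivity(model, [2.0, 0.5]))   # sensitivities to k_syn, k_deg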

  8. Non destructive defect detection by spectral density analysis.

    Science.gov (United States)

    Krejcar, Ondrej; Frischer, Robert

    2011-01-01

    The potential of nondestructive diagnostics of solid objects is discussed in this article. The whole process is accomplished in consecutive steps involving software analysis of the vibration power spectrum (or acoustic emissions) created during the normal operation of the diagnosed device or under unexpected situations. Another option is to create an artificial pulse, which can help determine the actual state of the diagnosed device. The main idea of this method is based on the analysis of the current power spectral density of the received signal and its postprocessing in the Matlab environment, with a subsequent sample comparison in the Statistica software environment. The last step, the comparison of samples, is the most important, because it makes it possible to determine the status of the examined object at a given time. At present, samples are compared only visually, but this approach cannot produce good results. Furthermore, the presented filter can choose relevant data from the huge volume of data that originates from applying the FFT (Fast Fourier Transform). The filtered data can then be subjected to analysis with the assistance of a neural network. If correct and high-quality starting data are provided to the initial network, we are able to analyze other samples and state the condition of a given object. The success rate of this approximation, based on our testing of the solution, is currently 85.7%. With further improvement of the filter, it could be even greater. Finally, it is possible to detect defective conditions or upcoming limiting states of examined objects/materials by using only one device which contains HW and SW parts. This kind of detection can provide significant financial savings in certain cases (such as continuous casting of iron, where it could save hundreds of thousands of USD).
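
    The core idea, comparing the power spectral density of a vibration signal against a healthy reference, can be sketched in a few lines of Python with invented signals; this is not the authors' Matlab/Statistica pipeline:

      import numpy as np
      from scipy.signal import welch

      fs = 10_000                                   # sampling rate, Hz
      t = np.arange(0, 1.0, 1.0 / fs)
      # Hypothetical vibration signals: healthy vs. one with a 1.8 kHz defect tone
      healthy = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.randn(t.size)
      faulty = healthy + 0.5 * np.sin(2 * np.pi * 1800 * t)

      f, P_h = welch(healthy, fs=fs, nperseg=1024)  # power spectral density
      _, P_f = welch(faulty, fs=fs, nperseg=1024)

      # Flag bands where the PSD deviates strongly from the healthy reference
      print(f[P_f / P_h > 10])                      # frequencies of suspect peaks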

  9. Life insurance density and penetration: panel data analysis across countries

    OpenAIRE

    Urbanavičiūtė, Greta

    2016-01-01

    Life Insurance Density and Penetration: Panel Data Analysis Across Countries This bachelor thesis examines two key indicators in the life insurance market: density and penetration. The main purpose is to analyse which factors have the biggest impact on these two indicators in 39 countries around the world. Panel data models, which represent the collected data best, were created. This paper examines the latest public data available in 39 countries, including the Baltic States, and new signific...

  10. Animal Research in the "Journal of Applied Behavior Analysis"

    Science.gov (United States)

    Edwards, Timothy L.; Poling, Alan

    2011-01-01

    This review summarizes the 6 studies with nonhuman animal subjects that have appeared in the "Journal of Applied Behavior Analysis" and offers suggestions for future research in this area. Two of the reviewed articles described translational research in which pigeons were used to illustrate and examine behavioral phenomena of applied significance…

  11. Minimum entropy density method for the time series analysis

    Science.gov (United States)

    Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae

    2009-01-01

    The entropy density is an intuitive and powerful concept for studying the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, which is defined as the scale at which the uncertainty is minimized and hence the pattern is most clearly revealed. The MEDM is applied to the financial time series of the Standard and Poor's 500 index from February 1983 to April 2006. The temporal behavior of the structure scale is then obtained and analyzed in relation to the information delivery time and the efficient market hypothesis.

  12. Optical excitation and electron relaxation dynamics at semiconductor surfaces: a combined approach of density functional and density matrix theory applied to the silicon (001) surface

    Energy Technology Data Exchange (ETDEWEB)

    Buecking, N

    2007-11-05

    In this work a new theoretical formalism is introduced in order to numerically simulate the phonon-induced relaxation of a non-equilibrium distribution to equilibrium at a semiconductor surface. The non-equilibrium distribution is created by an optical excitation. The approach in this thesis is to link two conventional, but approved, methods into a new, more global description: while semiconductor surfaces can be investigated accurately by density-functional theory, the dynamical processes in semiconductor heterostructures are successfully described by density-matrix theory. In this work, the parameters for density-matrix theory are determined from the results of density-functional calculations. This work is organized in two parts. In Part I, the general fundamentals of the theory are elaborated, covering the fundamentals of canonical quantization as well as density-functional and density-matrix theory in the second-order Born approximation. While the formalism of density-functional theory for structure investigation has been established for a long time and many different codes exist, the requirements of the density-matrix formalism concerning the geometry and the number of implemented bands exceed the usual capabilities of existing codes in this field. Special attention is therefore given to the development of extensions to existing formulations of this theory, in which geometrical and fundamental symmetries of the structure and the equations are used. In Part II, the newly developed formalism is applied to a silicon (001) surface in a 2 x 1 reconstruction. As a first step, density-functional calculations using the LDA functional are completed, from which the Kohn-Sham wave functions and eigenvalues are used to calculate interaction matrix elements for the electron-phonon coupling and the optical excitation. These matrix elements are determined for the optical transitions from valence to conduction bands and for electron-phonon processes inside the

  13. Dimensional Analysis with space discrimination applied to Fickian diffusion phenomena

    International Nuclear Information System (INIS)

    Diaz Sanchidrian, C.; Castans, M.

    1989-01-01

    Dimensional Analysis with space discrimination is applied to Fickian diffusion phenomena in order to transform its partial differential equations into ordinary ones, and also to obtain Fick's second law in dimensionless form. (Author)

  14. Schlieren technique applied to the arc temperature measurement in a high energy density cutting torch

    International Nuclear Information System (INIS)

    Prevosto, L.; Mancinelli, B.; Artana, G.; Kelly, H.

    2010-01-01

    Plasma temperature and radial density profiles of the plasma species in a high energy density cutting arc have been obtained by using a quantitative schlieren technique. A Z-type two-mirror schlieren system was used in this research. Due to its great sensitivity, this technique allows the plasma composition and temperature to be measured from the arc axis to the surrounding medium by processing the gray-level contrast values of digital schlieren images recorded at the observation plane for a given position of a transverse knife located at the exit focal plane of the system. The technique has provided a good visualization of the plasma flow emerging from the nozzle and its interactions with the surrounding medium and the anode. The obtained temperature values are in good agreement with those previously obtained by the authors on the same torch using Langmuir probes.

  15. Unsupervised deep learning applied to breast density segmentation and mammographic risk scoring

    DEFF Research Database (Denmark)

    Kallenberg, Michiel Gijsbertus J.; Petersen, Peter Kersten; Nielsen, Mads

    2016-01-01

    Mammographic risk scoring has commonly been automated by extracting a set of handcrafted features from mammograms, and relating the responses directly or indirectly to breast cancer risk. We present a method that learns a feature hierarchy from unlabeled data. When the learned features are used ... as the input to a simple classifier, two different tasks can be addressed: i) breast density segmentation, and ii) scoring of mammographic texture. The proposed model learns features at multiple scales. To control the model's capacity, a novel sparsity regularizer is introduced that incorporates both lifetime ... and population sparsity. We evaluated our method on three different clinical datasets. Our state-of-the-art results show that the learned breast density scores have a very strong positive relationship with manual ones, and that the learned texture scores are predictive of breast cancer. The model is easy...
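
    One plausible form of a regularizer combining lifetime and population sparsity is sketched below (a guess at the general shape, not the paper's exact formulation):

      import torch

      def sparsity_penalty(acts, target=0.05, w_life=1.0, w_pop=1.0):
          """Hypothetical regularizer: lifetime sparsity pushes each feature to be
          active on few samples; population sparsity pushes each sample to
          activate few features. acts is a (samples, features) matrix."""
          lifetime = acts.mean(dim=0)    # per-feature mean activity over samples
          population = acts.mean(dim=1)  # per-sample mean activity over features
          return (w_life * ((lifetime - target) ** 2).mean()
                  + w_pop * ((population - target) ** 2).mean())

      acts = torch.sigmoid(torch.randn(128, 64))    # toy activation matrix
      print(sparsity_penalty(acts))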

  16. 3D-Laser-Scanning Technique Applied to Bulk Density Measurements of Apollo Lunar Samples

    Science.gov (United States)

    Macke, R. J.; Kent, J. J.; Kiefer, W. S.; Britt, D. T.

    2015-01-01

    In order to better interpret gravimetric data from orbiters such as GRAIL and LRO and to understand the subsurface composition and structure of the lunar crust, it is important to have a reliable database of the density and porosity of lunar materials. To this end, we have been surveying these physical properties in both lunar meteorites and Apollo lunar samples. To measure porosity, both grain density and bulk density are required. For bulk density, our group has historically made extensive use of sub-mm bead immersion techniques, though several factors have made this technique problematic for our work with Apollo samples. Samples allocated for measurement are often smaller than optimal for the technique, leading to large error bars. Also, for some samples we were required to use pure alumina beads instead of our usual glass beads. The alumina beads were subject to undesirable static effects, producing unreliable results. Other investigators have tested the use of 3D laser scanners on meteorites for measuring bulk volumes. Early work, though promising, was plagued with difficulties including poor response on dark or reflective surfaces, difficulty reproducing sharp edges, and long processing times for producing shape models. Due to progress in technology, however, laser scanners have improved considerably in recent years. We tested this technique on 27 lunar samples in the Apollo collection using a scanner at NASA Johnson Space Center. We found it to be reliable and more precise than beads, with the added benefit that it involves no direct contact with the sample, enabling the study of particularly friable samples for which bead immersion is not possible.

  17. The Significance of Regional Analysis in Applied Geography.

    Science.gov (United States)

    Sommers, Lawrence M.

    Regional analysis is central to applied geographic research, contributing to better planning and policy development for a variety of societal problems facing the United States. The development of energy policy serves as an illustration of the capabilities of this type of analysis. The United States has had little success in formulating a national…

  18. Potential drops supported by ion density cavities in the dynamic response of a plasma diode to an applied field

    International Nuclear Information System (INIS)

    Bohm, M.; Torven, S.

    1990-06-01

    Experiments have shown that an applied voltage drop may either be supported by a cathode sheath or by a quasi-linear variation over the plasma lasting for several electron transit times. In the latter case an ion density cavity existed initially. An analytical model and numerical simulations are used to show that a cavity gives rise to a quasi-linear potential variation for applied voltage drops below a certain critical value. For larger values the drop concentrates to a cathode sheath. The quasi-linear profile steepens to a double layer for large cavity depths. (authors)

  19. Difficulties in applying pure Kohn-Sham density functional theory electronic structure methods to protein molecules

    Science.gov (United States)

    Rudberg, Elias

    2012-02-01

    Self-consistency-based Kohn-Sham density functional theory (KS-DFT) electronic structure calculations with Gaussian basis sets are reported for a set of 17 protein-like molecules with geometries obtained from the Protein Data Bank. It is found that in many cases such calculations do not converge due to vanishing HOMO-LUMO gaps. A sequence of polyproline I helix molecules is also studied and it is found that self-consistency calculations using pure functionals fail to converge for helices longer than six proline units. Since the computed gap is strongly correlated to the fraction of Hartree-Fock exchange, test calculations using both pure and hybrid density functionals are reported. The tested methods include the pure functionals BLYP, PBE and LDA, as well as Hartree-Fock and the hybrid functionals BHandHLYP, B3LYP and PBE0. The effect of including solvent molecules in the calculations is studied, and it is found that the inclusion of explicit solvent molecules around the protein fragment in many cases gives a larger gap, but that convergence problems due to vanishing gaps still occur in calculations with pure functionals. In order to achieve converged results, some modeling of the charge distribution of solvent water molecules outside the electronic structure calculation is needed. Representing solvent water molecules by a simple point charge distribution is found to give non-vanishing HOMO-LUMO gaps for the tested protein-like systems also for pure functionals.

  20. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    Science.gov (United States)

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies have pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process. This small sample size induces errors in the estimated HOS parameters, restraining real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small sample size effects and more accurate in sEMG PDF shape screening applications.
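
    A toy Python stand-in for the proposed approach (kernel density estimation combined with a PDF shape distance on standardized samples) is given below; the paper's CSM-based statistics are more elaborate:

      import numpy as np
      from scipy.stats import gaussian_kde

      def pdf_shape_distance(x, y, grid=np.linspace(-5, 5, 512)):
          """L2 distance between kernel density estimates of two standardized samples."""
          xs = (x - x.mean()) / x.std()
          ys = (y - y.mean()) / y.std()
          fx, fy = gaussian_kde(xs)(grid), gaussian_kde(ys)(grid)
          return np.sqrt(((fx - fy) ** 2).sum() * (grid[1] - grid[0]))

      rng = np.random.default_rng(1)
      normal = rng.normal(size=200)                 # small-sample sEMG stand-ins
      lognorm = rng.lognormal(sigma=0.5, size=200)
      print(pdf_shape_distance(normal, lognorm))    # the shape shift is detected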

  1. Difficulties in applying pure Kohn-Sham density functional theory electronic structure methods to protein molecules

    International Nuclear Information System (INIS)

    Rudberg, Elias

    2012-01-01

    Self-consistency-based Kohn-Sham density functional theory (KS-DFT) electronic structure calculations with Gaussian basis sets are reported for a set of 17 protein-like molecules with geometries obtained from the Protein Data Bank. It is found that in many cases such calculations do not converge due to vanishing HOMO-LUMO gaps. A sequence of polyproline I helix molecules is also studied and it is found that self-consistency calculations using pure functionals fail to converge for helices longer than six proline units. Since the computed gap is strongly correlated to the fraction of Hartree-Fock exchange, test calculations using both pure and hybrid density functionals are reported. The tested methods include the pure functionals BLYP, PBE and LDA, as well as Hartree-Fock and the hybrid functionals BHandHLYP, B3LYP and PBE0. The effect of including solvent molecules in the calculations is studied, and it is found that the inclusion of explicit solvent molecules around the protein fragment in many cases gives a larger gap, but that convergence problems due to vanishing gaps still occur in calculations with pure functionals. In order to achieve converged results, some modeling of the charge distribution of solvent water molecules outside the electronic structure calculation is needed. Representing solvent water molecules by a simple point charge distribution is found to give non-vanishing HOMO-LUMO gaps for the tested protein-like systems also for pure functionals. (fast track communication)

  2. Application of texture analysis method for mammogram density classification

    Science.gov (United States)

    Nithya, R.; Santhi, B.

    2017-07-01

    Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classifying breast tissue types in digital mammograms. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve the diagnostic accuracy of mammogram density classification. Texture analysis methods are used to extract features from the mammogram. Texture features are extracted using the histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. The extracted features are selected using Analysis of Variance (ANOVA). The features selected by ANOVA are fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density classifications. This work has been carried out using the mini-Mammographic Image Analysis Society (MIAS) database. Five classifiers are employed, namely Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than the LDA, NB, KNN and SVM classifiers. The proposed methodology achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
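
    One branch of the described pipeline (GLCM texture features feeding a classifier) can be sketched as follows; the image patches are random stand-ins rather than mammograms, and the full system uses many more feature families plus ANOVA selection:

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.svm import SVC

      def glcm_features(img):
          """Contrast, homogeneity, energy and correlation from a grey-level
          co-occurrence matrix at distance 1, angles 0 and 90 degrees."""
          glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          return np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity", "energy", "correlation")])

      # Random stand-ins for fatty vs. dense tissue patches
      rng = np.random.default_rng(0)
      fatty = [rng.integers(0, 120, (64, 64), dtype=np.uint8) for _ in range(20)]
      dense = [rng.integers(100, 256, (64, 64), dtype=np.uint8) for _ in range(20)]

      X = np.array([glcm_features(im) for im in fatty + dense])
      y = np.array([0] * 20 + [1] * 20)
      print(SVC(kernel="rbf").fit(X, y).score(X, y))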

  3. The interaction between theory and experiment in charge density analysis

    International Nuclear Information System (INIS)

    Coppens, Phillip

    2013-01-01

    The field of x-ray charge density analysis has gradually morphed into an area benefiting from the strong interactions between theoreticians and experimentalists, leading to new concepts on chemical bonding and of intermolecular interactions in condensed phases. Some highlights of the developments culminating in the 2013 Aminoff Award are described in this paper. (comment)

  4. Analysis of optimum density of forest roads in rural properties

    Directory of Open Access Journals (Sweden)

    Flávio Cipriano de Assis do Carmo

    2013-09-01

    Full Text Available This study analyzed the density of roads on rural properties in the south of Espírito Santo and compared it with the calculation of the optimal density used by forestry companies in steep areas. The work was carried out on six small rural properties, based on the costs of roads for forest use, wood extraction and the costs of loss of productive area. The technical analysis included a time and motion study and productivity. The economic analysis included operational costs, production costs and returns for different scenarios of productivity (180 m.ha-1, 220 m.ha-1 and 250 m.ha-1). According to the results, all the properties have road densities well above the optimum, which reflects the lack of criteria in the planning of the forest stands, resulting in an inadequate use of the plantation area. Property 1 had the highest road density (373.92 m.ha-1) and property 5 presented the lowest density (111.56 m.ha-1).

  5. Applied data analysis and modeling for energy engineers and scientists

    CERN Document Server

    Reddy, T Agami

    2011-01-01

    ""Applied Data Analysis and Modeling for Energy Engineers and Scientists"" discusses mathematical models, data analysis, and decision analysis in modeling. The approach taken in this volume focuses on the modeling and analysis of thermal systems in an engineering environment, while also covering a number of other critical areas. Other material covered includes the tools that researchers and engineering professionals will need in order to explore different analysis methods, use critical assessment skills and reach sound engineering conclusions. The book also covers process and system design and

  6. MADNESS applied to density functional theory in chemistry and nuclear physics

    International Nuclear Information System (INIS)

    Fann, G I; Harrison, R J; Beylkin, G; Jia, J; Hartman-Baker, R; Shelton, W A; Sugiki, S

    2007-01-01

    We describe some recent mathematical results in constructing computational methods that lead to the development of fast and accurate multiresolution numerical methods for solving quantum chemistry and nuclear physics problems based on Density Functional Theory (DFT). Using low separation rank representations of functions and operators in conjunction with representations in multiwavelet bases, we developed a multiscale solution method for integral and differential equations and integral transforms. The Poisson equation, the Schrödinger equation, and the projector on divergence-free functions provide important examples with a wide range of applications in computational chemistry, nuclear physics, computational electromagnetics and fluid dynamics. We have implemented this approach along with adaptive representations of operators and functions in the multiwavelet basis and low separation rank (LSR) approximation of operators and functions. These methods have been realized and implemented in a software package called Multiresolution Adaptive Numerical Evaluation for Scientific Simulation (MADNESS)

  7. Finite difference applied to the reconstruction method of the nuclear power density distribution

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2016-01-01

    Highlights: • A method for reconstruction of the power density distribution is presented. • The method uses discretization by finite differences of the 2D neutron diffusion equation. • The discretization is performed on homogeneous meshes with the dimensions of a fuel cell. • The discretization is combined with flux distributions on the four node surfaces. • The maximum errors in reconstruction occur in the peripheral water region. - Abstract: In this reconstruction method the two-dimensional (2D) neutron diffusion equation is discretized by finite differences, applied to two energy groups (2G) and meshes with fuel-pin cell dimensions. The Nodal Expansion Method (NEM) makes use of the surface discontinuity factors of the node and provides the reconstruction method with the effective multiplication factor of the problem and the four surface-average fluxes in homogeneous nodes the size of a fuel assembly (FA). The reconstruction process combines the 2D diffusion equation discretized by finite differences with the flux distributions on the four surfaces of the nodes. These distributions are obtained for each surface from a fourth-order one-dimensional (1D) polynomial expansion with five coefficients to be determined. The conditions necessary for coefficient determination are three average fluxes on consecutive surfaces of three nodes and two fluxes at the corners between these three surface fluxes. The corner fluxes of the node are determined using a third-order 1D polynomial expansion with four coefficients. This reconstruction method uses heterogeneous nuclear parameters directly, providing the heterogeneous neutron flux distribution and the detailed nuclear power density distribution within the FAs. The results obtained with this method have good accuracy and efficiency when compared with reference values.
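    The coefficient-determination step for the surface-flux expansion can be sketched in isolation (assumed unit node width and illustrative flux values; the NEM sweep, discontinuity factors and the 2D finite-difference solve are omitted):

```python
import numpy as np

def fit_surface_flux(avg, corner, h=1.0):
    """Fit p(x) = sum c_k x^k (k = 0..4) on three consecutive nodes
    [0,h], [h,2h], [2h,3h].

    avg    : three node-averaged surface fluxes
    corner : fluxes at the corners x = h and x = 2h
    """
    A = np.zeros((5, 5))
    for k in range(5):
        for i, (a, b) in enumerate([(0, h), (h, 2*h), (2*h, 3*h)]):
            A[i, k] = (b**(k+1) - a**(k+1)) / ((k + 1) * (b - a))  # average of x^k
        A[3, k] = h**k        # corner value at x = h
        A[4, k] = (2*h)**k    # corner value at x = 2h
    return np.linalg.solve(A, np.array([*avg, *corner]))

coeffs = fit_surface_flux(avg=[1.00, 1.10, 0.95], corner=[1.06, 1.04])
print(coeffs)   # evaluate sum c_k x^k to reconstruct the surface flux shape
```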

  8. Applying ethnic-specific bone mineral density T-scores to Chinese women in the USA.

    Science.gov (United States)

    Lo, J C; Kim, S; Chandra, M; Ettinger, B

    2016-12-01

    Caucasian reference data are used to classify bone mineral density in US women of all races. However, use of Chinese American reference data yields lower osteoporosis prevalence in Chinese women. The reduction in osteoporosis labeling may be relevant for younger Chinese women at low fracture risk. Caucasian reference data are used for osteoporosis classification in US postmenopausal women regardless of race, including Asians who tend to have lower bone mineral density (BMD) than women of white race. This study examines BMD classification by ethnic T-scores for Chinese women. Using BMD data in a Northern California healthcare population, Chinese women aged 50-79 years were compared to age-matched white women (1:5 ratio), with femoral neck (FN), total hip (TH), and lumbar spine (LS) T-scores calculated using Caucasian versus Chinese American reference data. Comparing 4039 Chinese and 20,195 white women (44.8 % age 50-59 years, 37.5 % age 60-69 years, 17.7 % age 70-79 years), Chinese women had lower BMD T-scores at the FN, TH, and LS (median T-score 0.29-0.72 units lower across age groups). With Chinese American reference data, osteoporosis prevalence decreased for women aged 50-64 years and from 43.2 to 21.0 % for women aged 65-79 years. Use of Chinese American BMD reference data yields higher (ethnic) T-scores by 0.4-0.5 units, with a large proportion of Chinese women reclassified from osteoporosis to osteopenia. The reduction in osteoporosis labeling with ethnic T-scores may be relevant for younger Chinese women at low fracture risk.

  9. Fourier convergence analysis applied to neutron diffusion Eigenvalue problem

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Noh, Jae Man; Joo, Hyung Kook

    2004-01-01

    Fourier error analysis has been a standard technique for the stability and convergence analysis of linear and nonlinear iterative methods. Although the technique can also be applied to eigenvalue problems, all published Fourier convergence analyses have been performed for fixed-source problems, and a Fourier convergence analysis for an eigenvalue problem has not previously been reported. Lee et al. proposed new 2-D/1-D coupling methods and showed that the new methods are unconditionally stable, while one of the two existing methods is unstable at small mesh sizes, and that the new methods are better than the existing ones in terms of convergence rate. In this paper the convergence of method A of reference 4 for the diffusion eigenvalue problem is analyzed by Fourier analysis. To the best of our knowledge, the Fourier convergence analysis presented in this paper is the first applied to a neutronics eigenvalue problem

  10. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    Science.gov (United States)

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used for routine clinical practice, and comparing the measured value with the true value (a known density of object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied with resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
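    A simplified single-slice version of the idea can be sketched as follows (a Gaussian PSF stands in for the measured one, and the nodule size, densities and pixel sizes are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

pix = 0.1                                         # fine simulation grid, 0.1 mm pixels
n = 1200                                          # 120 mm field of view
yy, xx = np.mgrid[:n, :n] * pix

# object function: 4 mm disk of 100 HU on a -800 HU lung background
img = np.where(np.hypot(xx - 60.0, yy - 60.0) < 2.0, 100.0, -800.0)

blurred = gaussian_filter(img, sigma=0.7 / pix)   # Gaussian PSF, sigma ~0.7 mm
coarse = blurred[::7, ::7]                        # resample to 0.7 mm clinical pixels

yy2, xx2 = np.mgrid[:coarse.shape[0], :coarse.shape[1]] * (7 * pix)
roi = coarse[np.hypot(xx2 - 60.0, yy2 - 60.0) < 1.0]   # small central ROI
print(f"true density 100.0 HU, measured {roi.mean():.1f} HU")  # blur biases the value low
```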

  11. Urinary density measurement and analysis methods in neonatal unit care

    Directory of Open Access Journals (Sweden)

    Maria Vera Lúcia Moreira Leitão Cardoso

    2013-09-01

    Full Text Available The objective was to assess urine collection methods through cotton in contact with the genitalia and a urinary collector to measure urinary density in newborns. This is a quantitative intervention study carried out in a neonatal unit of Fortaleza-CE, Brazil, in 2010. The sample consisted of 61 newborns randomly chosen to compose the study group. Most neonates were full term (31; 50.8%) and male (33; 54%). Data on urinary density measurement through the cotton and collector methods presented statistically significant differences (p<0.05). The analysis of interquartile ranges between subgroups resulted in statistical differences between urinary collector/reagent strip (1005) and cotton/reagent strip (1010); however, there was no difference between urinary collector/refractometer (1008) and cotton/refractometer. Therefore, further research should be conducted with larger samples using the methods investigated in this study and, whenever possible, comparing urine density values to laboratory tests.

  12. Structure of single-particle nuclear densities from Hartree-Fock theory and model independent analysis

    International Nuclear Information System (INIS)

    Starodubskij, V.E.; Shaginyan, V.R.

    1979-01-01

    The Friar-Negele method is applied to determine the static densities of neutrons and nuclear matter from fast proton-nucleus elastic scattering data. This model-independent analysis (MIA) has been carried out for the 28Si, 32,34S, 40,42,44,48Ca, 48Ti, 58,60Ni, 90Zr and 208Pb nuclei. The binding energies, rms radii, densities and scattering cross sections of 1 GeV protons are calculated in the framework of Hartree-Fock (HF) theory with Skyrme's interaction. The HF and MIA densities and cross sections have been compared to draw a conclusion on the quality of the HF densities. The calculation of the cross sections includes the spin-orbit interaction with parameters taken from the polarization data

  13. Spectral analysis and filter theory in applied geophysics

    CERN Document Server

    Buttkus, Burkhard

    2000-01-01

    This book is intended to be an introduction to the fundamentals and methods of spectral analysis and filter theory and their applications in geophysics. The principles and theoretical basis of the various methods are described, their efficiency and effectiveness evaluated, and instructions provided for their practical application. Besides the conventional methods, newer methods are discussed, such as the spectral analysis of random processes by fitting models to the observed data, maximum-entropy spectral analysis and maximum-likelihood spectral analysis, the Wiener and Kalman filtering methods, homomorphic deconvolution, and adaptive methods for nonstationary processes. Multidimensional spectral analysis and filtering, as well as multichannel filters, are given extensive treatment. The book provides a survey of the state-of-the-art of spectral analysis and filter theory. The importance and possibilities of spectral analysis and filter theory in geophysics for data acquisition, processing an...

  14. Electron beam irradiation process applied to primary and secondary recycled high density polyethylene

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Jéssica R.; Moura, Eduardo de; Geraldo, Áurea B.C., E-mail: ageraldo@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    Plastic bags, packaging and furniture items are examples of plastic utilities always present in daily life. However, the end-of-life of plastics impacts the environment because of this ubiquity and also because of their often long degradation times. Recycling processes are important in this scenario because they offer many solutions to this problem. Basically, four routes are known for plastic recycling: primary recycling, which consists in the re-extrusion of clean plastic scraps from a production plant; secondary recycling, which uses end-of-life products that are generally reduced in size by extrusion to obtain a shape more suitable for reprocessing (pellets and powder); tertiary recovery, which relates to thermo-chemical methods producing fuels and petrochemical feedstock; and the quaternary route, which relates to energy recovery and is carried out in appropriate reactors. In this work, high density polyethylene (HDPE) was recovered to simulate empirically the primary and secondary recycling routes, using materials ranging from pristine to 20-fold re-extruded. The final 20-fold recycled thermoplastic was irradiated in an electron beam accelerator at a dose rate of 22.4 kGy/s and absorbed doses of 50 kGy and 100 kGy. The characterization of HDPE at distinct levels of recovery was performed by infrared spectroscopy (FTIR) and thermogravimetric degradation analysis. In HDPE recycling, degradation and crosslinking are consecutive processes; degradation is very noticeable in the 20-fold recycled product. Despite this, the 20-fold recycled product crosslinks after irradiation, and the post-irradiation product is similar in its spectroscopic and thermal degradation characteristics to pristine, irradiated HDPE. These results are discussed. (author)

  15. Electron beam irradiation process applied to primary and secondary recycled high density polyethylene

    International Nuclear Information System (INIS)

    Cardoso, Jéssica R.; Moura, Eduardo de; Geraldo, Áurea B.C.

    2017-01-01

    Plastic bags, packaging and furniture items are examples of plastic utilities always present in daily life. However, the end-of-life of plastics impacts the environment because of this ubiquity and also because of their often long degradation times. Recycling processes are important in this scenario because they offer many solutions to this problem. Basically, four routes are known for plastic recycling: primary recycling, which consists in the re-extrusion of clean plastic scraps from a production plant; secondary recycling, which uses end-of-life products that are generally reduced in size by extrusion to obtain a shape more suitable for reprocessing (pellets and powder); tertiary recovery, which relates to thermo-chemical methods producing fuels and petrochemical feedstock; and the quaternary route, which relates to energy recovery and is carried out in appropriate reactors. In this work, high density polyethylene (HDPE) was recovered to simulate empirically the primary and secondary recycling routes, using materials ranging from pristine to 20-fold re-extruded. The final 20-fold recycled thermoplastic was irradiated in an electron beam accelerator at a dose rate of 22.4 kGy/s and absorbed doses of 50 kGy and 100 kGy. The characterization of HDPE at distinct levels of recovery was performed by infrared spectroscopy (FTIR) and thermogravimetric degradation analysis. In HDPE recycling, degradation and crosslinking are consecutive processes; degradation is very noticeable in the 20-fold recycled product. Despite this, the 20-fold recycled product crosslinks after irradiation, and the post-irradiation product is similar in its spectroscopic and thermal degradation characteristics to pristine, irradiated HDPE. These results are discussed. (author)

  16. Animal research in the Journal of Applied Behavior Analysis.

    Science.gov (United States)

    Edwards, Timothy L; Poling, Alan

    2011-01-01

    This review summarizes the 6 studies with nonhuman animal subjects that have appeared in the Journal of Applied Behavior Analysis and offers suggestions for future research in this area. Two of the reviewed articles described translational research in which pigeons were used to illustrate and examine behavioral phenomena of applied significance (say-do correspondence and fluency), 3 described interventions that changed animals' behavior (self-injury by a baboon, feces throwing and spitting by a chimpanzee, and unsafe trailer entry by horses) in ways that benefited the animals and the people in charge of them, and 1 described the use of trained rats that performed a service to humans (land-mine detection). We suggest that each of these general research areas merits further attention and that the Journal of Applied Behavior Analysis is an appropriate outlet for some of these publications.

  17. Applied Behavior Analysis Is a Science And, Therefore, Progressive

    Science.gov (United States)

    Leaf, Justin B.; Leaf, Ronald; McEachin, John; Taubman, Mitchell; Ala'i-Rosales, Shahla; Ross, Robert K.; Smith, Tristram; Weiss, Mary Jane

    2016-01-01

    Applied behavior analysis (ABA) is a science and, therefore, involves progressive approaches and outcomes. In this commentary we argue that the spirit and the method of science should be maintained in order to avoid reductionist procedures, stifled innovation, and rote, unresponsive protocols that become increasingly removed from meaningful…

  18. X-ray fluorescence spectrometry applied to soil analysis

    International Nuclear Information System (INIS)

    Salvador, Vera Lucia Ribeiro; Sato, Ivone Mulako; Scapin Junior, Wilson Santo; Scapin, Marcos Antonio; Imakima, Kengo

    1997-01-01

    This paper studies X-ray fluorescence spectrometry applied to soil analysis. A comparative study of the WD-XRFS and ED-XRFS techniques was carried out using the following soil samples: SL-1, SOIL-7 and marine sediment SD-M-2/TM from the IAEA, and clay JG-1a from the Geological Survey of Japan (GSJ)

  19. Progressive-Ratio Schedules and Applied Behavior Analysis

    Science.gov (United States)

    Poling, Alan

    2010-01-01

    Establishing appropriate relations between the basic and applied areas of behavior analysis has been of long and persistent interest to the author. In this article, the author illustrates that there is a direct relation between how hard an organism will work for access to an object or activity, as indexed by the largest ratio completed under a…

  20. B. F. Skinner's Contributions to Applied Behavior Analysis

    Science.gov (United States)

    Morris, Edward K.; Smith, Nathaniel G.; Altus, Deborah E.

    2005-01-01

    Our paper reviews and analyzes B. F. Skinner's contributions to applied behavior analysis in order to assess his role as the field's originator and founder. We found, first, that his contributions fall into five categories: the style and content of his science, his interpretations of typical and atypical human behavior, the implications he drew…

  1. Applied Behavior Analysis: Current Myths in Public Education

    Science.gov (United States)

    Fielding, Cheryl; Lowdermilk, John; Lanier, Lauren L.; Fannin, Abigail G.; Schkade, Jennifer L.; Rose, Chad A.; Simpson, Cynthia G.

    2013-01-01

    The effective use of behavior management strategies and related policies continues to be a debated issue in public education. Despite overwhelming evidence espousing the benefits of the implementation of procedures derived from principles based on the science of applied behavior analysis (ABA), educators often indicate many common misconceptions…

  2. Positive Behavior Support and Applied Behavior Analysis: A Familial Alliance

    Science.gov (United States)

    Dunlap, Glen; Carr, Edward G.; Horner, Robert H.; Zarcone, Jennifer R.; Schwartz, Ilene

    2008-01-01

    Positive behavior support (PBS) emerged in the mid-1980s as an approach for understanding and addressing problem behaviors. PBS was derived primarily from applied behavior analysis (ABA). Over time, however, PBS research and practice has incorporated evaluative methods, assessment and intervention procedures, and conceptual perspectives associated…

  3. Neutron activation analysis applied to energy and environment

    International Nuclear Information System (INIS)

    Lyon, W.S.

    1975-01-01

    Neutron activation analysis was applied to a number of problems concerned with energy production and the environment. Burning of fossil fuel, the search for new sources of uranium, possible presence of toxic elements in food and water, and the relationship of trace elements to cardiovascular disease are some of the problems in which neutron activation was used. (auth)

  4. Structural reliability analysis applied to pipeline risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gardiner, M. [GL Industrial Services, Loughborough (United Kingdom); Mendes, Renato F.; Donato, Guilherme V.P. [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    Quantitative Risk Assessment (QRA) of pipelines requires two main components to be provided. These are models of the consequences that follow from some loss of containment incident, and models for the likelihood of such incidents occurring. This paper describes how PETROBRAS have used Structural Reliability Analysis for the second of these, to provide pipeline- and location-specific predictions of failure frequency for a number of pipeline assets. This paper presents an approach to estimating failure rates for liquid and gas pipelines, using Structural Reliability Analysis (SRA) to analyze the credible basic mechanisms of failure such as corrosion and mechanical damage. SRA is a probabilistic limit state method: for a given failure mechanism it quantifies the uncertainty in parameters to mathematical models of the load-resistance state of a structure and then evaluates the probability of load exceeding resistance. SRA can be used to benefit the pipeline risk management process by optimizing in-line inspection schedules, and as part of the design process for new construction in pipeline rights of way that already contain multiple lines. A case study is presented to show how the SRA approach has recently been used on PETROBRAS pipelines and the benefits obtained from it. (author)
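    The core limit-state computation lends itself to a schematic Monte Carlo sketch (the distributions and parameters below are illustrative assumptions, not PETROBRAS's calibrated models):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 1_000_000

# limit state g = R - L: resistance (remaining burst pressure) vs load (pressure)
resistance = rng.lognormal(mean=np.log(10.0), sigma=0.10, size=n)  # MPa, assumed
load = rng.normal(loc=7.0, scale=0.7, size=n)                      # MPa, assumed

pf = np.mean(load > resistance)        # probability that load exceeds resistance
print(f"failure probability {pf:.2e}, reliability index {-norm.ppf(pf):.2f}")
```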

  5. Effect of Magnetic Flux Density and Applied Current on Temperature, Velocity and Entropy Generation Distributions in MHD Pumps

    Directory of Open Access Journals (Sweden)

    M. Kiyasatfar

    2011-01-01

    Full Text Available In the present study, a simulation of steady-state, incompressible and fully developed laminar flow has been conducted in a magnetohydrodynamic (MHD) pump. The governing equations are solved numerically by the finite-difference method. The effect of the magnetic flux density and applied current on the flow and temperature distributions in an MHD pump is investigated. The obtained results showed that controlling the flow and the temperature is possible through control of the applied current and the magnetic flux. Furthermore, the effects of the magnetic flux density and current on entropy generation in the MHD pump are considered. The presented numerical results are in good agreement with the experimental data reported in the literature.
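    The velocity part of such a simulation reduces, in the canonical 1-D case, to the Hartmann problem; the sketch below (illustrative, not the paper's geometry or boundary conditions) solves u'' - Ha^2 u = -1 by finite differences and shows the profile flattening as the magnetic flux density, through the Hartmann number Ha, increases:

```python
import numpy as np

def hartmann_velocity(Ha, N=201):
    """Finite-difference solve of u'' - Ha^2 u = -1 on [-1, 1], u(+-1) = 0."""
    y = np.linspace(-1.0, 1.0, N)
    h = y[1] - y[0]
    A = np.zeros((N, N))
    b = -np.ones(N)
    for i in range(1, N - 1):
        A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
        A[i, i] = -2.0 / h**2 - Ha**2
    A[0, 0] = A[-1, -1] = 1.0        # no-slip walls
    b[0] = b[-1] = 0.0
    return y, np.linalg.solve(A, b)

for Ha in (1.0, 5.0, 20.0):
    y, u = hartmann_velocity(Ha)
    print(f"Ha={Ha:5.1f}  centreline u={u[len(u)//2]:.4f}")   # flattens as Ha rises
```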

  6. Research in applied mathematics, numerical analysis, and computer science

    Science.gov (United States)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

  7. Applied Fourier analysis from signal processing to medical imaging

    CERN Document Server

    Olson, Tim

    2017-01-01

    The first of its kind, this focused textbook serves as a self-contained resource for teaching from scratch the fundamental mathematics of Fourier analysis and illustrating some of its most current, interesting applications, including medical imaging and radar processing. Developed by the author from extensive classroom teaching experience, it provides a breadth of theory that allows students to appreciate the utility of the subject, but at as accessible a depth as possible. With myriad applications included, this book can be adapted to a one or two semester course in Fourier Analysis or serve as the basis for independent study. Applied Fourier Analysis assumes no prior knowledge of analysis from its readers, and begins by making the transition from linear algebra to functional analysis. It goes on to cover basic Fourier series and Fourier transforms before delving into applications in sampling and interpolation theory, digital communications, radar processing, medical imaging, and heat and wave equations. Fo...

  8. Automated analysis of autoradiographic grains density with scanning microspectrophotometer

    International Nuclear Information System (INIS)

    Han Pingrong

    1988-01-01

    Mouse ascites tumour cells were used as the specimen, and 3H-thymidine was used as the tracer. The smears were treated without staining or with Eosin or Giemsa-Wright staining. Automatic analysis of the autoradiographic silver grains was performed with a UNIVAR-MSPM. The relationship between integrated optical density (IOD) and silver grain density (SGD) in cell nuclei was studied. The results showed that the IOD could broadly reflect the SGD, the correlation coefficients being 0.922, 0.9118 and 0.6218. Since the G-W stained cell nuclei appeared blue, their IOD was strongly influenced, whereas the Eosin-stained cell nuclei appeared light red and their IOD was influenced only slightly; the latter staining is recommended. This method can be used for studying DNA synthesis and cell proliferation kinetics

  9. Applying DEA sensitivity analysis to efficiency measurement of Vietnamese universities

    Directory of Open Access Journals (Sweden)

    Thi Thanh Huyen Nguyen

    2015-11-01

    Full Text Available The primary purpose of this study is to measure the technical efficiency of 30 doctorate-granting universities (universities or higher education institutes with PhD training programs) in Vietnam, applying the sensitivity analysis of data envelopment analysis (DEA). The study uses eight sets of input-output specifications, obtained by replacement as well as aggregation/disaggregation of variables. The measurement results allow us to examine the sensitivity of the efficiency of these universities to the sets of variables. The findings also show the impact of the variables on their efficiency and its “sustainability”.
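    The underlying DEA computation can be sketched with an input-oriented CCR envelopment model solved as a linear program (the toy data below are assumptions; the paper's input-output specifications differ):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (m inputs, n DMUs), Y: (s outputs, n DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_in = np.c_[-X[:, [j0]], X]                # sum_j lam_j x_ij <= theta x_i0
    A_out = np.c_[np.zeros((s, 1)), -Y]         # sum_j lam_j y_rj >= y_r0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# toy data: 2 inputs (staff, budget), 1 output (graduates) for 4 universities
X = np.array([[20., 30., 40., 20.], [5., 8., 10., 6.]])
Y = np.array([[100., 150., 160., 80.]])
print([round(dea_ccr_efficiency(X, Y, j), 3) for j in range(4)])
```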

  10. Equivalent Circulation Density Analysis of Geothermal Well by Coupling Temperature

    Directory of Open Access Journals (Sweden)

    Xiuhua Zheng

    2017-02-01

    Full Text Available The accurate control of the wellbore pressure not only prevents lost circulation/blowout and formation fracturing by managing the density of the drilling fluid, but also improves productivity by mitigating reservoir damage. Calculating the wellbore pressure of a geothermal well with constant parameters easily introduces large errors, as the changes of the physical, rheological and thermal properties of drilling fluids with temperature are neglected. This paper investigates wellbore pressure coupling by calculating the temperature distribution with an existing model, fitting the density of the drilling fluid as a function of temperature, and establishing mathematical models to simulate the wellbore pressures, which are expressed as the variation of the Equivalent Circulating Density (ECD) under different conditions. With this method, the temperature and ECDs in the wellbore of the first medium-deep geothermal well, ZK212 in the Yangyi Geothermal Field in Tibet, were determined, and a sensitivity analysis was performed for assumed parameters, i.e., the circulating time, flow rate, geothermal gradient, wellbore diameters, rheological models and flow regimes. The results indicated that the geothermal gradient and flow rate were the most influential parameters on the temperature and ECD distributions, and that additives should be introduced into the drilling fluid carefully, as they change the properties of the drilling fluid and induce a redistribution of temperature. To ensure safe drilling and appropriate velocities for tripping pipe into the hole, the depth and diameter of the wellbore should be considered when controlling the surge pressure.
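    The ECD bookkeeping described here can be sketched as follows (the linear density-temperature fit, geothermal gradient and friction loss are assumptions for illustration, not the ZK212 data):

```python
import numpy as np

g = 9.81                                        # m/s^2

def mud_density(T_c, rho0=1100.0, alpha=-0.4):
    """Linear density-temperature fit in kg/m^3 (rho0 at 20 C, alpha per C)."""
    return rho0 + alpha * (np.asarray(T_c) - 20.0)

def ecd(depths, temps, dp_annulus):
    """Equivalent circulating density at well bottom, kg/m^3.
    depths: node depths (m); temps: temperature profile (C);
    dp_annulus: total annular friction loss (Pa)."""
    rho = mud_density(temps)
    p_hydro = np.trapz(rho * g, depths)         # hydrostatic head over the profile
    return (p_hydro + dp_annulus) / (g * depths[-1])

depths = np.linspace(0, 2500, 26)               # medium-deep geothermal well
temps = 20 + 0.06 * depths                      # assumed 60 C/km gradient
print(ecd(depths, temps, dp_annulus=1.5e6))     # ECD in kg/m^3
```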

  11. Analysis scheme of density modulation experiments for particle confinements study

    International Nuclear Information System (INIS)

    Tanaka, K.; Michael, C.; Kawanata, K.; Tokuzawa, T.; Shoji, M.; Toi, K.; Gao, X.; Jie, Y.X.

    2005-01-01

    Density modulation experiments are one of the most powerful experimental schemes for studying particle confinement. The diffusion coefficient (D) and convection velocity (V), which cannot be evaluated separately from the particle balance in the equilibrium state, can be obtained independently. Moreover, the estimated values of D and V are independent of the absolute value of the particle source rate, which is difficult to obtain experimentally. However, the sensitivities and the interpretation of D and V from modulation experiments require care. In this paper, numerical techniques to solve the particle balance equation for the modulated components are described. Examples of the analysis are shown using data from LHD, and the interpretation of the results of modulation experiments is studied. (author)
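    A minimal forward model for the modulated component can be sketched as a complex linear solve (slab geometry, uniform D and V, and the boundary conditions below are assumptions for illustration; fitting D and V to measured amplitude and phase profiles is the actual analysis task):

```python
import numpy as np

# modulated balance: i*omega*n = D n'' - V n', solved for the complex amplitude n(x)
N, L = 200, 1.0
x = np.linspace(0, L, N)
h = x[1] - x[0]
D, V, omega = 0.5, -1.0, 2 * np.pi * 10.0   # assumed coefficients, 10 Hz modulation

A = np.zeros((N, N), dtype=complex)
for i in range(1, N - 1):
    A[i, i - 1] = D / h**2 + V / (2 * h)
    A[i, i]     = -2 * D / h**2 - 1j * omega
    A[i, i + 1] = D / h**2 - V / (2 * h)
A[0, 0], A[0, 1] = -1.0, 1.0                 # zero-gradient core boundary
A[-1, -1] = 1.0                              # modulated edge amplitude set to 1
b = np.zeros(N, dtype=complex)
b[-1] = 1.0

n_tilde = np.linalg.solve(A, b)
# amplitude decay and phase delay toward the core encode D and V
print(abs(n_tilde[0]), np.angle(n_tilde[0]))
```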

  12. Analysis of concrete beams using applied element method

    Science.gov (United States)

    Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen

    2018-03-01

    The Applied Element Method (AEM) is a displacement-based method of structural analysis. Some of its features are similar to those of the Finite Element Method (FEM). In AEM, the structure is analysed by dividing it into several elements, as in FEM. But in AEM, elements are connected by springs instead of nodes as in the case of FEM. In this paper, the background to AEM is discussed and the necessary equations are derived. To illustrate the application of AEM, it has been used to analyse a plain concrete beam with fixed supports. The analysis is limited to 2-dimensional structures. It was found that the number of springs does not have much influence on the results. AEM could predict deflections and reactions with a reasonable degree of accuracy.

  13. Apply Functional Modelling to Consequence Analysis in Supervision Systems

    DEFF Research Database (Denmark)

    Zhang, Xinxin; Lind, Morten; Gola, Giulio

    2013-01-01

    This paper will first present the purpose and goals of applying a functional modelling approach to consequence analysis by adopting Multilevel Flow Modelling (MFM). MFM models describe a complex system at multiple abstraction levels in both the means-end dimension and the whole-part dimension, and contain causal relations between functions and goals. A rule base system can be developed to trace the causal relations and perform consequence propagations. This paper will illustrate how to use MFM for consequence reasoning by using rule base technology and describe the challenges for integrating functional consequence analysis into practical or online applications in supervision systems. It will also suggest a multiagent solution as the integration architecture for developing tools to facilitate the utilization of the results of functional consequence analysis. Finally, a prototype of the multiagent reasoning system is presented.
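    The rule-based propagation idea can be sketched with a toy causal graph (the event names are illustrative only; MFM's flow functions and abstraction levels are far richer than this):

```python
from collections import deque

# causal relations between functions/goals: cause -> list of effects
rules = {
    "pump_flow_low":        ["tank_level_low"],
    "tank_level_low":       ["suction_pressure_low"],
    "suction_pressure_low": ["pump_cavitation", "production_goal_lost"],
}

def propagate(event):
    """Breadth-first trace of all consequences reachable from a root event."""
    seen, queue = {event}, deque([event])
    while queue:
        cause = queue.popleft()
        for effect in rules.get(cause, []):
            if effect not in seen:
                seen.add(effect)
                queue.append(effect)
    return seen - {event}

print(propagate("pump_flow_low"))
```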

  14. Analysis of Brick Masonry Wall using Applied Element Method

    Science.gov (United States)

    Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen

    2018-03-01

    The Applied Element Method (AEM) is a versatile tool for structural analysis. The analysis is done by discretising the structure, as in the case of the Finite Element Method (FEM). In AEM, however, elements are connected by a set of normal and shear springs instead of nodes. AEM is extensively used for the analysis of brittle materials. A brick masonry wall can be effectively analyzed in the framework of AEM. The composite nature of the masonry wall can be easily modelled using springs: the brick springs and mortar springs are assumed to be connected in series. The brick masonry wall is analyzed and the failure load is determined for different loading cases. The results were used to find the best aspect ratio of brick for strengthening a brick masonry wall.
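    The series-spring idealisation mentioned above can be sketched directly (the stiffness form k = E*d*t/a is the commonly used AEM expression for element size a, strip width d and thickness t; the material numbers are illustrative assumptions):

```python
def spring_stiffness(E, d, t, a):
    """AEM normal-spring stiffness for a face strip: k = E*d*t/a."""
    return E * d * t / a

def series(k1, k2):
    """Equivalent stiffness of two springs in series (brick + mortar joint)."""
    return k1 * k2 / (k1 + k2)

k_brick = spring_stiffness(E=5.0e9, d=0.01, t=0.10, a=0.075)    # brick portion
k_mortar = spring_stiffness(E=1.0e9, d=0.01, t=0.10, a=0.010)   # mortar joint
print(series(k_brick, k_mortar))   # composite normal stiffness across the joint
```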

  15. Applications of uncertainty analysis to visual evaluation of density in radiographs

    International Nuclear Information System (INIS)

    Uchida, Suguru; Ohtsuka, Akiyoshi; Fujita, Hiroshi.

    1981-01-01

    Uncertainty analysis, developed as a method of absolute judgment in psychology, is applied to a method of radiographic image evaluation with perceptual fluctuations and to an examination of visual evaluation of density in radiographs. Subjects are composed of three groups of four neurosurgeons, four radiologic technologists and four nonprofessionals. By using a five-category rating scale, each observer is directed to classify 255 radiographs randomly presented without feedback. Characteristics of each observer and each group can be shown quantitatively by calculated information values. It is also described that bivariate uncertainty analysis or entropy method can be used to calculate the degree of agreement of evaluation. (author)

  16. Applications of uncertainty analysis to visual evaluation of density in radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Uchida, S [Gifu Univ. (Japan); Ohtsuka, A; Fujita, H

    1981-03-01

    Uncertainty analysis, developed as a method of absolute judgment in psychology, is applied to a method of radiographic image evaluation with perceptual fluctuations and to an examination of visual evaluation of density in radiographs. Subjects are composed of three groups of four neurosurgeons, four radiologic technologists and four nonprofessionals. By using a five-category rating scale, each observer is directed to classify 255 radiographs randomly presented without feedback. Characteristics of each observer and each group can be shown quantitatively by calculated information values. It is also described that bivariate uncertainty analysis or entropy method can be used to calculate the degree of agreement of evaluation.

  17. INTOR rescaling for non-intended plasma shape applying preliminary scalings for energy confinement and density limit

    International Nuclear Information System (INIS)

    Knobloch, A.F.

    1986-11-01

    On the basis of a simplified rescaling procedure with INTOR, as of Phase IIA Part 1, serving as reference case, alternative design points are discussed that take into account more recent findings on β-limits, density limits and possible extrapolations with respect to plasma elongation. Two tentative scalings for the energy confinement time as derived from ASDEX results and by Goldston are applied to find minimum size INTOR alternatives, which, of course, could be quite different for the two scalings. Large plasma elongation is needed for getting close to the original outlay for INTOR. The density limit according to some possible scalings requires some adjustment of the plasma temperature to above 10 keV. The neutron wall load, being the important parameter with respect to the INTOR test programme, can be practically kept at the reference level. For ASDEX confinement scaling this requires that an ignition margin of about 2 be adhered to. A sensitivity study on the impact of individual modifications in input assumptions of the order of 10% shows that only a limited range of such alternatives remains acceptable. (orig.)

  18. Applied research and development of neutron activation analysis

    International Nuclear Information System (INIS)

    Chung, Yong Sam; Moon, Jong Hwa; Kim, Sun Ha; Baek, Sung Ryel; Kim, Young Gi; Jung, Hwan Sung; Park, Kwang Won; Kang, Sang Hun; Lim, Jong Myoung

    2003-05-01

    The aims of this project are to establish a quality control system for Neutron Activation Analysis (NAA), in response to increasing industrial demand for standard analytical methods, and to prepare and validate standard operating procedures for NAA through practical testing of different analytical items. The R and D implementations of the analytical quality system using the neutron irradiation facility and gamma-ray measurement system, and the automation of the NAA facility in the HANARO research reactor, are as follows: 1) establishment of an NAA quality control system for the maintenance of best measurement capability and the promotion of the utilization of the HANARO research reactor; 2) improvement of analytical sensitivity for industrial applied technologies and establishment of certified standard procedures; 3) standardization and development of Prompt Gamma-ray Activation Analysis (PGAA) technology

  19. Dimensional analysis and extended hydrodynamic theory applied to long-rod penetration of ceramics

    Directory of Open Access Journals (Sweden)

    J.D. Clayton

    2016-08-01

    Full Text Available Principles of dimensional analysis are applied in a new interpretation of the penetration of ceramic targets subjected to hypervelocity impact. The analysis results in a power series representation – in terms of inverse velocity – of the normalized depth of penetration that reduces to the hydrodynamic solution at high impact velocities. Specifically considered are test data from four literature sources involving penetration of confined thick ceramic targets by tungsten long-rod projectiles. The ceramics are AD-995 alumina, aluminum nitride, silicon carbide, and boron carbide. The test data can be accurately represented by the linear form of the power series, whereby the same value of a single fitting parameter applies remarkably well for all four ceramics. Comparison of the present model with others in the literature (e.g., Tate's theory) demonstrates a target resistance stress that depends on impact velocity, linearly in the limiting case. Comparison of the present analysis with recent research involving penetration of thin ceramic tiles at lower typical impact velocities confirms the importance of target properties related to fracture and shear strength at the Hugoniot Elastic Limit (HEL) only in the latter. In contrast, in the former (i.e., hypervelocity and thick-target experiments), the current analysis demonstrates a dominant dependence of penetration depth on target mass density alone. Such comparisons suggest transitions from microstructure-controlled to density-controlled penetration resistance with increasing impact velocity and ceramic target thickness.
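    One plausible reading of the linear form of the series can be sketched as follows (the sign convention and the value of the fitting constant c are assumptions, not the paper's fit):

```python
import numpy as np

def penetration_ratio(v, rho_p, rho_t, c):
    """P/L ~ sqrt(rho_p/rho_t) * (1 - c/v): linear inverse-velocity term."""
    return np.sqrt(rho_p / rho_t) * (1.0 - c / v)

v = np.linspace(1500.0, 5000.0, 5)                  # impact velocity, m/s
print(penetration_ratio(v, rho_p=17600.0, rho_t=3900.0, c=800.0))
# as v -> infinity this recovers the hydrodynamic limit sqrt(rho_p/rho_t)
```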

  20. Automated SEM Modal Analysis Applied to the Diogenites

    Science.gov (United States)

    Bowman, L. E.; Spilde, M. N.; Papike, James J.

    1996-01-01

    Analysis of volume proportions of minerals, or modal analysis, is routinely accomplished by point counting on an optical microscope, but the process, particularly on brecciated samples such as the diogenite meteorites, is tedious and prone to error by misidentification of very small fragments, which may make up a significant volume of the sample. Precise volume percentage data can be gathered on a scanning electron microscope (SEM) utilizing digital imaging and an energy dispersive spectrometer (EDS). This form of automated phase analysis reduces error, and at the same time provides more information than could be gathered using simple point counting alone, such as particle morphology statistics and chemical analyses. We have previously studied major, minor, and trace-element chemistry of orthopyroxene from a suite of diogenites. This abstract describes the method applied to determine the modes on this same suite of meteorites and the results of that research. The modal abundances thus determined add additional information on the petrogenesis of the diogenites. In addition, low-abundance phases such as spinels were located for further analysis by this method.

  1. Diffusing wave spectroscopy applied to material analysis and process control

    International Nuclear Information System (INIS)

    Lloyd, Christopher James

    1997-01-01

    coefficient. The inherent instability of high density suspensions instigated high speed analysis techniques capable of monitoring suspensions that were undergoing rapid change as well as suggesting novel methods for the evaluation of the state of sample dispersion. (author)

  2. Recurrence Density Enhanced Complex Networks for Nonlinear Time Series Analysis

    Science.gov (United States)

    Costa, Diego G. De B.; Reis, Barbara M. Da F.; Zou, Yong; Quiles, Marcos G.; Macau, Elbert E. N.

    We introduce a new method, entitled Recurrence Density Enhanced Complex Network (RDE-CN), to properly analyze nonlinear time series. Our method first transforms a recurrence plot into a figure with a reduced number of points that preserves the main and fundamental recurrence properties of the original plot. This resulting figure is then reinterpreted as a complex network, which is further characterized by network statistical measures. We illustrate the computational power of the RDE-CN approach with time series from both the logistic map and experimental fluid flows, which show that our method distinguishes different dynamics as well as traditional recurrence analysis. Therefore, the proposed methodology characterizes the recurrence matrix adequately while using a reduced set of points from the original recurrence plots.
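    The recurrence-plot-to-network step (without RDE-CN's density-based point reduction, which is the paper's contribution) can be sketched as follows:

```python
import numpy as np

def logistic(r, x0, n):
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = r * x[i] * (1.0 - x[i])
    return x

x = logistic(4.0, 0.4, 500)                  # chaotic regime
dist = np.abs(x[:, None] - x[None, :])       # pairwise distances
eps = 0.05 * (x.max() - x.min())             # recurrence threshold
A = (dist < eps).astype(int)                 # recurrence matrix
np.fill_diagonal(A, 0)                       # drop self-recurrences

degree = A.sum(axis=1)                       # network statistical measure
print("recurrence rate:", A.mean(), "mean degree:", degree.mean())
```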

  3. A mechanistic analysis of density dependence in algal population dynamics

    Directory of Open Access Journals (Sweden)

    Adrian eBorlestean

    2015-04-01

    Full Text Available Population density regulation is a fundamental principle in ecology, but the specific process underlying functional expression of density dependence remains to be fully elucidated. One view contends that patterns of density dependence are largely fixed across a species irrespective of environmental conditions, whereas another is that the strength and expression of density dependence are fundamentally variable depending on the nature of exogenous or endogenous constraints acting on the population. We conducted a study investigating the expression of density dependence in Chlamydomonas spp. grown under a gradient from low to high nutrient density. We predicted that the relationship between per capita growth rate (pgr and population density would vary from concave up to concave down as nutrient density became less limiting and populations experienced weaker density regulation. Contrary to prediction, we found that the relationship between pgr and density became increasingly concave-up as nutrient levels increased. We also found that variation in pgr increased, and pgr levels reached higher maxima in nutrient-limited environments. Most likely, these results are attributable to population growth suppression in environments with high intraspecific competition due to limited nutrient resources. Our results suggest that density regulation is strongly variable depending on exogenous and endogenous processes acting on the population, implying that expression of density dependence depends extensively on local conditions. Additional experimental work should reveal the mechanisms influencing how the expression of density dependence varies across populations through space and time.

  4. Density Functional Theory applied to magnetic materials: Mn{sub 3}O{sub 4} at different hybrid functionals

    Energy Technology Data Exchange (ETDEWEB)

    Ribeiro, R.A.P. [Department of Chemistry, State University of Ponta Grossa, Av. General Carlos Cavalcanti, 4748, 84030-900 Ponta Grossa, PR (Brazil); Lazaro, S.R. de, E-mail: srlazaro@upeg.br [Department of Chemistry, State University of Ponta Grossa, Av. General Carlos Cavalcanti, 4748, 84030-900 Ponta Grossa, PR (Brazil); Pianaro, S.A. [Department of Materials Engineering, State University of Ponta Grossa, Av. General Carlos Cavalcanti, 4748, 84030-900 Ponta Grossa, PR (Brazil)

    2015-10-01

    Antiferromagnetic Mn{sub 3}O{sub 4} in the spinel structure was investigated employing Density Functional Theory with different hybrid functionals at their default HF exchange percentages. Structural, electronic and magnetic properties were examined. Structural results were in agreement with experimental and Hartree–Fock results, showing that the octahedral site is distorted by the Jahn–Teller effect, which changes the electron density distribution. Band-gap results for the B3LYP and B3PW hybrid functionals were closer to experiment than those for PBE0. Mulliken Population Analysis revealed magnetic moments very close to the ideal d{sup 4} and d{sup 5} electron configurations of Mn{sup 3+} and Mn{sup 2+}, respectively. Electron density maps are useful for determining that oxygen atoms mediate the electron transfer between octahedral and tetrahedral clusters. Magnetic properties were investigated from theoretical results for the exchange coupling constants. Intratetrahedral and tetra-octahedral interactions were observed to be antiferromagnetic, whereas octahedral sites presented antiferromagnetic interactions in the same layer and ferromagnetic interactions in adjacent layers. The results showed that only default B3LYP successfully described the magnetic properties of antiferromagnetic materials in agreement with experimental results. - Highlights: • We study structural, electronic and magnetic properties of antiferromagnetic Mn{sub 3}O{sub 4}. • B3LYP, B3PW and PBE0 hybrid functionals are compared. • B3LYP and B3PW hybrid functionals are better for band-gap calculations. • Only default B3LYP successfully described exchange interactions for Mn{sub 3}O{sub 4}.

  5. Numerical analysis of energy density and particle density in high energy heavy-ion collisions

    International Nuclear Information System (INIS)

    Fu Yuanyong; Lu Zhongdao

    2004-01-01

    The energy density and particle density in high energy heavy-ion collisions are calculated separately with an infinite series expansion method and with Gauss-Laguerre numerical integration formulas, and the results of the two methods are compared; the higher-order terms and linear terms in the series expansion are also compared. The results show that the Gauss-Laguerre formulas are a good method for calculations in high energy heavy-ion collisions. (author)
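    The two numerical routes can be illustrated on a thermal Bose-type integral of the kind that appears in energy-density estimates (the specific integrand is an assumed stand-in, not the paper's full expression):

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

exact = np.pi**4 / 15.0                          # integral of x^3/(e^x - 1), x in (0, inf)

# route 1: Gauss-Laguerre quadrature, f(x) = x^3/(1 - e^{-x}) against weight e^{-x}
xk, wk = laggauss(40)
gl = np.sum(wk * xk**3 / (1.0 - np.exp(-xk)))

# route 2: infinite series expansion, integral = sum over n of 6/n^4
series = 6.0 * np.sum(1.0 / np.arange(1, 200)**4)

print(exact, gl, series)
```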

  6. Applying Authentic Data Analysis in Learning Earth Atmosphere

    Science.gov (United States)

    Johan, H.; Suhandi, A.; Samsudin, A.; Wulan, A. R.

    2017-09-01

    The aim of this research was to develop earth science learning material, especially on the earth's atmosphere, supported by science research with authentic data analysis to enhance reasoning. Various earth and space science phenomena require reasoning. This research used an experimental design with one group and a pre-test/post-test. 23 pre-service physics teachers participated in this research. An essay test was conducted to obtain data on reasoning ability and was analyzed quantitatively. An observation sheet was used to capture phenomena during the learning process. The results showed that students' reasoning ability improved from unidentified and no reasoning to evidence-based reasoning and inductive/deductive rule-based reasoning. Authentic data were examined using the Grid Analysis Display System (GrADS). Visualization from GrADS facilitated students to correlate the concepts and bring the real condition of nature into classroom activity. It also helped students to reason about phenomena related to earth and space science concepts. It can be concluded that applying authentic data analysis in the learning process can help to enhance students' reasoning. This study is expected to help lecturers bring the results of geoscience research into the learning process and facilitate students' understanding of concepts.

  7. Utility of the pooling approach as applied to whole genome association scans with high-density Affymetrix microarrays

    Directory of Open Access Journals (Sweden)

    Gray Joanna

    2010-11-01

    Full Text Available Abstract Background We report an attempt to extend the previously successful approach of combining SNP (single nucleotide polymorphism microarrays and DNA pooling (SNP-MaP employing high-density microarrays. Whereas earlier studies employed a range of Affymetrix SNP microarrays comprising from 10 K to 500 K SNPs, this most recent investigation used the 6.0 chip which displays 906,600 SNP probes and 946,000 probes for the interrogation of CNVs (copy number variations. The genotyping assay using the Affymetrix SNP 6.0 array is highly demanding on sample quality due to the small feature size, low redundancy, and lack of mismatch probes. Findings In the first study published so far using this microarray on pooled DNA, we found that pooled cheek swab DNA could not accurately predict real allele frequencies of the samples that comprised the pools. In contrast, the allele frequency estimates using blood DNA pools were reasonable, although inferior compared to those obtained with previously employed Affymetrix microarrays. However, it might be possible to improve performance by developing improved analysis methods. Conclusions Despite the decreasing costs of genome-wide individual genotyping, the pooling approach may have applications in very large-scale case-control association studies. In such cases, our study suggests that high-quality DNA preparations and lower density platforms should be preferred.

  8. Applied research and development of neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Yong Sam; Moon, Jong Hwa; Kim, Sun Ha; Bak, Sung Ryel; Park, Yong Chul; Kim, Young Ki; Chung, Hwan Sung; Park, Kwang Won; Kang, Sang Hun

    2000-05-01

    This report presents the results of research and development as follows: improvement of the neutron irradiation facilities and counting system, and development of an automation system and capsules for NAA in HANARO; improvement of analytical procedures and establishment of an analytical quality control and assurance system; and applied research and development in environment, industry and human health, and its standardization. For the identification and standardization of analytical methods, environmental, biological and polymer samples were analyzed and the uncertainty of measurement was estimated. Data intercomparisons and proficiency tests were also performed. Using airborne particulate matter chosen as an environmental indicator, the trace element concentrations of samples collected at urban and rural sites were determined, and statistics and factor analysis were then carried out to investigate emission sources. An international cooperative research project was carried out for the utilization of nuclear techniques.

  9. Applied research and development of neutron activation analysis

    International Nuclear Information System (INIS)

    Chung, Yong Sam; Moon, Jong Hwa; Kim, Sun Ha; Bak, Sung Ryel; Park, Yong Chul; Kim, Young Ki; Chung, Hwan Sung; Park, Kwang Won; Kang, Sang Hun

    2000-05-01

    This report presents the results of research and development as follows: improvement of the neutron irradiation facilities and counting system, and development of an automation system and capsules for NAA in HANARO; improvement of analytical procedures and establishment of an analytical quality control and assurance system; and applied research and development in environment, industry and human health, and its standardization. For the identification and standardization of analytical methods, environmental, biological and polymer samples were analyzed and the uncertainty of measurement was estimated. Data intercomparisons and proficiency tests were also performed. Using airborne particulate matter chosen as an environmental indicator, the trace element concentrations of samples collected at urban and rural sites were determined, and statistics and factor analysis were then carried out to investigate emission sources. An international cooperative research project was carried out for the utilization of nuclear techniques

  10. Sensitivity analysis for reactivity and power density investigations in nuclear reactors

    International Nuclear Information System (INIS)

    Naguib, K.; Morcos, H.N.; Sallam, O.H.; Abdelsamei, SH.

    1993-01-01

    Sensitivity analysis theory based on variational functional approaches was applied to evaluate the sensitivities of eigenvalues and power densities to variations of the absorber concentration in the reactor core. The practical usefulness of this method is illustrated by considering test cases. The results indicate that this method is as accurate as direct calculations, yet it provides an economical means of saving computational time since it requires fewer calculations. The SARC-1/2 codes have been written in Fortran-77 to solve this problem. 3 tabs, 1 fig.

  11. Applying Conjoint Analysis to Study Attitudes of Thai Government Organisations

    Directory of Open Access Journals (Sweden)

    Natee Suriyanon

    2012-11-01

    Full Text Available This article presents the application of choice-based conjoint analysis to analyse the attitude of Thai government organisations towards the restriction of the contractor's right to claim compensation for unfavourable effects from undesirable events. The analysis reveals that the organisations want to restrict only 6 out of 14 types of the claiming rights that were studied. The right that they want to restrict most is the right to claim for additional direct costs due to force majeure. They are willing to pay between 0.087% - 0.210% of the total project direct cost for restricting each type of contractor right. The total additional cost for restricting all six types of rights that the organisations are willing to pay is 0.882%. The last section of this article applies the knowledge gained from a choice-based conjoint analysis experiment to the analysis of the standard contract of the Thai government. The analysis reveals three types of rights where Thai government organisations are willing to forego restrictions, but the present standard contract does not grant such rights.

  12. Exploring charge density analysis in crystals at high pressure: data collection, data analysis and advanced modelling.

    Science.gov (United States)

    Casati, Nicola; Genoni, Alessandro; Meyer, Benjamin; Krawczuk, Anna; Macchi, Piero

    2017-08-01

    The possibility to determine electron-density distribution in crystals has been an enormous breakthrough, stimulated by a favourable combination of equipment for X-ray and neutron diffraction at low temperature, by the development of simplified, though accurate, electron-density models refined from the experimental data, and by progress in charge density analysis, often in combination with theoretical work. Many years after the first successful charge density determination and analysis, scientists face new challenges, for example: (i) determination of the finer details of the electron-density distribution in the atomic cores, (ii) simultaneous refinement of electron charge and spin density or (iii) measuring crystals under perturbation. In this context, the possibility of obtaining experimental charge density at high pressure has recently been demonstrated [Casati et al. (2016). Nat. Commun. 7, 10901]. This paper reports on the necessities and pitfalls of this new challenge, focusing on the species syn-1,6:8,13-biscarbonyl[14]annulene. The experimental requirements, the expected data quality and data corrections are discussed in detail, including warnings about possible shortcomings. At the same time, new modelling techniques are proposed which could enable specific information to be extracted from the limited and less accurate observations, such as the degree of localization of double bonds, which is fundamental to the scientific case under examination.

  13. The Evidence-Based Practice of Applied Behavior Analysis.

    Science.gov (United States)

    Slocum, Timothy A; Detrich, Ronnie; Wilczynski, Susan M; Spencer, Trina D; Lewis, Teri; Wolfe, Katie

    2014-05-01

    Evidence-based practice (EBP) is a model of professional decision-making in which practitioners integrate the best available evidence with client values/context and clinical expertise in order to provide services for their clients. This framework provides behavior analysts with a structure for pervasive use of the best available evidence in the complex settings in which they work. This structure recognizes the need for clear and explicit understanding of the strength of evidence supporting intervention options, the important contextual factors including client values that contribute to decision making, and the key role of clinical expertise in the conceptualization, intervention, and evaluation of cases. Opening the discussion of EBP in this journal, Smith (The Behavior Analyst, 36, 7-33, 2013) raised several key issues related to EBP and applied behavior analysis (ABA). The purpose of this paper is to respond to Smith's arguments and extend the discussion of the relevant issues. Although we support many of Smith's (The Behavior Analyst, 36, 7-33, 2013) points, we contend that Smith's definition of EBP is significantly narrower than definitions that are used in professions with long histories of EBP and that this narrowness conflicts with the principles that drive applied behavior analytic practice. We offer a definition and framework for EBP that aligns with the foundations of ABA and is consistent with well-established definitions of EBP in medicine, psychology, and other professions. In addition to supporting the systematic use of research evidence in behavior analytic decision making, this definition can promote clear communication about treatment decisions across disciplines and with important outside institutions such as insurance companies and granting agencies.

  14. High-density polymorphisms analysis of 23 candidate genes for association with bone mineral density.

    Science.gov (United States)

    Giroux, Sylvie; Elfassihi, Latifa; Clément, Valérie; Bussières, Johanne; Bureau, Alexandre; Cole, David E C; Rousseau, François

    2010-11-01

    Osteoporosis is a bone disease characterized by low bone mineral density (BMD), a highly heritable and polygenic trait. Women are more prone than men to develop osteoporosis due to a lower peak bone mass and accelerated bone loss at menopause. Peak bone mass has been convincingly shown to be due to genetic factors with heritability up to 80%. Menopausal bone loss has been shown to have around 38% to 49% heritability depending on the site studied. To have more statistical power to detect small genetic effects we focused on premenopausal women. We studied 23 candidate genes, some involved in calcium and vitamin-D regulation and others because estrogens strongly induced their gene expression in mice where it was correlated with humerus trabecular bone density. High-density polymorphisms were selected to cover the entire gene variability and 231 polymorphisms were genotyped in a first sample of 709 premenopausal women. Positive associations were retested in a second, independent, sample of 673 premenopausal women. Ten polymorphisms remained associated with BMD in the combined samples and one was further associated in a large sample of postmenopausal women (1401 women). This associated polymorphism was located in the gene CSF3R (granulocyte colony stimulating factor receptor) that had never been associated with BMD before. The results reported in this study suggest a role for CSF3R in the determination of bone density in women. Copyright © 2010 Elsevier Inc. All rights reserved.

  15. COMBINATION OF DENSITY AND ENERGY MODULATION IN MICROBUNCHING ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Cheng Ying [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Li, Rui [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2016-05-01

    Microbunching instability (MBI) has been one of the most challenging issues in the transport of high-brightness electron beams for modern recirculating or energy recovery linac machines. Recently we have developed and implemented a Vlasov solver [1] to calculate the microbunching gain for an arbitrary beamline lattice, based on the extension of existing theoretical formulation [2-4] for the microbunching amplification from an initial density perturbation to the final density modulation. For more thorough analyses, in addition to the case of (initial) density to (final) density amplification, we extend in this paper the previous formulation to more general cases, including energy to density, density to energy and energy to energy amplifications for a recirculation machine. Such semi-analytical formulae are then incorporated into our Vlasov solver, and qualitative agreement is obtained when the semi-analytical Vlasov results are compared with particle tracking simulation using ELEGANT [5].

  16. Density and Structure Analysis of Molten Ni-W Alloys

    Institute of Scientific and Technical Information of China (English)

    Feng XIAO; Liang FANG

    2004-01-01

    The density of molten Ni and Ni-W alloys was measured in the temperature range of 1773-1873 K with a sessile drop method. The density of molten Ni and Ni-W alloys tends to decrease with increasing temperature. The density and molar volume of the alloys tend to increase with increasing W concentration in the alloys. The calculation results show ideal mixing of the Ni-W alloys.
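
    The two reported trends are easy to reproduce in miniature: a linear fit of density against temperature (negative slope) and an ideal-mixing molar volume. The numbers below are illustrative placeholders, not the measured values from the paper.

        import numpy as np

        # Hypothetical sessile-drop readings for one alloy composition
        T = np.array([1773.0, 1798.0, 1823.0, 1848.0, 1873.0])   # K
        rho = np.array([7.85, 7.83, 7.81, 7.79, 7.77])           # g/cm^3

        slope, intercept = np.polyfit(T, rho, 1)
        print(f"rho(T) ~ {intercept:.3f} {slope:+.2e}*T")        # slope < 0, as reported

        # Ideal mixing: the alloy molar volume is the mole-fraction average
        V_Ni, V_W, x_W = 7.43, 10.2, 0.10                        # cm^3/mol, illustrative
        V_alloy = (1 - x_W) * V_Ni + x_W * V_W
        print(f"V_alloy = {V_alloy:.2f} cm^3/mol")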

  17. To apply or not to apply: a survey analysis of grant writing costs and benefits.

    Science.gov (United States)

    von Hippel, Ted; von Hippel, Courtney

    2015-01-01

    We surveyed 113 astronomers and 82 psychologists active in applying for federally funded research on their grant-writing history between January 2009 and November 2012. We collected demographic data, effort levels, success rates, and perceived non-financial benefits from writing grant proposals. We find that the average proposal takes 116 PI hours and 55 CI hours to write, although time spent writing was not related to whether the grant was funded. Effort did translate into success, however, as academics who wrote more grants received more funding. Participants indicated modest non-monetary benefits from grant writing, with psychologists reporting a somewhat greater benefit overall than astronomers. These perceptions of non-financial benefits were unrelated to how many grants investigators applied for, the number of grants they received, or the amount of time they devoted to writing their proposals. We also explored the number of years an investigator can afford to apply unsuccessfully for research grants; our analyses suggest that funding rates below approximately 20%, commensurate with current NIH and NSF funding, are likely to drive at least half of the active researchers away from federally funded research. We conclude with recommendations and suggestions for individual investigators and for department heads.
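
    The attrition argument can be made concrete with a one-line probability model: if proposals succeed independently at rate p and an investigator submits g per year, the chance of n straight unfunded years is (1-p)^(g*n). The parameter values below are illustrative, not the survey's estimates.

        def p_unfunded(p, g, years):
            """Chance of no award in `years` years at success rate p, g proposals/yr."""
            return (1.0 - p) ** (g * years)

        for p in (0.10, 0.20, 0.30):
            print(f"p = {p:.0%}: P(3 unfunded years, 2 proposals/yr) = {p_unfunded(p, 2, 3):.2f}")
        # At ~20% success and 2 proposals/year, roughly a quarter of PIs see no award in 3 years.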

  18. To apply or not to apply: a survey analysis of grant writing costs and benefits.

    Directory of Open Access Journals (Sweden)

    Ted von Hippel

    Full Text Available We surveyed 113 astronomers and 82 psychologists active in applying for federally funded research on their grant-writing history between January 2009 and November 2012. We collected demographic data, effort levels, success rates, and perceived non-financial benefits from writing grant proposals. We find that the average proposal takes 116 PI hours and 55 CI hours to write, although time spent writing was not related to whether the grant was funded. Effort did translate into success, however, as academics who wrote more grants received more funding. Participants indicated modest non-monetary benefits from grant writing, with psychologists reporting a somewhat greater benefit overall than astronomers. These perceptions of non-financial benefits were unrelated to how many grants investigators applied for, the number of grants they received, or the amount of time they devoted to writing their proposals. We also explored the number of years an investigator can afford to apply unsuccessfully for research grants; our analyses suggest that funding rates below approximately 20%, commensurate with current NIH and NSF funding, are likely to drive at least half of the active researchers away from federally funded research. We conclude with recommendations and suggestions for individual investigators and for department heads.

  19. High-density polyethylene dosimetry by transvinylene FTIR analysis

    DEFF Research Database (Denmark)

    McLaughlin, W.L.; Silverman, J.; Al-Sheikhly, M.

    1999-01-01

    The formation of transvinylene unsaturation, -CH=CH-, due to free-radical or cationic-initiated dehydrogenation by irradiation, is a basic reaction in polyethylene and is useful for dosimetry at high absorbed doses. The radiation-enhanced infrared absorption, having a maximum at nu = 965 cm(-1) (lambda = 10.36 mu m), is stable in air and can be measured by Fourier-transform infrared (FTIR) spectrophotometry. The quantitative analysis is a useful means of product end-point dosimetry for radiation processing with gamma rays and electrons, where polyethylene is a component of the processed product. The useful dose range of 0.053 cm thick high-density polyethylene film (rho = 0.961 g cm(-3); melt index = 0.8 dg min(-1)), for irradiations by (60)Co gamma radiation and 2.0 and 0.4 MeV electron beams in deaerated atmosphere (N2 gas), is about 50-10(3) kGy for FTIR transvinylene measurement.

  20. Rotavirus - Global research density equalizing mapping and gender analysis.

    Science.gov (United States)

    Köster, Corinna; Klingelhöfer, Doris; Groneberg, David A; Schwarzer, Mario

    2016-01-02

    Rotaviruses are the leading cause of dehydration and severe diarrheal disease in infants and young children worldwide. An increasing number of related publications makes it a crucial challenge to identify the relevant scientific output. Scientometric analyses are therefore helpful to evaluate both the quantity and the quality of worldwide research activities on Rotavirus. Up to now, no in-depth global scientometric analysis of Rotavirus publications has been carried out. This study used scientometric tools and the method of density equalizing mapping to visualize the differences in worldwide research effort on Rotavirus. The aim of the study was to compare scientific output geographically and over time by using an in-depth data analysis and New Quality and Quantity Indices in Science (NewQIS) tools. Furthermore, a gender analysis was part of the data interpretation. We retrieved all Rotavirus-related articles published during the time period from 1900 to 2013 from the Web of Science using a defined search term. These items were analyzed regarding quantitative and qualitative aspects and visualized with the help of bibliometric methods and the technique of density equalizing mapping to show the differences in worldwide research efforts. This work aimed to extend the current NewQIS platform. The 5906 Rotavirus-associated articles were published in 138 countries from 1900 to 2013. The USA authored 2037 articles, equal to 34.5% of all published items, followed by Japan with 576 articles and the United Kingdom - as the most productive representative of the European countries - with 495 articles. Furthermore, the USA established the most cooperations with other countries and was found to be at the center of an international collaborative network. We performed a gender analysis of authors per country (the threshold was set at a publishing output of more than 100 articles by more than 50 authors whose names could be

  1. Applied genre analysis: a multi-perspective model

    Directory of Open Access Journals (Sweden)

    Vijay K Bhatia

    2002-04-01

    Full Text Available Genre analysis can be viewed from two different perspectives: it may be seen as a reflection of the complex realities of the world of institutionalised communication, or it may be seen as a pedagogically effective and convenient tool for the design of language teaching programmes, often situated within simulated contexts of classroom activities. This paper makes an attempt to understand and resolve the tension between these two seemingly contentious perspectives to answer the question: "Is generic description a reflection of reality, or a convenient fiction invented by applied linguists?". The paper also discusses issues related to the nature and use of linguistic description in a genre-based educational enterprise, claiming that instead of using generic descriptions as models for linguistic reproduction of conventional forms to respond to recurring social contexts, as is often the case in many communication based curriculum contexts, they can be used as analytical resource to understand and manipulate complex inter-generic and multicultural realisations of professional discourse, which will enable learners to use generic knowledge to respond to novel social contexts and also to create new forms of discourse to achieve pragmatic success as well as other powerful human agendas.

  2. SUCCESS CONCEPT ANALYSIS APPLIED TO THE INFORMATION TECHNOLOGY PROJECT MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Cassio C. Montenegro Duarte

    2012-05-01

    Full Text Available This study evaluates the concept of success in project management that is applicable to the IT universe, starting from the classical theory associated with the techniques of project management. It applies this theoretical analysis to the context of information technology in enterprises, drawing on the classic literature of traditional project management and focusing on its application in business information technology. From the literature developed in the first part of the study, four propositions were prepared, which formed the basis for the development of field research with three large companies that develop Information Technology projects. The methodology adopted was a multiple case study. Empirical evidence suggests that the concept of success found in the classical project management literature carries over to the management of IT projects. The study also showed that it is possible to create a standard model for IT projects and replicate it in future derivative projects; this depends on the learning acquired at the end of a long and continuous process and on the sponsorship of senior management, and it ultimately results in the model's incorporation into the company culture.

  3. Applying importance-performance analysis to patient safety culture.

    Science.gov (United States)

    Lee, Yii-Ching; Wu, Hsin-Hung; Hsieh, Wan-Lin; Weng, Shao-Jen; Hsieh, Liang-Po; Huang, Chih-Hsuan

    2015-01-01

    Sexton et al.'s (2006) safety attitudes questionnaire (SAQ) has been widely used to assess staff attitudes towards patient safety in healthcare organizations. However, to date there have been few studies that discuss the perceptions of patient safety of both hospital staff and upper management. The purpose of this paper is to improve and to develop better strategies regarding patient safety in healthcare organizations. The Chinese version of the SAQ, based on the Taiwan Joint Commission on Hospital Accreditation, is used to evaluate the perceptions of hospital staff. The current study then applies the importance-performance analysis technique to identify the major strengths and weaknesses of the safety culture. The results show that teamwork climate, safety climate, job satisfaction, stress recognition and working conditions are major strengths and should be maintained in order to provide a better patient safety culture. On the contrary, perceptions of management and hospital handoffs and transitions are important weaknesses and should be improved immediately. Research limitations/implications - The research is restricted in generalizability. The assessment of patient safety culture covered only physicians and registered nurses. It would be interesting to further evaluate other staff's (e.g. technicians, pharmacists and others) opinions regarding patient safety culture in the hospital. Few studies have clearly evaluated the perceptions of healthcare organization management regarding patient safety culture. Investigating key characteristics (either strengths or weaknesses) that healthcare organizations should focus on enables healthcare managers to take more effective actions to improve the level of patient safety.
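
    The quadrant logic of importance-performance analysis is simple enough to sketch: each dimension is compared with the grand means of importance and performance. The scores below are invented placeholders for the SAQ dimensions, not the study's data.

        dims = {                      # (importance, performance), e.g. on a 5-point scale
            "teamwork climate":      (4.5, 4.2),
            "safety climate":        (4.4, 4.1),
            "job satisfaction":      (4.0, 4.0),
            "stress recognition":    (3.8, 3.9),
            "working conditions":    (4.1, 4.0),
            "perceptions of mgmt":   (4.3, 3.2),
            "handoffs/transitions":  (4.2, 3.1),
        }
        mi = sum(v[0] for v in dims.values()) / len(dims)   # mean importance
        mp = sum(v[1] for v in dims.values()) / len(dims)   # mean performance

        for name, (imp, perf) in dims.items():
            quadrant = ("keep up the good work" if imp >= mi and perf >= mp else
                        "concentrate here"      if imp >= mi else
                        "low priority"          if perf <  mp else
                        "possible overkill")
            print(f"{name:22s} -> {quadrant}")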

  4. Applying multicriteria analysis for choosing the best marination for pork

    Directory of Open Access Journals (Sweden)

    Nieto VMOS

    2015-01-01

    Full Text Available Objective. This research aimed to choose the best marination solution using the Analytic Hierarchy Process (AHP). Materials and methods. Pork meat samples were collected in a commercial slaughterhouse and randomly distributed among four treatments based on blends of three different salts. Color, pH, retention of the solution, exudate and cooking loss, shear force and sensory attributes were assessed. Multicriteria analysis using AHP was applied to the results in order to choose the best overall marination solution. The criteria used for selection were the physical and sensory characteristics of the meat, and the marination solutions were ranked on these criteria. Results. The results showed that the combination of the salts (Na2CO3 + NaCl + Na5P3O10) was the best alternative, followed by the solutions of (Na2CO3 + NaCl) and (Na5P3O10 + NaCl). Conclusions. All tested solutions, with the salts used alone or in combination, led to better physical and sensory attributes than non-marinated meat.
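
    For readers unfamiliar with AHP, the core computation is a principal-eigenvector weighting of a reciprocal pairwise-comparison matrix, plus a consistency check. The sketch below uses a hypothetical 3x3 judgment matrix, not the study's comparisons.

        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 3.0],
                      [1/5, 1/3, 1.0]])          # reciprocal pairwise judgments

        w, v = np.linalg.eig(A)
        k = np.argmax(w.real)
        weights = np.abs(v[:, k].real)
        weights /= weights.sum()                 # priority vector (criterion weights)

        n = A.shape[0]
        CI = (w[k].real - n) / (n - 1)           # Saaty consistency index
        CR = CI / 0.58                           # random index RI = 0.58 for n = 3
        print(weights.round(3), round(CR, 3))    # CR < 0.1 -> acceptably consistent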

  5. Possibility of applying the gamma-gamma method to the in situ determination of uranium-ore densities

    International Nuclear Information System (INIS)

    Czubek, J.; Guitton, J.

    1965-01-01

    The principles of the gamma-gamma method are reviewed. It is shown in particular that, under certain conditions, the method makes it possible to obtain a representative measurement of the electronic density. Chemical analyses have been carried out on samples obtained from uranium deposits. The results show that an exact correlation exists between the mass density and the electronic density. It is therefore possible to envisage measuring the density of uranium-bearing rocks by the gamma-gamma method. (authors)

  6. Geospatial Analysis of Pediatric EMS Run Density and Endotracheal Intubation

    Directory of Open Access Journals (Sweden)

    Matthew Hansen

    2016-09-01

    Full Text Available Introduction: The association between geographic factors, including transport distance, and pediatric emergency medical services (EMS) run clustering on out-of-hospital pediatric endotracheal intubation is unclear. The objective of this study was to determine if endotracheal intubation procedures are more likely to occur at greater distances from the hospital and near clusters of pediatric calls. Methods: This was a retrospective observational study including all EMS runs for patients less than 18 years of age from 2008 to 2014 in a geographically large and diverse Oregon county that includes densely populated urban areas near Portland and remote rural areas. We geocoded scene addresses using the automated address locator created in the cloud-based mapping platform ArcGIS, supplemented with manual address geocoding for the remaining cases. We then used the Getis-Ord Gi spatial statistic feature in ArcGIS to map statistically significant spatial clusters (hot spots) of pediatric EMS runs throughout the county. We then superimposed all intubation procedures performed during the study period on maps of pediatric EMS-run hot spots, pediatric population density, fire stations, and hospitals. We also performed multivariable logistic regression to determine if distance traveled to the hospital was associated with intubation after controlling for several confounding variables. Results: We identified a total of 7,797 pediatric EMS runs during the study period and 38 endotracheal intubations. In univariate analysis we found that patients who were intubated were similar to those who were not in gender and in whether or not they were transported to a children's hospital. Intubated patients tended to be transported shorter distances and were older than non-intubated patients. Increased distance from the hospital was associated with reduced odds of intubation after controlling for age, sex, scene location, and trauma system entry status in a multivariable logistic regression model.
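
    The final modelling step can be sketched generically: a logistic regression of intubation on transport distance with a few controls. The synthetic data below are built so the distance coefficient is negative, mirroring the reported direction; nothing here is the study's dataset, and the covariates are simplified placeholders.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 2000
        distance_km = rng.gamma(shape=2.0, scale=5.0, size=n)
        age = rng.uniform(0, 18, size=n)
        male = rng.integers(0, 2, size=n)

        # Synthetic outcome constructed so the odds of intubation fall with distance
        lin = -3.0 - 0.05 * distance_km + 0.08 * age + 0.10 * male
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)

        X = sm.add_constant(np.column_stack([distance_km, age, male]))
        fit = sm.Logit(y, X).fit(disp=0)
        print(fit.params)                 # coefficient on distance (x1) should be negative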

  7. Numerical analysis of wet separation of particles by density differences

    Science.gov (United States)

    Markauskas, D.; Kruggel-Emden, H.

    2017-07-01

    Wet particle separation is widely used in mineral processing and plastic recycling to separate mixtures of particulate materials into further usable fractions due to density differences. This work presents efforts aiming to numerically analyze the wet separation of particles with different densities. In the current study the discrete element method (DEM) is used for the solid phase while the smoothed particle hydrodynamics (SPH) is used for modeling of the liquid phase. The two phases are coupled by the use of a volume averaging technique. In the current study, simulations of spherical particle separation were performed. In these simulations, a set of generated particles with two different densities is dropped into a rectangular container filled with liquid. The results of simulations with two different mixtures of particles demonstrated how separation depends on the densities of particles.
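
    The driving physics is worth a one-formula illustration: in the Stokes regime the terminal velocity of a sphere scales with the particle-fluid density difference, so particles denser than the liquid sink while lighter ones rise. The material values below are illustrative, and the formula ignores the hindered-settling and DEM-SPH coupling effects the paper actually simulates.

        g, mu, rho_f = 9.81, 1.0e-3, 1000.0       # gravity, water viscosity (Pa s), density
        d = 200e-6                                 # particle diameter, m

        def terminal_velocity(rho_p):
            # Stokes law: v_t = (rho_p - rho_f) * g * d^2 / (18 * mu)
            return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

        for rho_p in (950.0, 1400.0):              # e.g. two plastics in water
            v = terminal_velocity(rho_p)
            print(f"rho_p = {rho_p}: v_t = {v:+.4f} m/s ({'sinks' if v > 0 else 'floats'})")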

  8. Can Link Analysis Be Applied to Identify Behavioral Patterns in Train Recorder Data?

    Science.gov (United States)

    Strathie, Ailsa; Walker, Guy H

    2016-03-01

    A proof-of-concept analysis was conducted to establish whether link analysis could be applied to data from on-train recorders to detect patterns of behavior that could act as leading indicators of potential safety issues. On-train data recorders capture data about driving behavior on thousands of routine journeys every day and offer a source of untapped data that could be used to offer insights into human behavior. Data from 17 journeys undertaken by six drivers on the same route over a 16-hr period were analyzed using link analysis, and four key metrics were examined: number of links, network density, diameter, and sociometric status. The results established that link analysis can be usefully applied to data captured from on-vehicle recorders. The four metrics revealed key differences in normal driver behavior. These differences have promising construct validity as leading indicators. Link analysis is one method that could be usefully applied to exploit data routinely gathered by on-vehicle data recorders. It facilitates a proactive approach to safety based on leading indicators, offers a clearer understanding of what constitutes normal driving behavior, and identifies trends at the interface of people and systems, which is currently a key area of strategic risk. These research findings have direct applications in the field of transport data monitoring. They offer a means of automatically detecting patterns in driver behavior that could act as leading indicators of problems during operation and that could be used in the proactive monitoring of driver competence, risk management, and even infrastructure design. © 2015, Human Factors and Ergonomics Society.
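
    The four metrics named above are standard graph quantities and can be computed with networkx on an event-transition network. The toy events and edges below are invented, and sociometric status is approximated here as degree normalized by (n-1).

        import networkx as nx

        # Each edge links consecutive driver/control events on one journey (hypothetical)
        G = nx.Graph()
        G.add_edges_from([
            ("power_up", "brake_release"), ("brake_release", "accelerate"),
            ("accelerate", "coast"), ("coast", "brake_apply"),
            ("brake_apply", "power_up"), ("coast", "horn"),
        ])

        print("links:", G.number_of_edges())
        print("density:", nx.density(G))
        print("diameter:", nx.diameter(G))          # graph must be connected
        status = {n: d / (len(G) - 1) for n, d in G.degree()}  # status ~ connectedness
        print("status:", status)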

  9. New trends in applied harmonic analysis sparse representations, compressed sensing, and multifractal analysis

    CERN Document Server

    Cabrelli, Carlos; Jaffard, Stephane; Molter, Ursula

    2016-01-01

    This volume is a selection of written notes corresponding to courses taught at the CIMPA School: "New Trends in Applied Harmonic Analysis: Sparse Representations, Compressed Sensing and Multifractal Analysis". New interactions between harmonic analysis and signal and image processing have seen striking development in the last 10 years, and several technological deadlocks have been solved through the resolution of deep theoretical problems in harmonic analysis. New Trends in Applied Harmonic Analysis focuses on two particularly active areas that are representative of such advances: multifractal analysis, and sparse representation and compressed sensing. The contributions are written by leaders in these areas, and cover both theoretical aspects and applications. This work should prove useful not only to PhD students and postdocs in mathematics and signal and image processing, but also to researchers working in related topics.

  10. Quantitative XRD analysis of {110} twin density in biotic aragonites.

    Science.gov (United States)

    Suzuki, Michio; Kim, Hyejin; Mukai, Hiroki; Nagasawa, Hiromichi; Kogure, Toshihiro

    2012-12-01

    {110} Twin densities in biotic aragonite have been estimated quantitatively from the peak widths of specific reflections in powder X-ray diffraction (XRD) patterns, together with direct confirmation of the twins using transmission electron microscopy (TEM). The influence of the twin density on the peak widths in the XRD pattern was simulated using the DIFFaX program, treating the {110} twin as an interstratification of two types of aragonite unit layers with a mirrored relationship. The simulation suggested that the twin density can be estimated from the difference of the peak widths between the 111 and 021, or between the 221 and 211, reflections. Biotic aragonite in the crossed-lamellar microstructure (three species) and nacreous microstructure (four species) of molluscan shells, fish otoliths (two species), and a coral were investigated. The XRD analyses indicated that aragonite crystals in the crossed-lamellar microstructure of the three species contain a high density of the twins, which is consistent with the TEM examination. On the other hand, aragonite in the nacre of the four species showed almost no difference in the peak widths between the paired reflections, indicating low twin densities. The results for the fish otoliths varied between the species. Such variation of the twin density in biotic aragonites may reflect different schemes of crystal growth in biomineralization. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. THE TURBULENCE SPECTRUM OF MOLECULAR CLOUDS IN THE GALACTIC RING SURVEY: A DENSITY-DEPENDENT PRINCIPAL COMPONENT ANALYSIS CALIBRATION

    International Nuclear Information System (INIS)

    Roman-Duval, Julia; Jackson, James; Federrath, Christoph; Klessen, Ralf S.; Brunt, Christopher; Heyer, Mark

    2011-01-01

    Turbulence plays a major role in the formation and evolution of molecular clouds. Observationally, turbulent velocities are convolved with the density of an observed region. To correct for this convolution, we investigate the relation between the turbulence spectrum of model clouds, and the statistics of their synthetic observations obtained from principal component analysis (PCA). We apply PCA to spectral maps generated from simulated density and velocity fields, obtained from hydrodynamic simulations of supersonic turbulence, and from fractional Brownian motion (fBm) fields with varying velocity, density spectra, and density dispersion. We examine the dependence of the slope of the PCA pseudo-structure function, α_PCA, on intermittency, on the turbulence velocity (β_v) and density (β_n) spectral indexes, and on density dispersion. We find that PCA is insensitive to β_n and to the log-density dispersion σ_s, provided σ_s ≤ 2. For σ_s > 2, α_PCA increases with σ_s due to the intermittent sampling of the velocity field by the density field. The PCA calibration also depends on intermittency. We derive a PCA calibration based on fBm structures with σ_s ≤ 2 and apply it to 367 13CO spectral maps of molecular clouds in the Galactic Ring Survey. The average slope of the PCA structure function, ⟨α_PCA⟩ = 0.62 ± 0.2, is consistent with the hydrodynamic simulations and leads to a turbulence velocity exponent of ⟨β_v⟩ = 2.06 ± 0.6 for a non-intermittent, low density dispersion flow. Accounting for intermittency and density dispersion, the coincidence between the PCA slope of the GRS clouds and the hydrodynamic simulations suggests β_v ≅ 1.9, consistent with both Burgers and compressible intermittent turbulence.

  12. Current status of neutron activation analysis and applied nuclear chemistry

    International Nuclear Information System (INIS)

    Lyon, W.S.

    1990-01-01

    A review of recent scientometric studies of citations and publication data shows the present state of NAA and applied nuclear chemistry as compared to other analytical techniques. (author) 9 refs.; 7 tabs

  13. A performance analysis for MHD power cycles operating at maximum power density

    International Nuclear Information System (INIS)

    Sahin, Bahri; Kodal, Ali; Yavuz, Hasbi

    1996-01-01

    An analysis of the thermal efficiency of a magnetohydrodynamic (MHD) power cycle at maximum power density for a constant-velocity-type MHD generator has been carried out. The irreversibilities at the compressor and the MHD generator are taken into account. The results obtained from the power density analysis were compared with those of the maximum power analysis. It is shown that by using the power density criterion the MHD cycle efficiency can be increased effectively. (author)

  14. Comparative analysis of human and bovine teeth: radiographic density

    Directory of Open Access Journals (Sweden)

    Jefferson Luis Oshiro Tanaka

    2008-12-01

    Full Text Available Since bovine teeth have been used as substitutes for human teeth in in vitro dental studies, the aim of this study was to compare the radiographic density of bovine teeth with that of human teeth to evaluate their usability for radiographic studies. Thirty bovine and twenty human teeth were cut transversally in 1 millimeter-thick slices. The slices were X-rayed using a digital radiographic system and an intraoral X-ray machine at 65 kVp and 7 mA. The exposure time (0.08 s) and the target-sensor distance (40 cm) were standardized for all the radiographs. The radiographic densities of the enamel, coronal dentin and radicular dentin of each slice were obtained separately using the "histogram" tool of Adobe Photoshop 7.0 software. The mean radiographic densities of the enamel, coronal dentin and radicular dentin were calculated by the arithmetic mean of the slices of each tooth. One-way ANOVA demonstrated statistically significant differences between the densities of bovine and human enamel (p < 0.05). Based on the results, the authors concluded that: (a) the radiographic density of bovine enamel is significantly higher than that of human enamel; (b) the radiodensity of bovine coronal dentin is statistically lower than the radiodensity of human coronal dentin; bovine radicular dentin is also less radiodense than human radicular dentin, although this difference was not statistically significant; (c) bovine teeth should be used with care in radiographic in vitro studies.
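
    The statistical comparison reduces to a one-way ANOVA over mean gray values of regions of interest. The sketch below uses synthetic pixel statistics in place of the study's measurements.

        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(4)
        human_enamel = rng.normal(loc=180.0, scale=8.0, size=20)    # mean ROI gray values
        bovine_enamel = rng.normal(loc=195.0, scale=8.0, size=30)   # hypothetical numbers

        F, p = f_oneway(human_enamel, bovine_enamel)
        print(f"F = {F:.2f}, p = {p:.4f}")   # p < 0.05 -> the densities differ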

  15. Thermal Analysis of Low Layer Density Multilayer Insulation Test Results

    Science.gov (United States)

    Johnson, Wesley L.

    2011-01-01

    Investigation of the thermal performance of low layer density multilayer insulation is important for designing long-duration space exploration missions involving the storage of cryogenic propellants. Theoretical calculations show an analytical optimal layer density, as widely reported in the literature. However, the appropriate test data by which to evaluate these calculations have been obtained only recently. As part of a recent research project, NASA procured several multilayer insulation test coupons for calorimeter testing. These coupons were configured to allow the layer density to be varied from 0.5 to 2.6 layer/mm. The coupon testing was completed using the cylindrical Cryostat-100 apparatus by the Cryogenics Test Laboratory at Kennedy Space Center. The results show the properties of the insulation as a function of layer density for multiple points. Overlaying these new results with data from the literature reveals a minimum layer density; however, the value is higher than predicted. Additionally, the data show that the transition region between high vacuum and no vacuum depends on the spacing of the reflective layers. Historically this spacing has not been taken into account, as thermal performance was calculated as a function of pressure and temperature only; the recent testing shows that the performance depends on the Knudsen number, which takes into account pressure, temperature, and layer spacing. These results aid in the understanding of the performance parameters of MLI and help to complete the body of literature on the topic.
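
    The Knudsen-number point can be made quantitative: Kn = lambda/L, with mean free path lambda = k_B*T/(sqrt(2)*pi*d^2*p) and L the layer spacing. The gas properties and conditions below are illustrative values, not the test conditions of the report.

        import math

        k_B = 1.380649e-23            # J/K
        d = 3.7e-10                   # m, effective molecular diameter of N2
        T, p = 200.0, 1e-3            # K, Pa (soft vacuum)
        layer_spacing = 1.0e-3        # m, i.e. 1 layer/mm

        mfp = k_B * T / (math.sqrt(2) * math.pi * d ** 2 * p)
        Kn = mfp / layer_spacing
        print(f"mean free path = {mfp:.3g} m, Kn = {Kn:.3g}")
        # Kn >> 1: free-molecular regime; Kn << 1: continuum gas conduction between layers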

  16. A longitudinal analysis of alcohol outlet density and domestic violence.

    Science.gov (United States)

    Livingston, Michael

    2011-05-01

    A small number of studies have identified a positive relationship between alcohol outlet density and domestic violence. These studies have all been based on cross-sectional data and have been limited to the assessment of ecological correlations between outlet density and domestic violence rates. This study provides the first longitudinal examination of this relationship. Design: cross-sectional time-series using aggregated data from small areas. The relationships between alcohol outlet density and domestic violence were assessed over time using a fixed-effects model. Controls for the spatial autocorrelation of the data were included in the model. The study uses data for 186 postcodes from within the metropolitan area of Melbourne, Australia for the years 1996 to 2005. Alcohol outlet density measures for three different types of outlets (hotel/pub, packaged liquor, on-premise) were derived from liquor licensing records, and domestic violence rates were calculated from police-recorded crime data, based on the victim's postcode. Alcohol outlet density was significantly associated with rates of domestic violence over time. All three licence categories were positively associated with domestic violence rates, with small effects for general (pub) and on-premise licences and a large effect for packaged liquor licences. In Melbourne, the density of liquor licences is positively associated with rates of domestic violence over time. The effects were particularly large for packaged liquor outlets, suggesting a need for licensing policies that pay more attention to off-premise alcohol availability. © 2011 The Authors, Addiction © 2011 Society for the Study of Addiction.
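
    A fixed-effects design of this kind can be sketched with a formula regression containing postcode and year effects. The synthetic panel below is constructed so the packaged-liquor coefficient dominates, echoing the reported pattern; the variable names and data are placeholders, and the spatial-autocorrelation controls of the actual model are omitted.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        rows = []
        for p in range(50):                       # postcodes
            for t in range(1996, 2006):           # years
                pub, pkg, onp = rng.gamma(2.0, 1.0, size=3)
                dv = 0.5 * pub + 2.0 * pkg + 0.4 * onp + 0.1 * p + rng.normal()
                rows.append((p, t, pub, pkg, onp, dv))
        df = pd.DataFrame(rows, columns=["postcode", "year", "pub", "packaged",
                                         "onpremise", "dv_rate"])

        fit = smf.ols("dv_rate ~ pub + packaged + onpremise + C(postcode) + C(year)",
                      data=df).fit()
        print(fit.params[["pub", "packaged", "onpremise"]])   # packaged dominates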

  17. RISK ANALYSIS APPLIED IN OIL EXPLORATION AND PRODUCTION

    African Journals Online (AJOL)

    ES Obe

    Department of Civil Engineering, University of Nigeria, Nsukka, Enugu State, Nigeria. Keywords: risk analysis, oil field, risk management, projects, investment opportunity.

  18. Empirical modeling and data analysis for engineers and applied scientists

    CERN Document Server

    Pardo, Scott A

    2016-01-01

    This textbook teaches advanced undergraduate and first-year graduate students in Engineering and Applied Sciences to gather and analyze empirical observations (data) in order to aid in making design decisions. While science is about discovery, the primary paradigm of engineering and "applied science" is design. Scientists are in the discovery business and want, in general, to understand the natural world rather than to alter it. In contrast, engineers and applied scientists design products, processes, and solutions to problems. That said, statistics, as a discipline, is mostly oriented toward the discovery paradigm. Young engineers come out of their degree programs having taken courses such as "Statistics for Engineers and Scientists" without any clear idea as to how they can use statistical methods to help them design products or processes. Many seem to think that statistics is only useful for demonstrating that a device or process actually does what it was designed to do. Statistics courses emphasize creati...

  19. Applying thematic analysis theory to practice: a researcher's experience.

    Science.gov (United States)

    Tuckett, Anthony G

    2005-01-01

    This article describes an experience of thematic analysis. In order to answer the question 'What does analysis look like in practice?' it describes in brief how the methodology of grounded theory, the epistemology of social constructionism, and the theoretical stance of symbolic interactionism inform analysis. Additionally, analysis is examined by evidencing the systematic processes--here termed organising, coding, writing, theorising, and reading--that led the researcher to develop a final thematic schema.

  20. A density distribution algorithm for bone incorporating local orthotropy, modal analysis and theories of cellular solids.

    Science.gov (United States)

    Impelluso, Thomas J

    2003-06-01

    An algorithm for bone remodeling is presented which allows for both a redistribution of density and a continuous change of principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local effective-strain-based analysis to redistribute density. General redistribution functions are presented. The model utilizes theories of cellular solids to relate density and strength. The code predicts the same general density distributions and local orthotropy as observed in reality.
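
    The redistribution idea can be sketched as a simple stimulus-driven density update with physiological bounds and a cellular-solids stiffness law. The update rule and all constants below are illustrative stand-ins, not the paper's functions.

        import numpy as np

        rho = np.full(10, 0.8)                   # g/cm^3, element densities
        U = np.linspace(0.5, 2.0, 10)            # strain-energy-like stimulus per element
        k, B, dt = 1.0, 0.2, 1.0                 # reference stimulus, rate, time step

        for _ in range(200):                     # iterate the remodeling rule to steady state
            rho += B * (U / rho - k) * dt        # add density where stimulus/density is high
            rho = np.clip(rho, 0.05, 1.8)        # physiological bounds

        E = 3000.0 * rho ** 2                    # cellular solids: stiffness ~ density^2
        print(rho.round(2), E.round(0))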

  1. RELATIONSHIPS BETWEEN ANATOMICAL FEATURES AND INTRA-RING WOOD DENSITY PROFILES IN Gmelina arborea APPLYING X-RAY DENSITOMETRY

    Directory of Open Access Journals (Sweden)

    Mario Tomazelo-Filho

    2007-12-01

    Full Text Available Four annual tree-rings (2 of juvenile wood and 2 of mature wood) were sampled from fast-growth plantations of Gmelina arborea in two climatic conditions (dry and wet tropical) in Costa Rica. Each annual tree-ring was divided into equal parts in a radial direction. For each part, X-ray density as well as vessel percentage, fiber length and width, cell wall thickness and lumen diameter were measured. Wood density and cell-dimension profile patterns were inconsistent between juvenile and mature wood and between climatic conditions. The Pearson correlation matrix showed that intra-ring wood density was positively correlated with cell wall thickness and negatively correlated with vessel percentage, fiber length, lumen diameter and width. The forward stepwise regressions determined that: (i) intra-ring wood density variation could be predicted from 76 to 96% from anatomical variation; (ii) cell wall thickness was the most important anatomical feature producing intra-ring wood density variation; and (iii) vessel percentage, fiber length, lumen diameter and width were the next most statistically significant contributors to intra-ring wood density, although with a low contribution to the determination coefficient of the stepwise regressions.

  2. Measurement uncertainty analysis techniques applied to PV performance measurements

    International Nuclear Information System (INIS)

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates that quality assurance and quality control measures have been accomplished; and defines valid data as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.
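
    The core computation behind such an analysis is the first-order propagation (root-sum-square) law for combined standard uncertainty. The sketch below applies it to a hypothetical PV power measurement P = V*I; the numbers are illustrative.

        import math

        V, u_V = 20.0, 0.05             # volts, standard uncertainty (hypothetical)
        I, u_I = 5.0, 0.02              # amperes, standard uncertainty (hypothetical)

        P = V * I
        # Sensitivity coefficients: dP/dV = I, dP/dI = V
        u_P = math.sqrt((I * u_V) ** 2 + (V * u_I) ** 2)
        print(f"P = {P:.1f} W, U = {2 * u_P:.2f} W (k = 2, ~95% interval)")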

  3. Automated Proposition Density Analysis for Discourse in Aphasia

    Science.gov (United States)

    Fromm, Davida; Greenhouse, Joel; Hou, Kaiyue; Russell, G. Austin; Cai, Xizhen; Forbes, Margaret; Holland, Audrey; MacWhinney, Brian

    2016-01-01

    Purpose: This study evaluates how proposition density can differentiate between persons with aphasia (PWA) and individuals in a control group, as well as among subtypes of aphasia, on the basis of procedural discourse and personal narratives collected from large samples of participants. Method: Participants were 195 PWA and 168 individuals in a…
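
    Automated proposition counting is commonly approximated, in the style of tools like CPIDR, by counting verbs, adjectives, adverbs, prepositions and conjunctions per word. The sketch below is such a heuristic, not the analyzer used in the study; it assumes the NLTK tokenizer and tagger models have been downloaded.

        import nltk  # assumes nltk.download("punkt") and nltk.download("averaged_perceptron_tagger")

        PROPOSITIONAL = {"VB", "VBD", "VBG", "VBN", "VBP", "VBZ",   # verbs
                         "JJ", "JJR", "JJS",                        # adjectives
                         "RB", "RBR", "RBS",                        # adverbs
                         "IN", "CC"}                                # prepositions, conjunctions

        def proposition_density(text):
            tokens = nltk.word_tokenize(text)
            words = [t for t in tokens if t.isalpha()]
            props = sum(tag in PROPOSITIONAL for _, tag in nltk.pos_tag(tokens))
            return props / max(len(words), 1)

        print(proposition_density("The quick brown fox jumped over the lazy dog."))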

  4. Analysis of bone mineral density of human bones for strength ...

    Indian Academy of Sciences (India)

    Bone mineral density (BMD) is a medical term normally referring to the amount of mineral matter per square centimetre of bone. Twenty-five patients (18 female and 7 male, with a mean age of 71.3 years) undergoing both lumbar spine DXA scans and computed tomography imaging were evaluated to determine if HU ...

  5. Low-density lipoprotein analysis in microchip capillary electrophoresis systems

    NARCIS (Netherlands)

    Ceriotti, Laura; Shibata, Takayuki; Folmer, Britta; Weiller, Bruce H.; Roberts, Matthew A.; De Rooij, Nico F.; Verpoorte, Elisabeth

    2002-01-01

    Due to the mounting evidence for altered lipoprotein and cholesterol-lipoprotein content in several disease states, there has been an increasing interest in analytical methods for lipoprotein profiling for diagnosis. The separation of low- and high-density lipoproteins (LDL and HDL, respectively)

  6. Crowd Analysis by Using Optical Flow and Density Based Clustering

    DEFF Research Database (Denmark)

    Santoro, Francesco; Pedro, Sergio; Tan, Zheng-Hua

    2010-01-01

    In this paper, we present a system to detect and track crowds in a video sequence captured by a camera. In a first step, we compute optical flows by means of pyramidal Lucas-Kanade feature tracking. Afterwards, a density based clustering is used to group similar vectors. In the last step...

  7. Energy density functional analysis of shape coexistence in 44S

    International Nuclear Information System (INIS)

    Li, Z. P.; Yao, J. M.; Vretenar, D.; Nikšić, T.; Meng, J.

    2012-01-01

    The structure of low-energy collective states in the neutron-rich nucleus 44S is analyzed using a microscopic collective Hamiltonian model based on energy density functionals (EDFs). The calculated triaxial energy map, low-energy spectrum and corresponding probability distributions indicate a coexistence of prolate and oblate shapes in this nucleus.

  8. Chemical bonding and charge density distribution analysis of ...

    Indian Academy of Sciences (India)

    The lattice and the electron density distributions in the unit cell of the samples were investigated. Structural analysis indicated covalent bonding between the titanium and oxygen ions and a predominant ionic nature between the barium and oxygen ions. Average grain sizes ... dopant concentrations (at <1%) are responsible for the formation of ...

  9. An applied general equilibrium model for Dutch agribusiness policy analysis

    NARCIS (Netherlands)

    Peerlings, J.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular, the effects on inter-industry transactions, factor demand, income, and trade are of interest.

  10. A density model based on the Modified Quasichemical Model and applied to the (NaCl + KCl + ZnCl2) liquid

    International Nuclear Information System (INIS)

    Ouzilleau, Philippe; Robelin, Christian; Chartrand, Patrice

    2012-01-01

    Highlights: ► A model for the density of multicomponent inorganic liquids. ► The density model is based on the Modified Quasichemical Model. ► Application to the (NaCl + KCl + ZnCl2) ternary liquid. ► A Kohler-Toop-like asymmetric interpolation method was used. Abstract: A theoretical model for the density of multicomponent inorganic liquids based on the Modified Quasichemical Model has been presented previously. By introducing into the Gibbs free energy of the liquid phase temperature-dependent molar volume expressions for the pure components and pressure-dependent excess parameters for the binary (and sometimes higher-order) interactions, it is possible to reproduce, and eventually predict, the molar volume and the density of the multicomponent liquid phase using standard interpolation methods. In the present article, this density model is applied to the (NaCl + KCl + ZnCl2) ternary liquid and a Kohler-Toop-like asymmetric interpolation method is used. All available density data for the (NaCl + KCl + ZnCl2) liquid were collected and critically evaluated, and optimized pressure-dependent model parameters have been found. This new volumetric model can be used with Gibbs free energy minimization software to calculate the molar volume and the density of (NaCl + KCl + ZnCl2) ternary melts.
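
    The structure of such a volumetric model can be sketched in a few lines: temperature-dependent pure-salt molar volumes, an excess term for binary interactions, and density as mass over volume. All coefficients below are invented placeholders, not the optimized parameters of the paper, and the full Kohler-Toop interpolation is reduced here to a single binary term.

        def molar_volume(x, T):
            # x = (x_NaCl, x_KCl, x_ZnCl2); V_i(T) = a_i + b_i*T in cm^3/mol (hypothetical)
            V = (27.0 + 0.008 * T, 34.0 + 0.010 * T, 47.0 + 0.009 * T)
            V_ideal = sum(xi * Vi for xi, Vi in zip(x, V))
            V_excess = -1.5 * x[0] * x[2]        # one NaCl-ZnCl2 interaction term, illustrative
            return V_ideal + V_excess

        M = (58.44, 74.55, 136.30)               # g/mol for NaCl, KCl, ZnCl2
        x, T = (0.3, 0.3, 0.4), 700.0 + 273.15   # composition, temperature in K
        rho = sum(xi * Mi for xi, Mi in zip(x, M)) / molar_volume(x, T)
        print(f"{rho:.3f} g/cm^3")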

  11. How Has Applied Behavior Analysis and Behavior Therapy Changed?: An Historical Analysis of Journals

    Science.gov (United States)

    O'Donohue, William; Fryling, Mitch

    2007-01-01

    Applied behavior analysis and behavior therapy are now nearly a half century old. It is interesting to ask if and how these disciplines have changed over time, particularly regarding some of their key internal controversies (e.g., role of cognitions). We examined the first five years and the 2000-2004 five year period of the "Journal of Applied…

  12. Measurement uncertainty analysis techniques applied to PV performance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates that quality assurance and quality control measures have been accomplished; and defines valid data as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.

  14. Analysis of the Nonlinear Density Wave Two-Phase Instability in a Steam Generator of 600MWe Liquid Metal Reactor

    International Nuclear Information System (INIS)

    Choi, Seok Ki; Kim, Seong O

    2011-01-01

    A 600 MWe demonstration reactor being developed at KAERI employs a once-through helically coiled steam generator. The helically coiled steam generator is compact and efficient for heat transfer; however, it may suffer from two-phase instability. It is well known that the density wave instability is the main source of instability among the various types of instabilities in a helically coiled S/G in an LMR. In the present study, a simple method for the analysis of the density wave two-phase instability in a liquid metal reactor S/G is proposed, and the method is applied to the analysis of the density wave instability in the S/G of a 600 MWe liquid metal reactor.

  15. Characterizing Bonding Patterns in Diradicals and Triradicals by Density-Based Wave Function Analysis: A Uniform Approach.

    Science.gov (United States)

    Orms, Natalie; Rehn, Dirk R; Dreuw, Andreas; Krylov, Anna I

    2018-02-13

    Density-based wave function analysis enables unambiguous comparisons of the electronic structure computed by different methods and removes the ambiguity of orbital choices. We use this tool to investigate the performance of different spin-flip methods for several prototypical diradicals and triradicals. In contrast to previous calibration studies that focused on energy gaps between high- and low-spin states, we focus on the properties of the underlying wave functions, such as the number of effectively unpaired electrons. Comparison of different density functional and wave function theory results provides insight into the performance of the different methods when applied to strongly correlated systems such as polyradicals. We show that canonical molecular orbitals for species like large copper-containing diradicals fail to correctly represent the underlying electronic structure due to highly non-Koopmans character, while density-based analysis of the same wave function delivers a clear picture of the bonding pattern.
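
    One such density-based diagnostic, the number of effectively unpaired electrons, follows directly from the natural occupation numbers of the one-particle density matrix. The sketch below evaluates the Takatsuka-Fueno-Yamaguchi and Head-Gordon indices for an illustrative diradical-like occupation pattern (the occupations are invented).

        import numpy as np

        n = np.array([2.0, 2.0, 1.2, 0.8, 0.0])         # natural occupation numbers

        n_u_tfy = np.sum(n * (2 - n))                   # Takatsuka-Fueno-Yamaguchi index
        n_u_hg = np.sum(np.minimum(n, 2 - n))           # Head-Gordon index
        print(n_u_tfy, n_u_hg)                          # ~1.92 and ~1.6 unpaired electrons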

  16. Heisenberg principle applied to the analysis of speckle interferometry fringes

    Science.gov (United States)

    Sciammarella, C. A.; Sciammarella, F. M.

    2003-11-01

    Optical techniques that are used to measure displacements utilize a carrier. When a load is applied the displacement field modulates the carrier. The accuracy of the information that can be recovered from the modulated carrier is limited by a number of factors. In this paper, these factors are analyzed and conclusions concerning the limitations in information recovery are illustrated with examples taken from experimental data.

  17. Concluding Essay: On Applied Linguistics and Discourse Analysis.

    Science.gov (United States)

    Kaplan, Robert B.

    1990-01-01

    Discusses trends and problems in regarding discourse analysis as a viable paradigm that can govern research, focusing on such issues as the wide diversity and variety of research that can be considered discourse analysis, the predominant focus on English language, research approaches, and undefined variables affecting research outcomes. (seven…

  18. Time-dependent reduced density matrix functional theory applied to laser-driven, correlated two-electron dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Brics, Martins; Kapoor, Varun; Bauer, Dieter [Institut fuer Physik, Universitaet Rostock, 18051 Rostock (Germany)

    2013-07-01

    Time-dependent density functional theory (TDDFT) with known and practicable exchange-correlation potentials does not capture highly correlated electron dynamics such as single-photon double ionization, autoionization, or nonsequential ionization. Time-dependent reduced density matrix functional theory (TDRDMFT) may remedy these problems. The key ingredients in TDRDMFT are the natural orbitals (NOs), i.e., the eigenfunctions of the one-body reduced density matrix (1-RDM), and the occupation numbers (OCs), i.e., the respective eigenvalues. The two-body reduced density matrix (2-RDM) is then expanded in NOs, and equations of motion for the NOs can be derived. If the expansion coefficients of the 2-RDM were known exactly, the problem at hand would be solved. In practice, approximations have to be made. We study the prospects of TDRDMFT following a top-down approach. We solve the exact two-electron time-dependent Schroedinger equation for a model Helium atom in intense laser fields in order to study highly correlated phenomena such as the population of autoionizing states or single-photon double ionization. From the exact wave function we calculate the exact NOs, OCs, the exact expansion coefficients of the 2-RDM, and the exact potentials in the equations of motion. In that way we can identify how many NOs and which level of approximations are necessary to capture such phenomena.

  19. Long-range analysis of density fitting in extended systems

    Science.gov (United States)

    Varga, Štefan

    The density fitting scheme is analyzed for the Coulomb problem in extended systems from the point of view of correct long-range behavior. We show that for the correct cancellation of divergent long-range Coulomb terms it is crucial for the density fitting scheme to reproduce the overlap matrix exactly. It is demonstrated that, of all possible fitting metric choices, the Coulomb metric is the only one which inherently preserves the overlap matrix for infinite systems with translational periodicity. Moreover, we show that with a small additional effort any non-Coulomb metric fit can be made overlap-preserving as well. The problem is analyzed for both ordinary and Poisson basis set choices.
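
    As a hedged sketch of the scheme under discussion, density fitting expands (pair) densities in an auxiliary basis \chi_P,

      \tilde\rho(\mathbf{r}) = \sum_P c_P\, \chi_P(\mathbf{r}), \qquad (f \mid g) = \iint \frac{f(\mathbf{r})\, g(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d\mathbf{r}\, d\mathbf{r}' ,

    with the coefficients c_P determined by minimizing the fitting error in a chosen metric; the Coulomb metric minimizes (\rho - \tilde\rho \mid \rho - \tilde\rho). The overlap-preservation condition highlighted above requires the fitted pair densities to satisfy \int \tilde\rho_{\mu\nu}(\mathbf{r})\, d\mathbf{r} = S_{\mu\nu}, a constraint a non-Coulomb fit can be made to obey.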

  20. Limit cycle analysis of nuclear coupled density wave oscillations

    International Nuclear Information System (INIS)

    Ward, M.E.

    1985-01-01

    An investigation of limit cycle behavior for the nuclear-coupled density wave oscillation (NCDWO) in a boiling water reactor (BWR) was performed. A simplified nonlinear model of BWR core behavior was developed using a two-region flow channel representation, coupled with a form of the point-kinetics equation. This model has been used to investigate the behavior of large-amplitude NCDWOs through conventional time-integration solutions and through application of a direct relaxation-oscillation limit cycle solution in phase space. The numerical solutions demonstrate the potential for severe global power and flow oscillations in a BWR core at off-normal conditions, such as might occur during Anticipated Transients without Scram. Because of the many simplifying assumptions used, it is felt that the results should not be interpreted as an absolute prediction of core behavior, but as an indication of the potential for large oscillations and a demonstration of the corresponding limit cycle mechanisms. The oscillations in channel density drive the core power variations, and are reinforced by heat flux variations due to the changing fuel temperature. A global temperature increase occurs as energy is accumulated in the fuel, and limits the magnitude of the oscillations because, as the average channel density decreases, the amplitude and duration of positive void reactivity at a given oscillation amplitude is lessened.
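
    The "conventional time-integration" route mentioned above can be illustrated with a deliberately minimal sketch: point kinetics with one delayed-neutron group coupled to a first-order void-reactivity lag. All parameter values and the feedback form are invented placeholders, far simpler than the paper's two-region channel model.

      import numpy as np
      from scipy.integrate import solve_ivp

      LAMBDA = 4e-5    # prompt neutron generation time [s] (assumed)
      BETA = 0.0065    # delayed neutron fraction (assumed)
      LAM = 0.08       # precursor decay constant [1/s] (assumed)
      TAU = 1.5        # void response time constant [s] (assumed)
      K_FB = -0.002    # void reactivity feedback gain (assumed, negative)

      def rhs(t, y):
          p, c, v = y                      # power, precursor conc., void deviation
          rho = K_FB * v                   # void reactivity feedback
          dp = (rho - BETA) / LAMBDA * p + LAM * c
          dc = BETA / LAMBDA * p - LAM * c
          dv = (p - 1.0 - v) / TAU         # void lags the power deviation
          return [dp, dc, dv]

      y0 = [1.05, BETA / (LAMBDA * LAM), 0.0]   # start slightly above equilibrium
      sol = solve_ivp(rhs, (0.0, 200.0), y0, method="LSODA")
      print("final relative power:", sol.y[0, -1])

    Whether such a system settles, oscillates, or locks into a limit cycle depends on the gain and lag, which is precisely the behavior the direct phase-space limit cycle solution above characterizes without long transient integrations.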

  1. Differences observed in the surface morphology and microstructure of Ni-Fe-Cu ternary thin films electrochemically deposited at low and high applied current densities

    International Nuclear Information System (INIS)

    Sarac, U; Kaya, M; Baykul, M C

    2016-01-01

    In this research, nanocrystalline Ni-Fe-Cu ternary thin films were produced by the electrochemical deposition technique at low and high applied current densities onto Indium Tin Oxide (ITO) coated conducting glass substrates. Changes in the surface morphology and microstructural properties of the films were investigated. Energy dispersive X-ray spectroscopy (EDX) measurements showed that the Ni-Fe-Cu ternary thin films exhibit anomalous codeposition behaviour during the electrochemical deposition process. The X-ray diffraction (XRD) analyses revealed two segregated phases, Cu-rich and Ni-rich, within the films. The crystallographic structure of the films was face-centered cubic (FCC). It was also observed that the film deposited at high applied current density has lower lattice micro-strain and a higher texture degree. Scanning electron microscopy (SEM) studies revealed that the films have rounded particles on the base part and cauliflower-like structures on the upper part. The film electrodeposited at high current density had considerably smaller rounded particles and cauliflower-like structures. The atomic force microscopy (AFM) analyses showed that the film deposited at high current density has smaller particle size and lower surface roughness than the film grown at low current density. (paper)

  2. Applying causal mediation analysis to personality disorder research.

    Science.gov (United States)

    Walters, Glenn D

    2018-01-01

    This article is designed to address fundamental issues in the application of causal mediation analysis to research on personality disorders. Causal mediation analysis is used to identify mechanisms of effect by testing variables as putative links between the independent and dependent variables. As such, it would appear to have relevance to personality disorder research. It is argued that proper implementation of causal mediation analysis requires that investigators take several factors into account. These factors are discussed under 5 headings: variable selection, model specification, significance evaluation, effect size estimation, and sensitivity testing. First, care must be taken when selecting the independent, dependent, mediator, and control variables for a mediation analysis. Some variables make better mediators than others and all variables should be based on reasonably reliable indicators. Second, the mediation model needs to be properly specified. This requires that the data for the analysis be prospectively or historically ordered and possess proper causal direction. Third, it is imperative that the significance of the identified pathways be established, preferably with a nonparametric bootstrap resampling approach. Fourth, effect size estimates should be computed or competing pathways compared. Finally, investigators employing the mediation method are advised to perform a sensitivity analysis. Additional topics covered in this article include parallel and serial multiple mediation designs, moderation, and the relationship between mediation and moderation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
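
    A minimal sketch of the bootstrap step recommended above, assuming simulated data and simple OLS path estimates (all variable names and effect sizes are hypothetical):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 300
      x = rng.normal(size=n)                      # independent variable
      m = 0.5 * x + rng.normal(size=n)            # mediator
      y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # dependent variable

      def indirect(x, m, y):
          X1 = np.column_stack([np.ones_like(x), x])
          a = np.linalg.lstsq(X1, m, rcond=None)[0][1]   # path a: x -> m
          X2 = np.column_stack([np.ones_like(x), x, m])
          b = np.linalg.lstsq(X2, y, rcond=None)[0][2]   # path b: m -> y given x
          return a * b

      boot = []
      for _ in range(2000):
          idx = rng.integers(0, n, n)             # resample cases with replacement
          boot.append(indirect(x[idx], m[idx], y[idx]))
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"indirect = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

    The indirect (mediated) effect a*b is deemed significant when the percentile interval excludes zero, which is the nonparametric criterion the article advocates over normal-theory tests.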

  3. Signed directed social network analysis applied to group conflict

    DEFF Research Database (Denmark)

    Zheng, Quan; Skillicorn, David; Walther, Olivier

    2015-01-01

    Real-world social networks contain relationships of multiple different types, but this richness is often ignored in graph-theoretic modelling. We show how two recently developed spectral embedding techniques, for directed graphs (relationships are asymmetric) and for signed graphs (relationships are both positive and negative), can be combined. This combination is particularly appropriate for intelligence, terrorism, and law enforcement applications. We illustrate by applying the novel embedding technique to datasets describing conflict in North-West Africa, and show how unusual interactions can…
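
    A hedged sketch of the signed half of the approach: a standard signed-Laplacian spectral embedding on a toy symmetric network (the paper's method additionally handles edge direction, which this sketch does not attempt):

      import numpy as np

      # +1 = cooperative tie, -1 = conflictual tie (toy data)
      A = np.array([[ 0.,  1., -1.],
                    [ 1.,  0., -1.],
                    [-1., -1.,  0.]])
      Dbar = np.diag(np.abs(A).sum(axis=1))   # degrees from absolute weights
      L = Dbar - A                            # signed Laplacian (pos. semidefinite)
      w, V = np.linalg.eigh(L)
      embedding = V[:, :2]                    # low-order eigenvectors as coordinates
      print(embedding)

    Nodes joined by positive ties land close together in the embedding while negative ties push nodes apart, which is what makes the spectral picture useful for spotting unusual alliances.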

  4. Applied risk analysis to the future Brazilian electricity generation matrix

    Energy Technology Data Exchange (ETDEWEB)

    Maues, Jair; Fernandez, Eloi; Correa, Antonio

    2010-09-15

    This study compares energy conversion systems for the generation of electrical power, with an emphasis on the Brazilian energy matrix. The financial model applied in this comparison is based on Portfolio Theory, developed by Harry Markowitz. The risk-return ratio related to the electrical generation mix predicted in the National Energy Plan - 2030, published in 2006 by the Brazilian Energy Research Office, is evaluated. The increase of non-traditional renewable energy in this expected electrical generating mix, specifically residues of sugar cane plantations and wind energy, reduces not only the risk but also the average cost of the kilowatt-hour generated.
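
    A hedged numerical sketch of the Markowitz machinery referred to above, with invented cost and risk numbers that are not taken from the study:

      import numpy as np

      techs = ["hydro", "gas", "biomass", "wind"]
      cost = np.array([0.040, 0.070, 0.065, 0.080])   # $/kWh (assumed)
      cov = np.diag([0.004, 0.012, 0.009, 0.010])     # cost covariance (assumed)
      w = np.array([0.70, 0.15, 0.10, 0.05])          # shares of the mix

      mix_cost = w @ cost                   # expected generation cost of the mix
      mix_risk = np.sqrt(w @ cov @ w)       # portfolio (cost) standard deviation
      print(f"mix cost = {mix_cost:.4f} $/kWh, risk = {mix_risk:.4f}")

    Scanning candidate weight vectors and keeping those that are not dominated traces the efficient frontier against which a planned generation mix can be judged.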

  5. Applied network security monitoring collection, detection, and analysis

    CERN Document Server

    Sanders, Chris

    2013-01-01

    Applied Network Security Monitoring is the essential guide to becoming an NSM analyst from the ground up. This book takes a fundamental approach to NSM, complete with dozens of real-world examples that teach you the key concepts of NSM. Network security monitoring is based on the principle that prevention eventually fails. In the current threat landscape, no matter how much you try, motivated attackers will eventually find their way into your network. At that point, it is your ability to detect and respond to that intrusion that can be the difference between a small incident and a major disaster.

  6. Thermoeconomic analysis applied to an alternative wastewater treatment

    Energy Technology Data Exchange (ETDEWEB)

    Lamas, Wendell de Queiroz [University of Taubate, Post-graduate Programme in Mechanical Engineering, Department of Mechanical Engineering, Sp (Brazil); Sao Paulo State University, Faculty of Engineering, Campus of Guaratingueta, Postgraduate Programme in Mechanical Engineering, Sp (Brazil); Silveira, Jose Luz; Mattos dos Reis, Luiz Octavio [Sao Paulo State University, Faculty of Engineering, Campus of Guaratingueta, Postgraduate Programme in Mechanical Engineering, Sp (Brazil); Oscare Giacaglia, Giorgio Eugenio [University of Taubate, Post-graduate Programme in Mechanical Engineering, Department of Mechanical Engineering, Sp (Brazil)

    2010-10-15

    This work develops a methodology for the determination of costs associated with products generated in a small wastewater treatment station. The methodology begins with the identification of plant units, relating their fluid and thermodynamic features to each point indicated in the process diagram. Next, a functional diagram and a formulation are developed on an exergetic basis, describing all equations for these points, which are the constraints for optimisation and are used to determine the costs associated with products generated in a Small Wastewater Treatment Station - SWTS. The methodology is applied to a hypothetical system based on SWTS plants and presents consistent results when compared to values based on previous experiments and evaluations. (author)
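
    Functional exergetic formulations of this kind typically solve cost-balance equations of the generic form (quoted here as a hedged illustration of the usual convention, not the paper's exact equations)

      \sum_{\mathrm{in}} c_i\, \dot{Ex}_i \;+\; \dot{Z} \;=\; \sum_{\mathrm{out}} c_j\, \dot{Ex}_j ,

    where c denotes the cost per unit of exergy of each stream, \dot{Ex} the exergy flow rate at the corresponding point of the process diagram, and \dot{Z} the capital and operating cost rate of the unit; one such balance per plant unit yields the costs of the generated products.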

  7. Applying real options analysis to assess cleaner energy development strategies

    International Nuclear Information System (INIS)

    Cheng, Ching-Tsung; Lo, Shang-Lien; Lin, Tyrone T.

    2011-01-01

    The energy industry, which accounts for the largest portion of CO2 emissions, is facing the issue of compliance with the national clean energy policy. The methodology for evaluating the energy mix policy is crucial because of the lead time embedded in power generation facility investments and the uncertainty of future electricity demand. In this paper, a modified binomial model based on sequential compound options, which may account for the lead time and uncertainty as a whole, is established, and a numerical example on evaluating the optional strategies and the strategic value of the cleaner energy policy is also presented. It is found that the optimal decision at some nodes in the binomial tree is path dependent, which is different from the standard sequential compound option model with lead time or time lag concept. The proposed modified binomial sequential compound real options model can be generalized and extensively applied to solve general decision problems that deal with the long lead time of many government policies as well as capital-intensive investments. - Highlights: → Introducing a flexible strategic management approach for government policy making. → Developing a modified binomial real options model based on sequential compound options. → Proposing an innovative model for managing the long term policy with lead time. → Applying to evaluate the options of various scenarios of cleaner energy strategies.
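
    As a much-simplified sketch of the lattice machinery such models build on (a plain binomial valuation with backward induction, not the paper's modified sequential compound model; all numbers are assumptions):

      import numpy as np

      S0, K = 100.0, 95.0        # project value and investment cost (assumed)
      r, sigma, T, n = 0.05, 0.3, 3.0, 36
      dt = T / n
      u = np.exp(sigma * np.sqrt(dt))
      d = 1.0 / u
      p = (np.exp(r * dt) - d) / (u - d)   # risk-neutral up probability

      j = np.arange(n + 1)                 # number of down-moves at maturity
      values = np.maximum(S0 * u ** (n - j) * d ** j - K, 0.0)
      for _ in range(n):                   # backward induction to the root
          values = np.exp(-r * dt) * (p * values[:-1] + (1 - p) * values[1:])
      print("option value:", values[0])

    The path dependence reported above arises when exercise decisions taken during the lead time change which later nodes remain reachable, something this plain lattice deliberately does not capture.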

  8. Power density investigation on the press-pack IGBT 3L-HB-VSCs applied to large wind turbine

    DEFF Research Database (Denmark)

    Senturk, Osman Selcuk; Munk-Nielsen, Stig; Teodorescu, Remus

    2011-01-01

    With three different DC-side and AC-side connections, the three-level H-bridge voltage source converters (3L-HB-VSCs) are alternatives to 3L neutral-point-clamped VSCs (3L-NPC-VSCs) for interfacing large wind turbines with electricity grids. In order to assess their feasibility for large wind turbines, they should be investigated in terms of power density, which is one of the most important design criteria for wind turbine converters due to turbine nacelle space limitation. In this study, by means of converter electro-thermal models based on the converter characteristics, the power capabilities, DC capacitor sizes, and converter cabinet volumes of the three 3L-HB-VSCs utilizing press-pack IGBTs are investigated in order to quantify and compare the power densities of the 3L-HB-VSCs employed as grid-side converters. Also, the suitable transformer types for the 3L-HB-VSCs are determined.

  9. Linear and nonlinear analysis of density wave instability phenomena

    International Nuclear Information System (INIS)

    Ambrosini, Walter

    1999-01-01

    In this paper the mechanism of density-wave oscillations in a boiling channel with uniform and constant heat flux is analysed by linear and nonlinear analytical tools. A model developed on the basis of a semi-implicit numerical discretization of governing partial differential equations is used to provide information on the transient distribution of relevant variables along the channel during instabilities. Furthermore, a lumped parameter model and a distributed parameter model developed in previous activities are also adopted for independent confirmation of the observed trends. The obtained results are finally put in relation with the picture of the phenomenon proposed in classical descriptions. (author)

  10. Concept of spatial channel theory applied to reactor shielding analysis

    International Nuclear Information System (INIS)

    Williams, M.L.; Engle, W.W. Jr.

    1977-01-01

    The concept of channel theory is used to locate spatial regions that are important in contributing to a shielding response. The method is analogous to the channel-theory method developed for ascertaining important energy channels in cross-section analysis. The mathematical basis for the theory is shown to be the generalized reciprocity relation, and sample problems are given to exhibit and verify properties predicted by the mathematical equations. A practical example is cited from the shielding analysis of the Fast Flux Test Facility performed at Oak Ridge National Laboratory, in which a perspective plot of channel-theory results was found useful in locating streaming paths around the reactor cavity shield.

  11. A unit density method of grain analysis used to identify GABAergic neurons for electron microscopic autoradiographs

    International Nuclear Information System (INIS)

    Burry, R.W.

    1982-01-01

    The distribution of electron microscopic autoradiographic grains over neurons in cerebellar cultures incubated with [3H]gamma-aminobutyric acid ([3H]GABA) was examined. With the unit density method of grain analysis, the number of grains over each structure was tested against the total grain density for the entire section. If an individual structure has a grain density higher than the expected grain density, it is considered one of the group of heavily labeled structures. The expected grain density for each structure is calculated based on the area for that structure, the total grain density and the Poisson distribution. A different expected grain density can be calculated for any P value required. The method provides an adequate population of structures for morphological analysis but excludes weakly labeled structures and thus may underestimate the number of labeled structures. The unit density method of grain analysis showed, as expected, a group of cell bodies and synapses that was labeled heavily. Cultures incubated with other [3H]amino acids did not have any heavily labeled synaptic elements. In addition, serial section analysis of sections showed that synapses heavily labeled with [3H]GABA are seen in adjacent sections. The advantage of the unit density method of grain analysis is that it can be used to separate two groups of metabolically different neurons even when no morphological differences are present. (Auth.)
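
    A hedged sketch of the unit density test described above, with invented counts (scipy's Poisson survival function gives the tail probability directly):

      from scipy.stats import poisson

      total_grains, total_area = 1200, 40000.0   # whole section (grains, um^2), assumed
      density = total_grains / total_area        # expected grains per unit area

      area, observed = 85.0, 9                   # one cell body profile (assumed)
      expected = density * area
      p_value = poisson.sf(observed - 1, expected)   # P(X >= observed)
      print(f"expected {expected:.2f}, observed {observed}, p = {p_value:.4f}")

    A structure whose tail probability falls below the chosen P value joins the group of heavily labeled structures; everything else is excluded, which is why the method can undercount weakly labeled structures.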

  12. Computational modeling applied to stress gradient analysis for metallic alloys

    International Nuclear Information System (INIS)

    Iglesias, Susana M.; Assis, Joaquim T. de; Monine, Vladimir I.

    2009-01-01

    Nowadays composite materials, including materials reinforced by particles, are at the center of researchers' attention. There are problems with stress measurements in these materials, connected with the superficial stress gradient caused by the difference between the stress state of particles on the surface and in the matrix of the composite material. Computer simulation of the diffraction profile formed by the superficial layers of a material makes it possible to simulate the diffraction experiment and to resolve the problem of stress measurements when the stress state is characterized by a strong gradient. The aim of this paper is the application of a computer simulation technique, initially developed for homogeneous materials, to diffraction line simulation of composite materials and alloys. Specifically, we applied this technique to silumin fabricated by powder metallurgy. (author)
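
    For orientation, diffraction-based stress analysis of the kind being simulated commonly rests on the classical sin^2\psi relation (quoted here as a standard illustration, not necessarily the authors' exact formulation):

      \varepsilon_{\phi\psi} \;=\; \frac{d_{\phi\psi} - d_0}{d_0}
      \;=\; \frac{1+\nu}{E}\,\sigma_\phi \sin^2\psi \;-\; \frac{\nu}{E}\,\bigl(\sigma_{11} + \sigma_{22}\bigr),

    so lattice spacings d_{\phi\psi} measured at several tilt angles \psi yield the stress \sigma_\phi. A strong near-surface stress gradient distorts the measured diffraction profile, which is precisely the difficulty the simulated profiles are meant to resolve.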

  13. Performance analysis of numeric solutions applied to biokinetics of radionuclides

    International Nuclear Information System (INIS)

    Mingatos, Danielle dos Santos; Bevilacqua, Joyce da Silva

    2013-01-01

    Biokinetics models for radionuclides applied to dosimetry problems are constantly reviewed by the ICRP. The radionuclide trajectory can be represented by compartmental models, assuming constant transfer rates between compartments. A better understanding of physiological and biochemical phenomena improves the comprehension of radionuclide behavior in the human body and, in general, more complex compartmental models are proposed, increasing the difficulty of obtaining the analytical solution for the system of first-order differential equations. Even with constant transfer rates, numerical solutions must be carefully implemented because of the almost singular character of the coefficient matrix. In this work we compare numerical methods with different strategies for the ICRP-78 models for Thorium-228 and Uranium-234. The impact of uncertainty in the parameters of the equations is also estimated for local and global truncation errors. (author)
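
    A hedged sketch of the numerical task: a linear compartmental system dx/dt = Ax solved with a stiff integrator and cross-checked against the matrix exponential (the transfer rates below are invented, not ICRP values):

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.linalg import expm

      A = np.array([[-1.2,   0.00,  0.0],
                    [ 0.7,  -0.05,  0.0],
                    [ 0.5,   0.05, -1e-4]])   # per-day transfer rates (assumed)
      x0 = np.array([1.0, 0.0, 0.0])          # unit intake into compartment 1

      t_end = 100.0
      sol = solve_ivp(lambda t, x: A @ x, (0.0, t_end), x0, method="BDF", rtol=1e-8)
      print("integrator:", sol.y[:, -1])
      print("expm      :", expm(A * t_end) @ x0)   # closed form for constant rates

    The wide spread of rate constants is what makes the coefficient matrix nearly singular in practice, so implicit (stiff) schemes are the natural candidates being compared.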

  14. Condition Monitoring of a Process Filter Applying Wireless Vibration Analysis

    Directory of Open Access Journals (Sweden)

    Pekka KOSKELA

    2011-05-01

    Full Text Available This paper presents a novel wireless vibration-based method for monitoring the degree of feed filter clogging. In the process industry, these filters are applied to prevent impurities from entering the process. During operation, the filters gradually become clogged, decreasing the feed flow and, in the worst case, preventing it. The cleaning of the filter should therefore be carried out predictively in order to avoid equipment damage and unnecessary process downtime. The degree of clogging is estimated by first calculating time-domain indices from low-frequency accelerometer samples and then taking the median of the processed values. Nine different statistical quantities are compared based on estimation accuracy and on criteria for operating in resource-constrained environments, with particular focus on energy efficiency. The initial results show that the method is able to detect the degree of clogging, and the approach may be applicable to filter clogging monitoring.
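
    A hedged sketch of the index computation (the nine statistical quantities compared in the paper are not listed here; RMS and crest factor are common choices used purely for illustration):

      import numpy as np

      def frame_indices(frame):
          rms = np.sqrt(np.mean(frame ** 2))     # time-domain energy index
          crest = np.max(np.abs(frame)) / rms    # crest factor
          return rms, crest

      rng = np.random.default_rng(1)
      frames = [rng.normal(scale=0.1, size=256) for _ in range(9)]   # toy samples
      rms_values = [frame_indices(f)[0] for f in frames]
      print("median RMS:", np.median(rms_values))   # robust summary, as above

    Taking the median over frames keeps the estimate cheap and robust to outliers, which matters in the resource-constrained wireless setting described.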

  15. POWER SPECTRUM DENSITY (PSD) ANALYSIS OF AUTOMOTIVE PEDAL-PAD

    Directory of Open Access Journals (Sweden)

    AHMED RITHAUDDEEN YUSOFF

    2016-04-01

    Full Text Available Vibration at the pedal-pad may contribute to discomfort of the foot plantar fascia during driving. This is due to transmission of vibration through the mount, chassis and pedal to the foot plantar fascia. This experimental study was conducted to estimate the peak value of the power spectral density of the vertical vibration input at the foot. The power spectral density value is calculated over the frequency range between 13 Hz and 18 Hz. The experiment was conducted with 12 subjects testing three sizes of pedal-pads: small, medium and large. The result shows that the peak value occurs at a resonance frequency of 15 Hz. The PSD values at that resonance frequency are 0.251 (m/s²)²/Hz for the small pedal-pad, 0.387 (m/s²)²/Hz for the medium pedal-pad and 0.483 (m/s²)²/Hz for the large pedal-pad. The results indicate that during driving, foot vibration when interacting with the large pedal-pad contributed a higher stimulus compared with the small and medium pedal-pads. The pedal-pad size plays an important role in pedal element design in terms of vibration transfer from pedal-pads to the feet, particularly to provide comfort to the driver while driving.
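
    A hedged sketch of the PSD peak extraction described above, with a synthetic signal standing in for the measured pedal-pad acceleration:

      import numpy as np
      from scipy.signal import welch

      fs = 200.0                              # sampling rate [Hz] (assumed)
      t = np.arange(0.0, 30.0, 1.0 / fs)
      rng = np.random.default_rng(2)
      x = 0.7 * np.sin(2 * np.pi * 15.0 * t) + 0.1 * rng.normal(size=t.size)

      f, pxx = welch(x, fs=fs, nperseg=1024)  # PSD estimate in (m/s²)²/Hz
      band = (f >= 13.0) & (f <= 18.0)        # band analysed in the study
      f_peak = f[band][np.argmax(pxx[band])]
      print(f"peak at {f_peak:.1f} Hz, PSD = {pxx[band].max():.3f} (m/s²)²/Hz")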

  16. Sensitivity Analysis Applied in Design of Low Energy Office Building

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik

    2008-01-01

    satisfies the design requirements and objectives. In the design of sustainable buildings it is beneficial to identify the most important design parameters in order to develop more efficiently alternative design solutions or reach optimized design solutions. A sensitivity analysis makes it possible...

  17. Applying Skinner's Analysis of Verbal Behavior to Persons with Dementia

    Science.gov (United States)

    Dixon, Mark; Baker, Jonathan C.; Sadowski, Katherine Ann

    2011-01-01

    Skinner's 1957 analysis of verbal behavior has demonstrated a fair amount of utility to teach language to children with autism and various other disorders. However, the learning of language can be forgotten, as is the case for many elderly people suffering from dementia or other degenerative diseases. It appears possible that Skinner's operants may…

  18. Applying an Activity System to Online Collaborative Group Work Analysis

    Science.gov (United States)

    Choi, Hyungshin; Kang, Myunghee

    2010-01-01

    This study determines whether an activity system provides a systematic framework to analyse collaborative group work. Using an activity system as a unit of analysis, the research examined learner behaviours, conflicting factors and facilitating factors while students engaged in collaborative work via asynchronous computer-mediated communication.…

  19. Visual Analytics Applied to Image Analysis: From Segmentation to Classification

    NARCIS (Netherlands)

    Rauber, Paulo

    2017-01-01

    Image analysis is the field of study concerned with extracting information from images. This field is immensely important for commercial and scientific applications, from identifying people in photographs to recognizing diseases in medical images. The goal behind the work presented in this thesis is

  20. Applying AI tools to operational space environmental analysis

    Science.gov (United States)

    Krajnak, Mike; Jesse, Lisa; Mucks, John

    1995-01-01

    The U.S. Air Force and National Oceanic and Atmospheric Administration (NOAA) space environmental operations centers are facing increasingly complex challenges meeting the needs of their growing user community. These centers provide current space environmental information and short term forecasts of geomagnetic activity. Recent advances in modeling and data access have provided sophisticated tools for making accurate and timely forecasts, but have introduced new problems associated with handling and analyzing large quantities of complex data. AI (Artificial Intelligence) techniques have been considered as potential solutions to some of these problems. Fielding AI systems has proven more difficult than expected, in part because of operational constraints. Using systems which have been demonstrated successfully in the operational environment will provide a basis for a useful data fusion and analysis capability. Our approach uses a general purpose AI system already in operational use within the military intelligence community, called the Temporal Analysis System (TAS). TAS is an operational suite of tools supporting data processing, data visualization, historical analysis, situation assessment and predictive analysis. TAS includes expert system tools to analyze incoming events for indications of particular situations and predicts future activity. The expert system operates on a knowledge base of temporal patterns encoded using a knowledge representation called Temporal Transition Models (TTM's) and an event database maintained by the other TAS tools. The system also includes a robust knowledge acquisition and maintenance tool for creating TTM's using a graphical specification language. The ability to manipulate TTM's in a graphical format gives non-computer specialists an intuitive way of accessing and editing the knowledge base. To support space environmental analyses, we used TAS's ability to define domain specific event analysis abstractions. The prototype system defines

  1. CARVEDILOL POPULATION PHARMACOKINETIC ANALYSIS - APPLIED VALIDATION PROCEDURE

    Directory of Open Access Journals (Sweden)

    Aleksandra Catić-Đorđević

    2013-09-01

    Full Text Available Carvedilol is a nonselective beta blocker/alpha-1 blocker used for the treatment of essential hypertension, chronic stable angina, unstable angina and ischemic left ventricular dysfunction. The aim of this study was to describe carvedilol population pharmacokinetic (PK) analysis as well as the validation of the analytical procedure, which is an important step in this approach. In contemporary clinical practice, population PK analysis is often more important than the standard PK approach in setting a mathematical model that describes the PK parameters. It also includes variables of particular importance for the drug's pharmacokinetics, such as sex, body mass, dosage, pharmaceutical form, pathophysiological state, associated disease, or the presence of a specific polymorphism in the isoenzyme important for biotransformation of the drug. One of the most frequently used approaches in population PK analysis is Nonlinear Mixed Effects Modeling - NONMEM. The analytical methods used in the data collection period are of great importance for the implementation of a population PK analysis of carvedilol in order to obtain reliable data that can be useful in clinical practice. High performance liquid chromatography (HPLC) analysis of carvedilol is used to confirm the identity of the drug, to provide quantitative results, and to monitor the efficacy of the therapy. Analytical procedures used in other studies could not be fully implemented in our research, as it was necessary to perform certain modifications and validation of the method with the aim of using the obtained results for the purpose of a population pharmacokinetic analysis. The validation process is the logical terminal phase of analytical procedure development that ensures the applicability of the procedure itself. The goal of validation is to ensure consistency of the method and accuracy of results, and to confirm the selection of the analytical method for a given sample.

  2. Low-density lipoprotein apheresis: an evidence-based analysis.

    Science.gov (United States)

    2007-01-01

    To assess the effectiveness and safety of low-density lipoprotein (LDL) apheresis performed with the heparin-induced extracorporeal LDL precipitation (HELP) system for the treatment of patients with refractory homozygous (HMZ) and heterozygous (HTZ) familial hypercholesterolemia (FH). BACKGROUND ON FAMILIAL HYPERCHOLESTEROLEMIA: Familial hypercholesterolemia is a genetic autosomal dominant disorder that is caused by several mutations in the LDL-receptor gene. The reduced number or absence of functional LDL receptors results in impaired hepatic clearance of circulating low-density lipoprotein cholesterol (LDL-C) particles, which results in extremely high levels of LDL-C in the bloodstream. Familial hypercholesterolemia is characterized by excess LDL-C deposits in tendons and arterial walls, early onset of atherosclerotic disease, and premature cardiac death. Familial hypercholesterolemia occurs in both HTZ and HMZ forms. Heterozygous FH is one of the most common monogenic metabolic disorders in the general population, occurring in approximately 1 in 500 individuals. Nevertheless, HTZ FH is largely undiagnosed and an accurate diagnosis occurs in only about 15% of affected patients in Canada. Thus, it is estimated that there are approximately 3,800 diagnosed and 21,680 undiagnosed cases of HTZ FH in Ontario. In HTZ FH patients, half of the LDL receptors do not work properly or are absent, resulting in plasma LDL-C levels 2- to 3-fold higher than normal (range 7-15 mmol/L or 300-500 mg/dL). Most HTZ FH patients are not diagnosed until middle age, when either they or one of their siblings present with symptomatic coronary artery disease (CAD). Without lipid-lowering treatment, 50% of males die before the age of 50 and 25% of females die before the age of 60, from myocardial infarction or sudden death. In contrast to the HTZ form, HMZ FH is rare (occurring in 1 case per million persons) and more severe, with a 6- to 8-fold elevation in plasma LDL-C levels (range 15-25 mmol/L).

  3. Using genomic DNA-based probe-selection to improve the sensitivity of high-density oligonucleotide arrays when applied to heterologous species

    Directory of Open Access Journals (Sweden)

    Townsend Henrik J

    2005-11-01

    Full Text Available High-density oligonucleotide (oligo) arrays are a powerful tool for transcript profiling. Arrays based on GeneChip® technology are amongst the most widely used, although GeneChip® arrays are currently available for only a small number of plant and animal species. Thus, we have developed a method to improve the sensitivity of high-density oligonucleotide arrays when applied to heterologous species and tested the method by analysing the transcriptome of Brassica oleracea L., a species for which no GeneChip® array is available, using a GeneChip® array designed for Arabidopsis thaliana (L.) Heynh. Genomic DNA from B. oleracea was labelled and hybridised to the ATH1-121501 GeneChip® array. Arabidopsis thaliana probe-pairs that hybridised to the B. oleracea genomic DNA on the basis of the perfect-match (PM) probe signal were then selected for subsequent B. oleracea transcriptome analysis using a .cel file parser script to generate probe mask files. The transcriptional response of B. oleracea to a mineral nutrient (phosphorus; P) stress was quantified using probe mask files generated for a wide range of gDNA hybridisation intensity thresholds. An example probe mask file generated with a gDNA hybridisation intensity threshold of 400 removed > 68 % of the available PM probes from the analysis but retained > 96 % of available A. thaliana probe-sets. Ninety-nine of these genes were then identified as significantly regulated under P stress in B. oleracea, including the homologues of P stress responsive genes in A. thaliana. Increasing the gDNA hybridisation intensity thresholds up to 500 for probe-selection increased the sensitivity of the GeneChip® array to detect regulation of gene expression in B. oleracea under P stress by up to 13-fold. Our open-source software to create probe mask files is freely available at http://affymetrix.arabidopsis.info/xspecies/ and may be used to facilitate transcriptomic analyses of a wide range of plant and animal species.
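
    A hedged sketch of the probe-selection logic (toy data structures; the authors' tool parses Affymetrix .cel files directly and emits probe mask files):

      threshold = 400.0   # gDNA hybridisation intensity cut-off, as in the example
      # probe-set -> list of (probe_id, gDNA perfect-match intensity); toy values
      probe_sets = {
          "At1g01010": [("p1", 812.0), ("p2", 95.0), ("p3", 501.0)],
          "At1g01020": [("p1", 120.0), ("p2", 230.0)],
      }
      mask = {ps: [pid for pid, inten in probes if inten > threshold]
              for ps, probes in probe_sets.items()}
      retained = {ps: pids for ps, pids in mask.items() if pids}
      print(retained)   # {'At1g01010': ['p1', 'p3']}

    Raising the threshold removes more probes but, as reported above, can increase sensitivity, because only probes whose target sequence is conserved in the heterologous genome contribute to the expression summary.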

  4. The industrial computerized tomography applied to the rock analysis

    International Nuclear Information System (INIS)

    Tetzner, Guaraciaba de Campos

    2008-01-01

    This work is a study of the possibilities of technical applications of Computerized Tomography (CT) using a device developed in the Radiation Technology Center (CTR), Institute for Energy and Nuclear Research (IPEN-CNEN/SP). The equipment consists of a gamma radiation source (60Co), a scintillation detector of sodium iodide doped with thallium (NaI(Tl)), a mechanical system to move the object (rotation and translation) and a computer system. This operating system has been designed and developed by the CTR-IPEN-CNEN/SP team using national resources and technology. The first validation test of the equipment was carried out using a cylindrical sample of polypropylene (phantom) with two cylindrical cavities (holes) of 5 x 25 cm (diameter and length). In these tests, the holes were filled with materials of different density (air, oil and metal), whose attenuation coefficients are well known. The goal of this first test was to assess the response quality of the equipment. The present report compares the CTR-IPEN-CNEN/SP computerized tomography equipment, which uses a gamma radiation source (60Co), with equipment of the Department of Geosciences at the University of Texas (CTUT), which uses an X-ray source (450 kV and 3.2 mA). As a result, the images obtained and the comprehensive study of the usefulness of the equipment developed here strengthened the proposition that the development of industrial computerized tomography is an important step toward consolidating the national technology. (author)

  5. The Private Lives of Minerals: Social Network Analysis Applied to Mineralogy and Petrology

    Science.gov (United States)

    Hazen, R. M.; Morrison, S. M.; Fox, P. A.; Golden, J. J.; Downs, R. T.; Eleish, A.; Prabhu, A.; Li, C.; Liu, C.

    2016-12-01

    Comprehensive databases of mineral species (rruff.info/ima) and their geographic localities and co-existing mineral assemblages (mindat.org) reveal patterns of mineral association and distribution that mimic social networks, as commonly applied to such varied topics as social media interactions, the spread of disease, terrorism networks, and research collaborations. Applying social network analysis (SNA) to common assemblages of rock-forming igneous and regional metamorphic mineral species, we find patterns of cohesion, segregation, density, and cliques that are similar to those of human social networks. These patterns highlight classic trends in lithologic evolution and are illustrated with sociograms, in which mineral species are the "nodes" and co-existing species form "links." Filters based on chemistry, age, structural group, and other parameters highlight visually both familiar and new aspects of mineralogy and petrology. We quantify sociograms with SNA metrics, including connectivity (based on the frequency of co-occurrence of mineral pairs), homophily (the extent to which co-existing mineral species share compositional and other characteristics), network closure (based on the degree of network interconnectivity), and segmentation (as revealed by isolated "cliques" of mineral species). Exploitation of large and growing mineral data resources with SNA offers promising avenues for discovering previously hidden trends in mineral diversity-distribution systematics, as well as providing new pedagogical approaches to teaching mineralogy and petrology.
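
    A hedged sketch of how such a mineral network can be assembled and measured (the assemblage list is a toy stand-in for the mindat.org-derived data):

      import networkx as nx
      from itertools import combinations

      assemblages = [
          ["quartz", "albite", "muscovite"],
          ["quartz", "albite", "biotite"],
          ["olivine", "augite", "plagioclase"],
      ]
      G = nx.Graph()
      for assemblage in assemblages:
          for a, b in combinations(assemblage, 2):   # link co-existing species
              w = G.get_edge_data(a, b, {"weight": 0})["weight"]
              G.add_edge(a, b, weight=w + 1)         # weight = co-occurrence count

      print("network density:", nx.density(G))
      print("cliques:", list(nx.find_cliques(G)))

    The SNA metrics quoted above (connectivity, homophily, closure, segmentation) are computed from exactly this kind of weighted co-occurrence graph, with edge weights approximating how often two species are found together.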

  6. Thermodynamic analysis applied to a food-processing plant

    Energy Technology Data Exchange (ETDEWEB)

    Ho, J C; Chandratilleke, T T

    1987-01-01

    Two production lines of a multi-product, food-processing plant are selected for energy auditing and analysis. Thermodynamic analysis showed that the first-law and second-law efficiencies are 81.5% and 26.1% for the instant-noodles line and 23.6% and 7.9% for the malt-beverage line. These efficiency values are dictated primarily by the major energy-consuming sub-processes of each production line. Improvements in both first-law and second-law efficiencies are possible for the plants if the use of steam for heating is replaced by gaseous or liquid fuels, the steam ejectors for creating vacuum are replaced by a mechanical pump, and the cooler surroundings are employed to assist in the cooling process.
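
    For reference, the two efficiencies quoted above follow the standard definitions (a hedged illustration of the usual conventions, not necessarily the authors' exact bookkeeping):

      \eta_I = \frac{\dot{E}_{\mathrm{useful}}}{\dot{E}_{\mathrm{in}}}, \qquad
      \eta_{II} = \frac{\dot{Ex}_{\mathrm{out}}}{\dot{Ex}_{\mathrm{in}}}, \qquad
      ex = (h - h_0) - T_0\,(s - s_0),

    where ex is the specific flow exergy evaluated against the dead state (T_0, p_0). A large gap between first-law and second-law values, as reported here, is typical when high-quality steam is used for low-temperature heating duties.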

  7. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring

    2012-06-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  8. Environmental analysis applied to schools. Methodologies for data acquisition

    International Nuclear Information System (INIS)

    Andriola, L.; Ceccacci, R.

    2001-01-01

    The environmental analysis is the basis of environmental management for organizations and is considered the first step in EMAS. It allows organizations to identify and deal with relevant issues and to gain a clear knowledge of their environmental performance. Schools can be included among such organizations. Nevertheless, the complexity of environmental issues and applicable regulations makes it very difficult for a school that wants to implement an environmental management system (EMAS, ISO 14001, etc.) to face this first step. An instrument has therefore been defined that is simple yet complete and consistent with the reference standards, allowing schools to choose their own process for elaborating the initial environmental review. This instrument consists essentially of cards that, when completed, facilitate the drafting of the environmental analysis report.

  9. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D D; Bailey, G; Martin, J; Garton, D; Noorman, H; Stelcer, E; Johnson, P [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1994-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 {mu}m particle diameter cut off and runs for 24 hours every Sunday and Wednesday using one Gillman 25mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  11. Pair distribution function analysis applied to decahedral gold nanoparticles

    International Nuclear Information System (INIS)

    Nakotte, H; Silkwood, C; Kiefer, B; Karpov, D; Fohtung, E; Page, K; Wang, H-W; Olds, D; Manna, S; Fullerton, E E

    2017-01-01

    The five-fold symmetry of face-centered cubic (fcc) derived nanoparticles is inconsistent with the translational symmetry of a Bravais lattice and generally explained by multiple twinning of a tetrahedral subunit about a (joint) symmetry axis, with or without structural modification to the fcc motif. Unlike in bulk materials, five-fold twinning in cubic nanoparticles is common and strongly affects their structural, chemical, and electronic properties. To test and verify theoretical approaches, it is therefore pertinent that the local structural features of such materials can be fully characterized. The small size of nanoparticles severely limits the application of traditional analysis techniques, such as Bragg diffraction. A complete description of the atomic arrangement in nanoparticles therefore requires a departure from the concept of translational symmetry, and prevents fully evaluating all the structural features experimentally. We describe how recent advances in instrumentation, together with the increasing power of computing, are shaping the development of alternative analysis methods of scattering data for nanostructures. We present the application of Debye scattering and pair distribution function (PDF) analysis towards modeling of the total scattering data for the example of decahedral gold nanoparticles. PDF measurements provide a statistical description of the pair correlations of atoms within a material, allowing one to evaluate the probability of finding two atoms within a given distance. We explored the sensitivity of existing synchrotron x-ray PDF instruments for distinguishing four different simple models for our gold nanoparticles: a multiply twinned fcc decahedron with either a single gap or multiple distributed gaps, a relaxed body-centered orthorhombic (bco) decahedron, and a hybrid decahedron. The data simulations of the models were then compared with experimental data from synchrotron x-ray total scattering. We present our experimentally
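
    A hedged sketch of the Debye scattering equation mentioned above, which needs only the interatomic distances, no lattice periodicity, and therefore suits aperiodic nanoparticles (unit scattering factors are assumed to keep the example short):

      import numpy as np

      def debye_intensity(positions, q):
          """I(q) = sum_ij sin(q r_ij)/(q r_ij), with f_i = 1 for all atoms."""
          diffs = positions[:, None, :] - positions[None, :, :]
          r = np.linalg.norm(diffs, axis=-1)
          out = np.empty_like(q)
          for k, qk in enumerate(q):
              terms = np.where(r > 0, np.sinc(qk * r / np.pi), 1.0)  # i = j term -> 1
              out[k] = terms.sum()
          return out

      pos = np.array([[0, 0, 0], [2.88, 0, 0], [0, 2.88, 0], [0, 0, 2.88]], float)
      q = np.linspace(0.5, 10.0, 200)    # scattering vector magnitude, 1/angstrom
      print(debye_intensity(pos, q)[:5])

    The measured PDF is essentially a Fourier transform of such total-scattering intensities, so the four candidate decahedron models can be scored by comparing their computed patterns against the synchrotron data.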

  12. The colour analysis method applied to homogeneous rocks

    Directory of Open Access Journals (Sweden)

    Halász Amadé

    2015-12-01

    Full Text Available Computer-aided colour analysis can facilitate cyclostratigraphic studies. Here we report on a case study involving the development of a digital colour analysis method for examination of the Boda Claystone Formation which is the most suitable in Hungary for the disposal of high-level radioactive waste. Rock type colours are reddish brown or brownish red, or any shade between brown and red. The method presented here could be used to differentiate similar colours and to identify gradual transitions between these; the latter are of great importance in a cyclostratigraphic analysis of the succession. Geophysical well-logging has demonstrated the existence of characteristic cyclic units, as detected by colour and natural gamma. Based on our research, colour, natural gamma and lithology correlate well. For core Ib-4, these features reveal the presence of orderly cycles with thicknesses of roughly 0.64 to 13 metres. Once the core has been scanned, this is a time- and cost-effective method.
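
    A hedged sketch of the row-wise colour profiling such a method rests on, with a synthetic array standing in for a scanned core photograph (in practice the scan would be loaded from an image file):

      import numpy as np

      rng = np.random.default_rng(3)
      img = rng.uniform(80, 200, size=(500, 300, 3))  # height x width x RGB, toy scan
      profile = img.mean(axis=1)                      # one mean RGB triple per row
      redness = profile[:, 0] / profile.sum(axis=1)   # relative red intensity
      print(redness[:5])                              # down-core colour signal

    Plotting such a colour signal against depth alongside the natural gamma log is what exposes the metre-scale cyclic units reported above.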

  13. Energy density functionals from the strong-coupling limit applied to the anions of the He isoelectronic series

    International Nuclear Information System (INIS)

    Mirtschink, André; Gori-Giorgi, Paola; Umrigar, C. J.; Morgan, John D.

    2014-01-01

    Anions and radicals are important for many applications including environmental chemistry, semiconductors, and charge transfer, but are poorly described by the available approximate energy density functionals. Here we test an approximate exchange-correlation functional based on the exact strong-coupling limit of the Hohenberg-Kohn functional on the prototypical case of the He isoelectronic series with varying nuclear charge Z. The functional is found to capture in general the physics of loosely bound anions, although with a tendency to strongly overbind that can be proven mathematically. We also include corrections based on the uniform electron gas, which improve the results.

  14. Density of mixed alkali borate glasses: A structural analysis

    International Nuclear Information System (INIS)

    Doweidar, H.; El-Damrawi, G.M.; Moustafa, Y.M.; Ramadan, R.M.

    2005-01-01

    Density of mixed alkali borate glasses has been correlated with the glass structure. It is assumed that in such glasses each alkali oxide associates with a proportional quantity of B2O3. The number of BO3 and BO4 units related to each type of alkali oxide depends on the total concentration of alkali oxide. It is concluded that in mixed alkali borate glasses the volumes of the structural units related to an alkali ion are the same as in the corresponding binary alkali borate glass. This reveals that each type of alkali oxide forms its own borate matrix and behaves as if unaffected by the presence of the other alkali oxide. Similar conclusions are valid for borate glasses with three types of alkali oxide.
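
    In additive-volume terms, the assumption described above can be written (as a hedged illustration of the usual convention)

      V_m = \sum_i N_i\, V_i, \qquad \rho = \frac{M}{V_m},

    where N_i is the number of structural units of type i (BO3, BO4, and their associated alkali ions) per mole of glass, V_i the unit volume carried over from the corresponding binary alkali borate glass, and M the molar mass; agreement between such computed densities and experiment is what supports the picture of independent borate matrices.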

  15. Intuitive Density Functional Theory-Based Energy Decomposition Analysis for Protein-Ligand Interactions.

    Science.gov (United States)

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2017-04-11

    First-principles quantum mechanical calculations with methods such as density functional theory (DFT) allow the accurate calculation of interaction energies between molecules. These interaction energies can be dissected into chemically relevant components such as electrostatics, polarization, and charge transfer using energy decomposition analysis (EDA) approaches. Typically EDA has been used to study interactions between small molecules; however, it has great potential to be applied to large biomolecular assemblies such as protein-protein and protein-ligand interactions. We present an application of EDA calculations to the study of ligands that bind to the thrombin protein, using the ONETEP program for linear-scaling DFT calculations. Our approach goes beyond simply providing the components of the interaction energy; we are also able to provide visual representations of the changes in density that happen as a result of polarization and charge transfer, thus pinpointing the functional groups between the ligand and protein that participate in each kind of interaction. We also demonstrate with this approach that we can focus on studying parts (fragments) of ligands. The method is relatively insensitive to the protocol that is used to prepare the structures, and the results obtained are therefore robust. This is an application to a real protein drug target of a whole new capability where accurate DFT calculations can produce both energetic and visual descriptors of interactions. These descriptors can be used to provide insights for tailoring interactions, as needed for example in drug design.
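
    Schematically, EDA partitions of this kind decompose the interaction energy as

      \Delta E_{\mathrm{int}} = \Delta E_{\mathrm{frz}} + \Delta E_{\mathrm{pol}} + \Delta E_{\mathrm{CT}},

    a hedged generic form: a frozen term (electrostatics plus exchange-repulsion of unrelaxed fragment densities), a polarization term (relaxation of each fragment's density in the field of the other), and a charge-transfer term (density flow between fragments); the exact terms differ between EDA schemes. The density-difference plots described above visualize the last two contributions directly on the protein-ligand complex.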

  16. Bacterial meningitis: a density-equalizing mapping analysis of the global research architecture.

    Science.gov (United States)

    Pleger, Niklas; Kloft, Beatrix; Quarcoo, David; Zitnik, Simona; Mache, Stefanie; Klingelhoefer, Doris; Groneberg, David A

    2014-09-30

    Bacterial meningitis is caused by a variety of pathogens and represents an important public health threat all over the world. Despite the necessity to develop customized public health-related research projects, no thorough study of global meningitis research exists so far. Therefore, the aim of this study was a combined density-equalizing and scientometric analysis. To evaluate the scientific output, bibliometric methods, density-equalizing algorithms and large-scale data analysis of the Web of Science were applied to the period between 1900 and 2007. In total, 7998 publications on bacterial meningitis were found. With 2698, most publications were written by U.S. authors, followed by the UK (912), Germany (749) and France (620). This dominance is also evident in the international cooperation. The citation analyses reveal that the nation with the highest average citation rate (citations per publication) was Norway (26.36), followed by Finland (24.16) and the U.S. (24.06). This study illustrates the architecture of global research on bacterial meningitis and points to the need for customized research programs with a focus on local public health issues in countries with a low development index but high incidences, to target this global public health problem.

  17. Applying reliability centered maintenance analysis principles to inservice testing

    International Nuclear Information System (INIS)

    Flude, J.W.

    1994-01-01

    Federal regulations require nuclear power plants to use inservice test (IST) programs to ensure the operability of safety-related equipment. IST programs are based on American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code requirements. Many of these plants also use Reliability Centered Maintenance (RCM) to optimize system maintenance. ASME Code requirements are hard to change. The process for requesting authority to use an alternate strategy is long and expensive. The difficulties of obtaining this authority make the use of RCM methods on safety-related systems not cost-effective. An ASME research task force on Risk-Based Inservice Testing is investigating changing the Code. The change will allow plants to apply RCM methods to the problem of maintenance strategy selection for safety-related systems. The research task force is working closely with the Codes and Standards sections to develop a process related to the RCM process. Some day plants will be able to use this process to develop more efficient and safer maintenance strategies.

  18. A Multifactorial Analysis of Reconstruction Methods Applied After Total Gastrectomy

    Directory of Open Access Journals (Sweden)

    Oktay Büyükaşık

    2010-12-01

    Full Text Available Aim: The aim of this study was to evaluate the reconstruction methods applied after total gastrectomy in terms of postoperative symptomatology and nutrition. Methods: This retrospective study was conducted on 31 patients who underwent total gastrectomy due to gastric cancer in the 2nd Clinic of General Surgery, SSK Ankara Training Hospital. Six different reconstruction methods were used and analyzed in terms of age, sex and postoperative complications. One biopsy specimen from the esophagus and two from the jejunum were taken through upper gastrointestinal endoscopy from all cases, and late-period morphological and microbiological changes were examined. Postoperative weight change, dumping symptoms, reflux esophagitis, solid/liquid dysphagia, early satiety, postprandial pain, diarrhea and anorexia were assessed. Results: Of the 31 patients, 18 were males and 13 females; the youngest was 33 years old and the oldest 69 years old. Reconstruction without pouch was performed in 22 cases and with pouch in 9 cases. Early satiety, postprandial pain, dumping symptoms, diarrhea and anemia were found most commonly in cases with reconstruction without pouch. The rate of bacterial colonization of the jejunal mucosa was identical in both groups. Reflux esophagitis was most commonly seen with omega esophagojejunostomy (EJ), and least with Roux-en-Y, Tooley and Tanner 19 EJ. Conclusion: Reconstruction with pouch performed after total gastrectomy is still a preferable method. (The Medical Bulletin of Haseki 2010; 48:126-31)

  19. Structural analysis of fuel rod applied to pressurized water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Faria, Danilo P.; Pinheiro, Andre Ricardo M.; Lotto, André A., E-mail: danilo.pinheiro@marinha.mil.br [Centro Tecnológico da Marinha em São Paulo (CTMSP), São Paulo, SP (Brazil)

    2017-07-01

    The design of fuel assemblies applied to Pressurized Water Reactors (PWR) has several requirements and acceptance criteria that must be met for licensing. In the case of PWR fuel rods, an important mechanical structural requirement is to maintain radial stability when submitted to the coolant external pressure. In the framework of the Accident Tolerant Fuel (ATF) program, new materials have been studied to replace zirconium-based alloys as cladding, including iron-based alloys. In this sense, efforts have been made to evaluate the behavior of these materials under PWR conditions. The present work aims to evaluate the cold collapse pressure of a stainless steel thin-walled tube, similar to that used as fuel rod cladding material, by comparing numerical data and experimental results. As a result of the simulations, it was observed that the collapse pressure has an intermediate value between those found from regulatory requirements and from analytical calculations. The experiment was carried out for the validation of the computational model using empty, thin-walled tube test specimens. The test specimens were sealed at both ends by welding. They were subjected to a high-pressure device until the collapse of the tubes. Preliminary results obtained from experiments with the empty test specimens indicate that the computational model can be validated for stainless steel cladding, considering the difference between the collapse pressure indicated in the regulatory document and the actual limit pressure concerning radial instability of tubes with the studied characteristics. (author)
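
    For orientation, the analytical estimate such comparisons commonly use for a long, thin-walled tube under external pressure is the classical elastic collapse formula (a standard textbook expression, quoted here as a hedged reference point rather than the licensing criterion itself):

      p_{\mathrm{cr}} = \frac{2E}{1-\nu^2}\left(\frac{t}{D}\right)^{3},

    with E the elastic modulus, \nu Poisson's ratio, t the wall thickness, and D the mean diameter. A measured collapse pressure falling between such an estimate and the regulatory limit is consistent with the intermediate value reported above.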

  20. Framework for applying probabilistic safety analysis in nuclear regulation

    International Nuclear Information System (INIS)

    Dimitrijevic, V.B.

    1997-01-01

    The traditional regulatory framework has served well to assure the protection of public health and safety. It has been recognized, however, that in a few circumstances this deterministic framework has led to extensive expenditure on matters that have little to do with the safe and reliable operation of the plant. Development of plant-specific PSA has offered a new and powerful analytical tool in the evaluation of the safety of the plant. Using PSA insights as an aid to decision making in the regulatory process is now known as 'risk-based' or 'risk-informed' regulation. Numerous activities in the U.S. nuclear industry are focusing on applying this new approach to modify regulatory requirements. In addition, other approaches to regulation are in the developmental phase and are being evaluated. One is based on performance monitoring and results and is known as performance-based regulation. The other, called the blended approach, combines traditional deterministic principles with PSA insights and performance results. (author)

  1. The video densitometric analysis of the radiographic density and contrast

    International Nuclear Information System (INIS)

    Yoo, Young Sun; Lee, Sang Rae

    1992-01-01

    In general, the patient's absorbed dose and the readability of radiograms are affected by the exposure time and kVp, which are related to radiographic density and contrast. The investigator carried out studies to determine the levels of exposure time and kVp adequate to obtain better readability of radiograms. In these studies, radiograms of a dried human mandible were compared with each other by video densitometry over various combinations of exposure time (5, 6, 8, 12, 15, 19, 24, 30, 38, 48 and 60) and kVp (60, 65, 70, 80 and 90). The obtained results were as follows: 1. As exposure time and kVp increased, the radiographic density of the radiograms increased. 2. Subject contrast increased where the aluminum step wedge was thin and decreased in the reverse condition. For the thin aluminum step wedge, subject contrast was higher at lower kilovoltage than at higher kilovoltage. 3. Non-subject contrast increased at lower kilovoltage with longer exposure time, and at higher kilovoltage with shorter exposure time. 4. At short exposure times, better readability of each reading item was obtained as kilovoltage increased, but in the opposite condition increasing exposure time worsened the readability of radiograms. Since the X-ray machines in current dental clinics are fixed in the range of 60-70 kVp and 10 mA, good radiograms can be obtained by varying the exposure time. According to the conclusions of these studies, however, better radiograms can be obtained by using filtered high kVp, whereby the absorbed dose to the patient and the exposure time can be reduced.

  2. A study of influence of material properties on magnetic flux density induced in magneto rheological damper through finite element analysis

    Directory of Open Access Journals (Sweden)

    Gurubasavaraju T. M.

    2018-01-01

    Full Text Available Magnetorheological fluids are smart materials which respond to an external stimulus by changing their rheological properties. The damper performance (damping force) depends on the magnetic flux density induced at the annular gap. The magnetic flux density developed at the fluid flow gap of an MR damper by the externally applied current also depends on the material properties of the damper components (such as the piston head, outer cylinder and piston rod). The present paper discusses the influence of the materials selected for the MR damper components on the induced magnetic field, using magnetostatic analysis. Different materials, such as magnetic and low carbon steels, are considered for the piston head, and the magnetic flux density induced at the fluid flow gap (filled with MR fluid) is computed for different DC currents applied to the electromagnetic coil. The computed magnetic flux is then used to calculate the damping force analytically for each case. Low carbon steel has a higher magnetic permeability, so maximum magnetic flux can pass through the piston head, which leads to a higher magnetic flux density induced at the annular gap. The analysis results show that the magnetic steel and low carbon steel piston heads provided the maximum magnetic flux density, and consequently the higher damping force is observed for the same cases.
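
    Where the abstract converts a computed flux density into a damping force analytically, a common route in the MR damper literature is the quasi-static Bingham parallel-plate model. The sketch below is a hedged illustration of that idea only: the tau_y(B) fit and every dimension are invented placeholders, not values from the paper.

        import numpy as np

        def yield_stress(B, b=40e3, n=1.0):
            """Assumed monotonic fit tau_y(B) [Pa]; real MR fluids use vendor curves."""
            return b * B**n

        def damper_force(v, B, eta=0.1, L=0.02, g=1e-3, w=0.05, Ap=1e-3, c=2.5):
            """Bingham approximation: viscous term plus field-dependent yield term."""
            F_viscous = 12.0 * eta * L * Ap**2 * v / (w * g**3)
            F_yield = c * yield_stress(B) * L * Ap / g * np.sign(v)
            return F_viscous + F_yield

        print(f"F ~ {damper_force(v=0.1, B=0.4):.0f} N")  # piston speed 0.1 m/s, B = 0.4 T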

  3. A photoemission moments model using density functional and transfer matrix methods applied to coating layers on surfaces: Theory

    Science.gov (United States)

    Jensen, Kevin L.; Finkenstadt, Daniel; Shabaev, Andrew; Lambrakos, Samuel G.; Moody, Nathan A.; Petillo, John J.; Yamaguchi, Hisato; Liu, Fangze

    2018-01-01

    Recent experimental measurements of a bulk material covered with a small number of graphene layers reported by Yamaguchi et al. [NPJ 2D Mater. Appl. 1, 12 (2017)] (on bialkali) and Liu et al. [Appl. Phys. Lett. 110, 041607 (2017)] (on copper) and the needs of emission models in beam optics codes have led to substantial changes in a Moments model of photoemission. The changes account for (i) a barrier profile and density of states factor based on density functional theory (DFT) evaluations, (ii) a Drude-Lorentz model of the optical constants and laser penetration depth, and (iii) a transmission probability evaluated by an Airy Transfer Matrix Approach. Importantly, the DFT results lead to a surface barrier profile of a shape similar to both resonant barriers and reflectionless wells: the associated quantum mechanical transmission probabilities are shown to be comparable to those recently required to enable the Moments (and Three Step) model to match experimental data, but for reasons very different from the conventional assumption that a barrier is responsible. The substantial modifications of the Moments model components, motivated by computational materials methods, are developed. The results prepare the Moments model for use in treating heterostructures and discrete energy level systems (e.g., quantum dots) proposed for decoupling the opposing performance metrics that undermine advanced light sources like the x-ray Free Electron Laser. The consequences of the modified components on quantum yield, emittance, and emission models needed by beam optics codes are discussed.

  4. Statistical analysis applied to safety culture self-assessment

    International Nuclear Information System (INIS)

    Macedo Soares, P.P.

    2002-01-01

    Interviews and opinion surveys are instruments used to assess the safety culture in an organization as part of the Safety Culture Enhancement Programme. Specific statistical tools are used to analyse the survey results. This paper presents an example of an opinion survey with the corresponding application of the statistical analysis and the conclusions obtained. Survey validation, frequency statistics, the Kolmogorov-Smirnov non-parametric test, Student's t-test and ANOVA means-comparison tests, and the LSD post-hoc multiple comparison test are discussed. (author)
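
    For readers who want to reproduce this kind of survey analysis, the named tests map directly onto standard SciPy calls. The sketch below runs them on made-up Likert-scale scores for three hypothetical departments; an LSD post-hoc step would follow a significant ANOVA as uncorrected pairwise t-tests.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        ops = rng.integers(1, 6, size=40).astype(float)    # hypothetical 1-5 scores
        maint = rng.integers(1, 6, size=35).astype(float)
        eng = rng.integers(1, 6, size=30).astype(float)

        # Kolmogorov-Smirnov test against a normal with the sample's own moments
        print(stats.kstest(ops, "norm", args=(ops.mean(), ops.std(ddof=1))))
        print(stats.ttest_ind(ops, maint))                 # Student's t-test, two groups
        print(stats.f_oneway(ops, maint, eng))             # one-way ANOVA, all groups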

  5. Prompt gamma cold neutron activation analysis applied to biological materials

    International Nuclear Information System (INIS)

    Rossbach, M.; Hiep, N.T.

    1992-01-01

    Cold neutrons at the external neutron guide laboratory (ELLA) of the KFA Juelich are used to demonstrate their profitable application to the multielement characterization of biological materials. The set-up and experimental conditions of the Prompt Gamma Cold Neutron Activation Analysis (PGCNAA) device are described in detail. Results for C, H, N, S, K, B, and Cd using synthetic standards and the 'ratio' technique for calculation are reported for several reference materials and prove the method to be reliable and complementary to INAA with respect to the elements determined. (orig.)

  6. Methods of economic analysis applied to fusion research. Final report

    International Nuclear Information System (INIS)

    1983-01-01

    In this and previous efforts, ECON has provided an economic assessment of a fusion research program. This phase of the study focused on two tasks, the first concerned with the economics of fusion in an economy that relies heavily upon synthetic fuels, and the second concerned with the overall economic effects of pursuing soft energy technologies instead of hard technologies. This report is organized in two parts, the first entitled An Economic Analysis of Coproduction of Fusion-Electric Energy and Other Products, and the second entitled Arguments Associated with the Choice of Potential Energy Futures.

  7. Artificial Neural Network methods applied to sentiment analysis

    OpenAIRE

    Ebert, Sebastian

    2017-01-01

    Sentiment Analysis (SA) is the study of opinions and emotions that are conveyed by text. This field of study has commercial applications for example in market research (e.g., “What do customers like and dislike about a product?”) and consumer behavior (e.g., “Which book will a customer buy next when he wrote a positive review about book X?”). A private person can benefit from SA by automatic movie or restaurant recommendations, or from applications on the computer or smart phone that adapt to...

  8. SPI Trend Analysis of New Zealand Applying the ITA Technique

    Directory of Open Access Journals (Sweden)

    Tommaso Caloiero

    2018-03-01

    Full Text Available A natural temporary imbalance of water availability, consisting of persistent lower-than-average or higher-than-average precipitation, can cause extreme dry and wet conditions that adversely impact agricultural yields, water resources, infrastructure, and human systems. In this study, dry and wet periods in New Zealand were expressed using the Standardized Precipitation Index (SPI). First, both the short-term (3 and 6 months) and the long-term (12 and 24 months) SPI were estimated, and then possible trends in the SPI values were detected by means of a new graphical technique, the Innovative Trend Analysis (ITA), which allows the trend identification of the low, medium, and high values of a series. Results show that, in every area currently subject to drought, an increase in this phenomenon can be expected. Specifically, the results of this paper highlight that agricultural regions on the eastern side of the South Island, as well as the north-eastern regions of the North Island, are the most consistently vulnerable areas. In fact, in these regions, the trend analysis mainly showed a general reduction in all the values of the SPI: that is, a tendency toward heavier droughts and weaker wet periods.
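
    The core of the ITA technique is simple enough to sketch: split the series into two halves, sort each, and check whether the ranked values of the second half lie above or below the 1:1 line. The snippet below illustrates this on synthetic SPI-like values; it is a sketch of the idea, not the paper's implementation.

        import numpy as np

        def ita_halves(series):
            half = len(series) // 2
            return np.sort(series[:half]), np.sort(series[half:2 * half])

        spi = np.random.default_rng(1).normal(0.0, 1.0, 240)  # synthetic monthly SPI
        first, second = ita_halves(spi)
        above = second > first        # ranked value increased between the two halves
        print(f"{above.mean():.0%} of ranked values lie above the 1:1 line")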

  9. Dynamical Systems Analysis Applied to Working Memory Data

    Directory of Open Access Journals (Sweden)

    Fidan eGasimova

    2014-07-01

    Full Text Available In the present paper we investigate weekly fluctuations in working memory capacity (WMC) assessed over a period of two years. We use dynamical systems analysis, specifically a second-order linear differential equation, to model weekly variability in WMC in a sample of 112 9th graders. In our longitudinal data we use a B-spline imputation method to deal with missing data. The results show a significant negative frequency parameter in the data, indicating a cyclical pattern in weekly memory updating (MU) performance across time. We use a multilevel modeling approach to capture individual differences in model parameters and find that a higher initial performance level and a slower improvement on the MU task are associated with a slower frequency of oscillation. Additionally, we conduct a simulation study examining the analysis procedure's performance using different numbers of B-spline knots and values of time delay embedding dimensions. Results show that the number of knots in the B-spline imputation influences accuracy more than the number of embedding dimensions.
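
    As a minimal illustration of the model class used here, a second-order linear differential equation x'' = eta*x + zeta*x' oscillates when eta < 0, with an implied cycle length of 2*pi/sqrt(-eta). The parameter values below are assumptions chosen only to show the mechanics, not estimates from the study.

        import numpy as np
        from scipy.integrate import solve_ivp

        eta, zeta = -0.8, -0.05      # frequency and damping parameters (assumed)

        def rhs(t, y):
            x, dx = y
            return [dx, eta * x + zeta * dx]

        sol = solve_ivp(rhs, (0, 52), [1.0, 0.0], t_eval=np.linspace(0, 52, 200))
        print(f"implied cycle length ~ {2 * np.pi / np.sqrt(-eta):.1f} weeks")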

  10. Confirmatory factor analysis applied to the Force Concept Inventory

    Science.gov (United States)

    Eaton, Philip; Willoughby, Shannon D.

    2018-06-01

    In 1995, Huffman and Heller used exploratory factor analysis to draw into question the factors of the Force Concept Inventory (FCI). Since then, several papers have been published examining the factors of the FCI on larger sets of student responses, and understandable factors were extracted as a result. However, none of these proposed factor models had been verified, through the use of independent sets of data, not to be unique to their original samples. This paper seeks to confirm the factor models proposed by Scott et al. in 2012, and Hestenes et al. in 1992, as well as another expert model proposed within this study, through the use of confirmatory factor analysis (CFA) and a sample of 20 822 postinstruction student responses to the FCI. Upon application of CFA using the full sample, all three models were found to fit the data with acceptable global fit statistics. However, when CFA was performed using these models on smaller sample sizes, the models proposed by Scott et al. and Eaton and Willoughby were found to be far more stable than the model proposed by Hestenes et al. The goodness of fit of these models to the data suggests that the FCI can be scored on factors that are not unique to a single class. These scores could then be used to comment on how instruction methods affect the performance of students along a single factor, and more in-depth analyses of curriculum changes may be possible as a result.

  11. Multivariate calibration applied to the quantitative analysis of infrared spectra

    Energy Technology Data Exchange (ETDEWEB)

    Haaland, D.M.

    1991-01-01

    Multivariate calibration methods are very useful for improving the precision, accuracy, and reliability of quantitative spectral analyses. Spectroscopists can more effectively use these sophisticated statistical tools if they have a qualitative understanding of the techniques involved. A qualitative picture of the factor analysis multivariate calibration methods of partial least squares (PLS) and principal component regression (PCR) is presented using infrared calibrations based upon spectra of phosphosilicate glass thin films on silicon wafers. Comparisons of the relative prediction abilities of four different multivariate calibration methods are given based on Monte Carlo simulations of spectral calibration and prediction data. The success of multivariate spectral calibrations is demonstrated for several quantitative infrared studies. The infrared absorption and emission spectra of thin-film dielectrics used in the manufacture of microelectronic devices demonstrate rapid, nondestructive at-line and in-situ analyses using PLS calibrations. Finally, the application of multivariate spectral calibrations to reagentless analysis of blood is presented. We have found that the determination of glucose in whole blood taken from diabetics can be precisely monitored from the PLS calibration of either mid- or near-infrared spectra of the blood. Progress toward the non-invasive determination of glucose levels in diabetics is an ultimate goal of this research. 13 refs., 4 figs.
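
    A PLS calibration of the kind described is a few lines with scikit-learn. The sketch below builds synthetic "spectra" from two latent components plus noise and cross-validates a two-component PLS model; it stands in for, and is not, the glass-film or blood calibrations of the report.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        scores = rng.normal(size=(200, 2))            # latent "concentrations"
        loadings = rng.normal(size=(2, 150))          # pure-component "spectra"
        X = scores @ loadings + 0.05 * rng.normal(size=(200, 150))
        y = scores[:, 0]                              # property to calibrate

        pls = PLSRegression(n_components=2)
        print(cross_val_score(pls, X, y, cv=5, scoring="r2").mean())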

  12. Principles of Micellar Electrokinetic Capillary Chromatography Applied in Pharmaceutical Analysis

    Directory of Open Access Journals (Sweden)

    Árpád Gyéresi

    2013-02-01

    Full Text Available Since its introduction, capillary electrophoresis has shown great potential in areas where electrophoretic techniques have rarely been used before, including the analysis of pharmaceutical substances. The large majority of pharmaceutical substances are neutral from an electrophoretic point of view; consequently, separations by classic capillary zone electrophoresis, where separation is based on differences in the analytes' own electrophoretic mobilities, are hard to achieve. Micellar electrokinetic capillary chromatography, a hybrid method that combines chromatographic and electrophoretic separation principles, extends the applicability of capillary electrophoretic methods to neutral analytes. In micellar electrokinetic capillary chromatography, surfactants are added to the buffer solution in concentrations above their critical micellar concentration; consequently, micelles are formed, which undergo electrophoretic migration like any other charged particle. The separation is based on the differential partitioning of an analyte between a two-phase system: the mobile aqueous phase and the micellar pseudostationary phase. The present paper aims to summarize the basic aspects regarding separation principles and practical applications of micellar electrokinetic capillary chromatography, with particular attention to those relevant in pharmaceutical analysis.

  13. Analysis of bone mineral density of human bones for strength ...

    Indian Academy of Sciences (India)

    Different types of bone strength are required for various ... To statically analyse various methods to find BMD and related material ... bone study for research purpose. ..... and Dagoberto Vela Arvizo 2007 A qualitative stress analysis of a cross ...

  14. Angular filter refractometry analysis using simulated annealing [An improved method for characterizing plasma density profiles using angular filter refractometry]

    International Nuclear Information System (INIS)

    Angland, P.; Haberberger, D.; Ivancic, S. T.; Froula, D. H.

    2017-01-01

    Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas, using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison with the measured image is optimized. The optimization and the statistical uncertainty calculation are based on minimization of the χ2 test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.
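
    The annealing loop itself is compact; the sketch below shows it on a stand-in chi-squared function over an eight-parameter vector (matching the profile parameterization mentioned above). Everything here, cooling schedule included, is an illustrative assumption rather than the authors' code.

        import numpy as np

        def anneal(chi2, p0, steps=5000, T0=1.0, cooling=0.999, scale=0.01, seed=0):
            rng = np.random.default_rng(seed)
            p = np.asarray(p0, dtype=float)
            cost, T = chi2(p), T0
            for _ in range(steps):
                trial = p + scale * rng.normal(size=p.size)
                c = chi2(trial)
                # keep improvements; sometimes accept worse steps to escape minima
                if c < cost or rng.random() < np.exp(-(c - cost) / T):
                    p, cost = trial, c
                T *= cooling
            return p, cost

        # stand-in chi2: real use would compare a synthetic AFR image to the data
        best, best_cost = anneal(lambda p: np.sum((p - 1.0) ** 2), np.zeros(8))
        print(best_cost)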

  15. Two-dimensional DFA scaling analysis applied to encrypted images

    Science.gov (United States)

    Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.

    2015-01-01

    The technique of detrended fluctuation analysis (DFA) has been widely used to unveil scaling properties of many different signals. In this paper, we determine scaling properties of encrypted images by means of a two-dimensional DFA approach. To carry out the image encryption, we use an enhanced cryptosystem based on a rule-90 cellular automaton, and we compare the results obtained with its unmodified version and with the AES encryption system. The numerical results show that the encrypted images present a persistent behavior close to that of 1/f noise. These results point to the possibility that the DFA scaling exponent can be used to measure the quality of the encrypted image content.
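
    For orientation, ordinary one-dimensional DFA (of which the paper's method is the two-dimensional generalization) fits and removes a local trend in windows of the cumulative profile and tracks how the residual fluctuation grows with window size. A minimal sketch:

        import numpy as np

        def dfa(signal, scales):
            profile = np.cumsum(signal - np.mean(signal))
            F = []
            for s in scales:
                segs = len(profile) // s
                t = np.arange(s)
                res = []
                for i in range(segs):
                    seg = profile[i * s:(i + 1) * s]
                    trend = np.polyval(np.polyfit(t, seg, 1), t)
                    res.append(np.mean((seg - trend) ** 2))
                F.append(np.sqrt(np.mean(res)))
            return np.array(F)

        scales = np.array([16, 32, 64, 128, 256])
        noise = np.random.default_rng(0).normal(size=4096)
        alpha = np.polyfit(np.log(scales), np.log(dfa(noise, scales)), 1)[0]
        print(f"scaling exponent ~ {alpha:.2f} (0.5 expected for white noise)")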

  16. Applying Multi-Criteria Analysis Methods for Fire Risk Assessment

    Directory of Open Access Journals (Sweden)

    Pushkina Julia

    2015-11-01

    Full Text Available The aim of this paper is to demonstrate the application of multi-criteria analysis methods for optimising the fire risk identification and assessment process. The object of this research is fire risk and risk assessment. The subject of the research is the application of the analytic hierarchy process for modelling and assessing the influence of various fire risk factors. The results of the research conducted by the authors can be used by insurance companies to perform detailed assessments of the fire risks of an object and to calculate a risk surcharge on an insurance premium; by state supervisory institutions to determine whether the condition of an object complies with regulatory requirements; and by real estate owners and investors to carry out actions to reduce fire risks and minimise possible losses.
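
    The AHP step at the heart of such assessments reduces to extracting the principal eigenvector of a pairwise comparison matrix and checking Saaty's consistency ratio. The sketch below does this for three hypothetical fire-risk factors; the comparison values are invented for illustration.

        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],     # hypothetical pairwise judgements
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                       # priority weights of the risk factors

        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)
        cr = ci / 0.58                     # Saaty's random index for n = 3
        print(w, f"CR = {cr:.3f} (conventionally acceptable if < 0.1)")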

  17. Neutron activation analysis applied to nutritional and foodstuff studies

    International Nuclear Information System (INIS)

    Maihara, Vera A.; Santos, Paola S.; Moura, Patricia L.C.; Castro, Lilian P. de; Avegliano, Roseane P.

    2009-01-01

    Neutron Activation Analysis (NAA) has been successfully used on a regular basis in several areas of nutrition and foodstuffs. NAA has become an important and useful research tool due to the methodology's advantages. These include high accuracy, small sample quantities and no chemical treatment. This technique allows the determination of important elements directly related to human health. NAA also provides data concerning essential and toxic concentrations in foodstuffs and specific diets. In this paper some studies in the area of nutrition carried out at the Neutron Activation Laboratory of IPEN/CNEN-SP are presented: a Brazilian total diet study covering the nutritional element dietary intakes of the Sao Paulo state population; a study of trace elements in maternal milk; and the determination of essential trace elements in some edible mushrooms. (author)

  18. Downside Risk analysis applied to the Hedge Funds universe

    Science.gov (United States)

    Perelló, Josep

    2007-09-01

    Hedge funds are considered one of the fastest-growing sectors of portfolio management of the past decade. Optimal hedge fund management requires appropriate risk metrics. The classic CAPM theory and its Sharpe ratio fail to capture some crucial aspects due to the strongly non-Gaussian character of hedge fund statistics. A possible way out of this problem, while keeping the CAPM simplicity, is the so-called Downside Risk analysis. One important benefit lies in distinguishing between good and bad returns, that is, returns greater or lower than the investor's goal. We revisit the most popular Downside Risk indicators and provide new analytical results on them. We compute these measures by taking the Credit Suisse/Tremont Investable Hedge Fund Index data, with the Gaussian case as a benchmark. In this way, an unusual transversal reading of the existing Downside Risk measures is provided.
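
    The basic quantity behind most of the indicators revisited here is the downside deviation with respect to the investor's target, and the associated Sortino-style ratio. A minimal sketch on synthetic fat-tailed returns (all numbers invented):

        import numpy as np

        def downside_deviation(returns, target=0.0):
            shortfall = np.minimum(returns - target, 0.0)
            return np.sqrt(np.mean(shortfall ** 2))

        rng = np.random.default_rng(0)
        r = rng.standard_t(df=4, size=1000) * 0.02   # fat-tailed monthly returns
        dd = downside_deviation(r)
        print(f"downside deviation: {dd:.4f}, Sortino-style ratio: {r.mean() / dd:.3f}")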

  19. Applying importance-performance analysis to evaluate banking service quality

    Directory of Open Access Journals (Sweden)

    André Luís Policani Freitas

    2012-11-01

    Full Text Available In an increasingly competitive market, the identification of the most important aspects and the measurement of service quality as perceived by customers are important actions taken by organizations seeking a competitive advantage. In particular, this scenario is typical of the Brazilian banking sector. In this context, this article presents an exploratory case study in which Importance-Performance Analysis (IPA) was used to identify the strong and weak points of the services provided by a bank. In order to check the reliability of the questionnaire, Cronbach's alpha and correlation analyses were used. The results are presented, and some actions have been defined in order to improve the quality of services.

  20. Painleve singularity analysis applied to charged particle dynamics during reconnection

    International Nuclear Information System (INIS)

    Larson, J.W.

    1992-01-01

    For a plasma in the collisionless regime, test-particle modelling can lend some insight into the macroscopic behavior of the plasma, e.g. conductivity and heating. A common example for which this technique is used is a system with electric and magnetic fields given by B = (δy, z, γ) and E = (0, 0, ε), where δ, γ, and ε are constant parameters. This model can be used to describe plasma behavior near neutral lines (γ = 0), as well as current sheets (γ = 0, δ = 0). The integrability properties of the particle motion in such fields might affect the plasma's macroscopic behavior, and the author has asked the question 'For what values of δ, γ, and ε is the system integrable?' To answer this question, the author has employed Painleve singularity analysis, which is an examination of the singularity properties of a test particle's equations of motion in the complex time plane. This analysis has identified two field geometries for which the system's particle dynamics are integrable in terms of the second Painleve transcendent: the circular O-line case and the case of the neutral sheet configuration. These geometries yield particle dynamics that are integrable in the Liouville sense (i.e., there exist the proper number of integrals in involution) in an extended phase space which includes the time as a canonical coordinate, and this property is also true for nonzero γ. The singularity property tests also identified a large, dense set of X-line and O-line field geometries that yield dynamics that may possess the weak Painleve property. In the case of the X-line geometries, this result shows little relevance to the physical nature of the system, but the existence of a dense set of elliptical O-line geometries with this property may be related to the fact that for ε positive, one can construct asymptotic solutions in the limit t → ∞.

  1. Economic analysis of medical management applied for left colostomy.

    Science.gov (United States)

    Savlovschi, C; Serban, D; Andreescu, Cv; Dascalu, Am; Pantu, H

    2013-01-01

    This paper presents an analysis of the surgical treatment costs for left colostomy, aiming to calculate a mean cost per procedure and to identify means to optimize the economic management of this type of surgical procedure. A retrospective study was conducted on a group of 8 patients hospitalized in the 4th Surgery Department, Emergency University Hospital Bucharest, during the year 2012 for left colic neoplasms with obstruction signs, who were operated on with a left colostomy. The parameters followed in the studied group of patients were the medical expenses, divided into preoperative, intra-operative and immediate postoperative (postoperative hospitalization) costs. Two major types of colostomy were performed: left loop colostomy with intact tumour in 6 patients, and left end colostomy with tumour resection (Hartmann's procedure) in 2 patients. The mean cost of this type of surgical intervention was 4396.807 RON, representing 1068.742 euro. Statistical analysis did not reveal the average costs to vary with the type of procedure. The age of the study subjects was between 49 and 88, with an average of 61 years, and no correlation could be established between patient age and the level of medical spending. The costs involved by left colostomy can be reduced efficiently by decreasing the number of days of hospitalisation in the following ways: preoperative preparation and assessment of the subject in an outpatient regimen; and the accuracy of the surgical procedure, with a decrease in early postoperative complications and antibiotherapy, the second major cause of increased postoperative costs.

  2. An Inverse Kinematic Approach Using Groebner Basis Theory Applied to Gait Cycle Analysis

    Science.gov (United States)

    2013-03-01

    Master's thesis: An Inverse Kinematic Approach Using Groebner Basis Theory Applied to Gait Cycle Analysis, by Anum Barki, BS (AFIT-ENP-13-M-02, Department of the Air Force, Air Force Institute of Technology). Approved by Dr. Ronald F. Tuttle (Chairman) and Dr. Kimberly Kendricks.

  3. Current density and polarization curves for radial flow field patterns applied to PEMFCs (Proton Exchange Membrane Fuel Cells)

    International Nuclear Information System (INIS)

    Cano-Andrade, S.; Hernandez-Guerrero, A.; Spakovsky, M.R. von; Damian-Ascencio, C.E.; Rubio-Arana, J.C.

    2010-01-01

    A numerical solution of the current density and velocity fields of a 3-D PEM radial-configuration fuel cell is presented. The energy, momentum and electrochemical equations are solved using a computational fluid dynamics (CFD) code based on a finite volume scheme. Three cases are of principal interest for this radial model: four channels, eight channels and twelve channels placed in a symmetrical path over the flow field plate. Current-voltage curves for the three proposed models are presented, and the main factors that affect the behavior of each curve are discussed. Velocity contours are presented for the three models, showing how the fuel cell behavior is affected by the velocity variations in the radial configuration. All these results are presented for the case of high relative humidity. The favorable results obtained for this unconventional geometry seem to indicate that it could replace the conventional commercial geometries currently in use.
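
    Alongside full CFD solutions, polarization curves of this kind are often summarized with a standard empirical model combining activation, ohmic and mass-transport losses. The sketch below uses that generic form with invented parameters, purely to show the shape of a current-voltage curve; it is not fitted to the paper's results.

        import numpy as np

        def cell_voltage(i, E0=1.0, b=0.05, R=0.25, m=2e-5, n=8.0, i0=1e-4):
            """Empirical form: open-circuit minus Tafel, ohmic and mass-transport terms."""
            return E0 - b * np.log(i / i0) - R * i - m * np.exp(n * i)

        i = np.linspace(0.05, 1.2, 5)        # current density [A/cm^2] (assumed range)
        for ii, vv in zip(i, cell_voltage(i)):
            print(f"i = {ii:.2f} A/cm^2 -> V = {vv:.3f} V")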

  4. Power Spectral Density Specification and Analysis of Large Optical Surfaces

    Science.gov (United States)

    Sidick, Erkin

    2009-01-01

    The 2-dimensional Power Spectral Density (PSD) can be used to characterize the mid- and high-spatial frequency components of the surface height errors of an optical surface. We found it necessary to have a complete, easy-to-use approach for specifying and evaluating the PSD characteristics of large optical surfaces, an approach that allows one to specify the surface quality of a large optical surface based on simulated results using a PSD function, and to evaluate the measured surface profile data of the same optic against the simulated predictions during the specification-derivation process. This paper provides a complete mathematical description of PSD error and proposes a new approach in which a 2-dimensional (2D) PSD is converted into a 1-dimensional (1D) one by azimuthally averaging the 2D PSD. The 1D PSD calculated this way has the same unit and the same profile as the original PSD function, thus allowing one to compare the two directly.
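
    The azimuthal-averaging step described above is straightforward to express with NumPy: bin the 2D PSD by integer radial frequency and average within each annulus. The sketch below applies it to a random stand-in height map rather than real metrology data.

        import numpy as np

        def azimuthal_average(psd2d):
            ny, nx = psd2d.shape
            y, x = np.indices((ny, nx))
            r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
            sums = np.bincount(r.ravel(), weights=psd2d.ravel())
            counts = np.bincount(r.ravel())
            return sums / counts            # mean PSD per radial-frequency bin

        surface = np.random.default_rng(0).normal(size=(256, 256))  # stand-in map
        psd2d = np.abs(np.fft.fftshift(np.fft.fft2(surface))) ** 2
        print(azimuthal_average(psd2d)[:5])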

  5. Reliability analysis based on a novel density estimation method for structures with correlations

    Directory of Open Access Journals (Sweden)

    Baoyu LI

    2017-06-01

    Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, and the failure probability can be easily obtained by integration over the failure domain. However, efficiently estimating the PDF is still an urgent problem to be solved. The existing fractional-moment-based maximum entropy method provides a very advanced approach to PDF estimation, but its main shortcoming is that it limits the reliability analysis to structures with independent inputs. In fact, structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method and applies the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations; UT is a very efficient moment estimation method for models with any inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Besides, the number of function evaluations required by the proposed method in reliability analysis, which is determined by UT, is really small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.

  6. ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density.

    Science.gov (United States)

    Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro

    2018-01-01

    The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution, because it provides a good fit to multiple empirical data. The complexity of this distribution makes the use of computational tools an essential element. Therefore, there is a strong need for efficient and versatile computational tools for research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools programmed in Python and developed for the numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss advantages of and differences between the least squares and maximum likelihood methods, and quantitatively evaluate the goodness of the obtained fits (which is usually an overlooked point in most of the literature in the area). The analysis done allows one to identify outliers in the empirical datasets and to judiciously determine whether there is a need for data trimming and at which points it should be done.
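
    Readers without the ExGUtils package can reproduce the basic fitting step with SciPy's exponnorm, which implements the same ex-Gaussian family (its shape K relates to the usual parameters via tau = K * sigma). The "reaction times" below are synthetic.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        mu, sigma, tau = 400.0, 50.0, 150.0    # assumed RT parameters [ms]
        rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

        K, loc, scale = stats.exponnorm.fit(rt)   # maximum-likelihood fit
        print(f"mu ~ {loc:.0f} ms, sigma ~ {scale:.0f} ms, tau ~ {K * scale:.0f} ms")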

  7. ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density

    Directory of Open Access Journals (Sweden)

    Carmen Moret-Tatay

    2018-05-01

    Full Text Available The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution, because it provides a good fit to multiple empirical data. The complexity of this distribution makes the use of computational tools an essential element. Therefore, there is a strong need for efficient and versatile computational tools for research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools programmed in Python and developed for the numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss advantages of and differences between the least squares and maximum likelihood methods, and quantitatively evaluate the goodness of the obtained fits (which is usually an overlooked point in most of the literature in the area). The analysis done allows one to identify outliers in the empirical datasets and to judiciously determine whether there is a need for data trimming and at which points it should be done.

  8. Perturbation Method of Analysis Applied to Substitution Measurements of Buckling

    Energy Technology Data Exchange (ETDEWEB)

    Persson, Rolf

    1966-11-15

    Calculations with two-group perturbation theory on substitution experiments with homogenized regions show that a condensation of the results into a one-group formula is possible, provided that a transition region is introduced in a proper way. In heterogeneous cores the transition region comes in as a consequence of a new cell concept. By making use of progressive substitutions, the properties of the transition region can be regarded as fitting parameters in the evaluation procedure. The thickness of the region is approximately equal to the sum of 1/(1/τ + 1/L²)^(1/2) for the test and reference regions. Consequently a region where L² >> τ, e.g. D₂O, contributes √τ to the thickness. In cores where τ >> L², e.g. H₂O assemblies, the thickness of the transition region is determined by L. Experiments on rod lattices in D₂O and on test regions of D₂O alone (where B² = -1/L²) are analysed. The lattice measurements, where the pitches differed by a factor of √2, gave excellent results, whereas the determination of the diffusion length in D₂O by this method was not quite successful. Even regions containing only one test element can be used in a meaningful way in the analysis.

  9. Soft tissue cephalometric analysis applied to Himachali ethnic population

    Directory of Open Access Journals (Sweden)

    Isha Aggarwal

    2016-01-01

    Full Text Available Introduction: Modern society considers facial attractiveness an important physical attribute. The great variance in the soft tissue drape of the human face complicates accurate assessment of the soft tissue profile, and it is a known fact that the facial features of different ethnic groups differ significantly. This study was undertaken to establish norms for the Himachali ethnic population. Materials and Methods: The sample comprised lateral cephalograms taken in natural head position of 100 normal individuals (50 males, 50 females). The cephalograms were analyzed by Arnett soft tissue cephalometric analysis for orthodontic diagnosis and treatment planning. Student's t-test was used to compare the means of the two groups. Results: Statistically significant differences were found between Himachali males and females in certain key parameters. Males have thicker soft tissue structures and a more acute nasolabial angle than females. Males have longer faces, and females have a greater interlabial gap and maxillary incisor exposure. Males have more deep-set facial structures than females. Conclusions: Statistically significant differences were found between Himachali males and females in certain key parameters. Differences were also noted between other ethnic groups and Himachali faces.

  10. MULTI-CRITERIA ANALYSIS APPLIED TO LANDSLIDE SUSCEPTIBILITY MAPPING

    Directory of Open Access Journals (Sweden)

    Mariana Madruga de Brito

    2017-10-01

    Full Text Available This paper presents the application of a multi-criteria analysis (MCA) tool for landslide susceptibility assessment in Porto Alegre municipality, southern Brazil. A knowledge-driven approach was used, aiming to ensure an optimal use of the available information. The landslide conditioning factors considered were slope, lithology, flow accumulation and distance from lineaments. Standardization of these factors was done through fuzzy membership functions, and evaluation of their relative importance for landslide predisposition was supported by the analytic hierarchy process (AHP), based on local expert knowledge. Finally, factors were integrated in a GIS environment using the weighted linear combination (WLC) method. For validation, an inventory including 107 landslide points recorded between 2007 and 2013 was used. Results indicated that 8.2% (39.40 km²) of the study area is highly or very highly susceptible to landslides. An overall accuracy of 95% was found, with an area under the receiver operating characteristic (ROC) curve of 0.960. Therefore, the resulting map can be regarded as useful for monitoring landslide-prone areas. Based on the findings, it is concluded that the proposed method is effective for susceptibility assessment, since it yielded meaningful results and does not require extensive input data.

  11. Applying Hierarchical Task Analysis Method to Discovery Layer Evaluation

    Directory of Open Access Journals (Sweden)

    Marlen Promann

    2015-03-01

    Full Text Available Libraries are implementing discovery layers to offer better user experiences. While usability tests have been helpful in evaluating the success or failure of implementing discovery layers in the library context, the focus has remained on their relative interface benefits over traditional federated search. The informal, site- and context-specific usability tests have offered little to test the rigor of discovery layers against the user goals, motivations and workflows they have been designed to support. This study proposes hierarchical task analysis (HTA) as an important complementary evaluation method to usability testing of discovery layers. Relevant literature is reviewed for discovery layers and the HTA method. As no previous application of HTA to the evaluation of discovery layers was found, this paper presents the application of HTA as an expert-based and workflow-centered (e.g., retrieving a relevant book or a journal article) method for evaluating discovery layers. Purdue University's Primo by Ex Libris was used to map eleven use cases as HTA charts. Nielsen's Goal Composition theory was used as an analytical framework to evaluate the goal charts from two perspectives: (a) users' physical interactions (i.e., clicks), and (b) users' cognitive steps (i.e., decision points for what to do next). A brief comparison of HTA and usability test findings is offered as a way of conclusion.

  12. Applied and computational harmonic analysis on graphs and networks

    Science.gov (United States)

    Irion, Jeff; Saito, Naoki

    2015-09-01

    In recent years, the advent of new sensor technologies and social network infrastructure has provided huge opportunities and challenges for analyzing data recorded on such networks. In the case of data on regular lattices, computational harmonic analysis tools such as the Fourier and wavelet transforms have well-developed theories and proven track records of success. It is therefore quite important to extend such tools from the classical setting of regular lattices to the more general setting of graphs and networks. In this article, we first review basics of graph Laplacian matrices, whose eigenpairs are often interpreted as the frequencies and the Fourier basis vectors on a given graph. We point out, however, that such an interpretation is misleading unless the underlying graph is either an unweighted path or cycle. We then discuss our recent effort of constructing multiscale basis dictionaries on a graph, including the Hierarchical Graph Laplacian Eigenbasis Dictionary and the Generalized Haar-Walsh Wavelet Packet Dictionary, which are viewed as generalizations of the classical hierarchical block DCTs and the Haar-Walsh wavelet packets, respectively, to the graph setting. Finally, we demonstrate the usefulness of our dictionaries by using them to simultaneously segment and denoise 1-D noisy signals sampled on regular lattices, a problem where classical tools have difficulty.
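
    The special case that makes the "graph Fourier" reading safe, the unweighted path, can be checked in a few lines: the path-graph Laplacian is diagonalized by the DCT-II basis. A minimal sketch of that check:

        import numpy as np

        n = 8
        L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        L[0, 0] = L[-1, -1] = 1                 # path graph (free boundaries)

        eigvals, eigvecs = np.linalg.eigh(L)
        k = np.arange(n)
        dct1 = np.cos(np.pi * (k + 0.5) / n)    # DCT-II vector, frequency 1
        dct1 /= np.linalg.norm(dct1)
        print(abs(eigvecs[:, 1] @ dct1))        # ~1.0: same vector up to sign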

  13. Applying revised gap analysis model in measuring hotel service quality.

    Science.gov (United States)

    Lee, Yu-Cheng; Wang, Yu-Che; Chien, Chih-Hung; Wu, Chia-Huei; Lu, Shu-Chiung; Tsai, Sang-Bing; Dong, Weiwei

    2016-01-01

    The number of tourists coming to Taiwan has grown by 10-20% annually since 2010, driven by an increasing number of foreign tourists, particularly after deregulation allowed the admission of tourist groups, and later of foreign individual tourists, from mainland China. The purpose of this study is to propose a revised gap model to evaluate and improve service quality in the Taiwanese hotel industry. With this model, service quality can be clearly measured through gap analysis, which is more effective for offering direction in developing and improving service quality. The HOLSERV instrument was used to identify and analyze service gaps from the perceptions of internal and external customers. The sample for this study included three main categories of respondents: tourists, employees, and managers. The results show that five gaps influenced tourists' evaluations of service quality. In particular, the study revealed that Gap 1 (management perceptions vs. customer expectations) and Gap 9 (service provider perceptions of management perceptions vs. service delivery) were more critical than the others in affecting perceived service quality, making service delivery the main area of improvement. This study contributes an evaluation of the service quality of the Taiwanese hotel industry from the perspectives of customers, service providers, and managers, which is considerably valuable for hotel managers. It was the aim of this study to explore all of these together in order to better understand the possible gaps in the hotel industry in Taiwan.

  14. THE ANALYSIS OF PARTIAL DISCHARGE (PD) FROM ELECTRICAL TREEING IN LINEAR LOW DENSITY POLYETHYLENE (LLDPE) AND HIGH DENSITY POLYETHYLENE (HDPE)

    Directory of Open Access Journals (Sweden)

    Hermawan Hermawan

    2012-02-01

    Full Text Available Recently, the transmission of electric energy has been developed using insulated cables, and suitable materials for cable insulation are LLDPE and HDPE. In order to assess the quality of the insulation system, partial discharge (PD) measurements were carried out. PD can eventually lead to complete insulation failure (breakdown). It is therefore very important to understand the characteristics of PD and the processes accompanying it, because PD is a main factor causing insulation failure. This paper presents the results of PD measurements in the laboratory using a needle-plane electrode arrangement, supported by equipment such as a GDS 2104 GW Instek digital oscilloscope, an HPF, and an RC detector. The polymer samples used in this research were LLDPE (Linear Low Density Polyethylene) and HDPE, each with dimensions of 20 x 4 x 25 mm³. The needle was made of steel (length 50 mm, diameter 1.15 mm) and was inserted into the polymer material. The distance between the needle and the plane was 5 mm. The applied voltage for each sample was 16 kVrms, 18 kVrms, 20 kVrms and 22 kVrms. PD data were taken in the first minute, the 10th minute, the 20th minute and so on until the 180th minute. The measurement results show that the PD number and the maximum charge, as functions of time and of applied voltage, tended to increase in both LLDPE and HDPE, but the PD intensity in HDPE was higher than in LLDPE.

  15. Spatial analysis of NDVI readings with difference sampling density

    Science.gov (United States)

    Advanced remote sensing technologies provide researchers an innovative way of collecting spatial data for use in precision agriculture. Sensor information and spatial analysis together allow for a complete understanding of the spatial complexity of a field and its crop. The objective of the study was...

  16. Functional analysis of the cross-section form and X-ray density of human ulnae

    International Nuclear Information System (INIS)

    Hilgen, B.

    1981-01-01

    On 20 ulnae, the form of the cross-sections and the distribution of the X-ray density were investigated at five different cross-section heights. The analysis of the cross-section forms was carried out using plane contraction figures; the X-ray density was established by means of the equidensity line method. (orig.)

  17. Statistical analysis and Kalman filtering applied to nuclear materials accountancy

    International Nuclear Information System (INIS)

    Annibal, P.S.

    1990-08-01

    Much theoretical research has been carried out on the development of statistical methods for nuclear material accountancy. In practice, physical, financial and time constraints mean that the techniques must be adapted to give an optimal performance in plant conditions. This thesis aims to bridge the gap between theory and practice, to show the benefits to be gained from a knowledge of the facility operation. Four different aspects are considered; firstly, the use of redundant measurements to reduce the error on the estimate of the mass of heavy metal in an 'accountancy tank' is investigated. Secondly, an analysis of the calibration data for the same tank is presented, establishing bounds for the error and suggesting a means of reducing them. Thirdly, a plant-specific method of producing an optimal statistic from the input, output and inventory data, to help decide between 'material loss' and 'no loss' hypotheses, is developed and compared with existing general techniques. Finally, an application of the Kalman Filter to materials accountancy is developed, to demonstrate the advantages of state-estimation techniques. The results of the analyses and comparisons illustrate the importance of taking into account a complete and accurate knowledge of the plant operation, measurement system, and calibration methods, to derive meaningful results from statistical tests on materials accountancy data, and to give a better understanding of critical random and systematic error sources. The analyses were carried out on the head-end of the Fast Reactor Reprocessing Plant, where fuel from the prototype fast reactor is cut up and dissolved. However, the techniques described are general in their application. (author)
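
    As a toy version of the state-estimation idea advocated in the final part, a scalar Kalman filter can track a (nominally static) inventory from noisy balance measurements; the process and measurement noise levels below are illustrative, not plant values.

        import numpy as np

        def kalman_inventory(measurements, q=0.01, r=1.0, x0=0.0, p0=10.0):
            x, p, est = x0, p0, []
            for z in measurements:
                p = p + q                  # predict: inventory assumed static
                k = p / (p + r)            # Kalman gain
                x = x + k * (z - x)        # update with the new measurement
                p = (1 - k) * p
                est.append(x)
            return np.array(est)

        rng = np.random.default_rng(0)
        z = 100.0 + rng.normal(0, 1.0, 20)     # 20 noisy balance measurements
        print(kalman_inventory(z)[-1])         # estimate converges near 100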

  18. Improving the flash flood frequency analysis applying dendrogeomorphological evidences

    Science.gov (United States)

    Ruiz-Villanueva, V.; Ballesteros, J. A.; Bodoque, J. M.; Stoffel, M.; Bollschweiler, M.; Díez-Herrero, A.

    2009-09-01

    Flash floods are one of the natural hazards that cause major damage worldwide. Especially in Mediterranean areas they provoke high economic losses every year. In mountain areas with high stream gradients, flood events are characterized by extremely high flow and debris transport rates. Flash flood analysis in mountain areas presents specific scientific challenges. On one hand, there is a lack of information on precipitation and discharge due to a lack of spatially well-distributed gauge stations with long records. On the other hand, gauge stations may not record correctly during extreme events when they are damaged or the discharge exceeds the recordable level. In this case, no systematic data allow improvement of the understanding of the spatial and temporal occurrence of the process. Since historic documentation is normally scarce or even completely missing in mountain areas, tree-ring analysis can provide an alternative approach. Flash floods may influence trees in different ways: (1) tilting of the stem through the unilateral pressure of the flowing mass or individual boulders; (2) root exposure through erosion of the banks; (3) injuries and scars caused by boulders and wood transported in the flow; (4) decapitation of the stem and resulting candelabra growth through the severe impact of boulders; (5) stem burial through deposition of material. The trees react to these disturbances with specific growth changes such as abrupt changes in the yearly increment and anatomical changes like reaction wood or callus tissue. In this study, we sampled 90 cross sections and 265 increment cores of trees heavily affected by past flash floods in order to date past events and to reconstruct recurrence intervals in two torrent channels located in the Spanish Central System. The first study site is located along the Pelayo River, a torrent in natural conditions. Based on the external disturbances of trees and their geomorphological position, 114 Pinus pinaster (Ait

  19. Influenza: a scientometric and density-equalizing analysis.

    Science.gov (United States)

    Fricke, Ralph; Uibel, Stefanie; Klingelhoefer, Doris; Groneberg, David A

    2013-09-30

    Novel influenza in 2009, caused by H1N1, as well as seasonal influenza, remains a challenge for public health sectors worldwide. An increasing number of publications referring to this infectious disease makes it difficult to distinguish the relevant research output. The current study used scientometric indices for a detailed investigation of influenza-related research activity, and the method of density-equalizing mapping to visualize the differences in overall research output worldwide. The aim of the study was to compare scientific effort over time as well as its geographical distribution, including cooperation at the national and international level. Publication data were retrieved from the Web of Science (WoS) of Thomson Scientific and subsequently analysed to show geographical distributions and the development of research output over time. The query retrieved 51,418 publications listed in WoS for the time interval from 1900 to 2009. There has been a continuous increase in research output and general citation activity, especially since 1990. The 51,418 identified publications were published by researchers from 151 different countries. Scientists from the USA participate in more than 37 percent of all publications, followed by researchers from the UK and Germany with more than five percent each. In addition, the USA is the focus of international cooperation. In terms of the number of publications on influenza, the Journal of Virology ranks first, followed by Vaccine and Virology. The highest impact factor (IF 2009) in this selection is that of The Lancet (30.75). Robert Webster appears to be the most prolific author, contributing the most publications in the field of influenza. This study reveals an increasing and wide research interest in influenza. Nevertheless, citation-based declarations of scientific quality should be considered critically due to distortion by self-citation and co-authorship.

  20. Applied machine learning in greenhouse simulation; new application and analysis

    Directory of Open Access Journals (Sweden)

    Morteza Taki

    2018-06-01

    Full Text Available Predicting the inside environment variables of greenhouses is very important because they play a vital role in greenhouse cultivation and energy loss, especially in cold and hot regions. The greenhouse environment is an uncertain nonlinear system that classical modeling methods have difficulty describing. The main goal of this study is therefore to select the best method, between Artificial Neural Networks (ANN) and the Support Vector Machine (SVM), for estimating three different variables, namely inside air, soil and plant temperatures (Ta, Ts, Tp), and also the energy exchange, in a polyethylene greenhouse in Shahreza city, Isfahan province, Iran. The environmental factors influencing the inside temperatures, such as outside air temperature, wind speed and outside solar radiation, were collected as data samples. In this research, 13 different training algorithms were used for the ANN models (MLP, RBF). Based on K-fold cross validation and Randomized Complete Block (RCB) methodology, the best model was selected. The results showed that the type of training algorithm and the kernel function are very important factors in the performance of ANN (RBF and MLP) and SVM models, respectively. Comparing the RBF, MLP and SVM models showed that the performance of RBF in predicting the Ta, Tp and Ts variables is better, according to small values of the RMSE and MAPE indices and large values of R2. The range of the RMSE and MAPE factors for the RBF model predicting Ta, Tp and Ts was between 0.07 and 0.12 °C and 0.28-0.50%, respectively. Generalizability and stability of the RBF model under 5-fold cross validation analysis showed that this method can be used with small data sets. The performance of the best model (RBF) in estimating the energy loss and exchange in the greenhouse, compared against heat transfer models, showed that this method can reproduce the real greenhouse data and then predict the energy loss and exchange with high accuracy. Keywords: Black box method, Energy lost, Environmental situation, Energy
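
    The model comparison described maps naturally onto scikit-learn, with the caveat that scikit-learn has no RBF network, so an MLP and an RBF-kernel SVM stand in below. The inputs and target are synthetic stand-ins for the outside-weather features and inside air temperature.

        import numpy as np
        from sklearn.model_selection import KFold, cross_val_score
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = np.column_stack([rng.uniform(0, 35, 300),      # outside air temperature
                             rng.uniform(0, 10, 300),      # wind speed
                             rng.uniform(0, 1000, 300)])   # solar radiation
        Ta = 5 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0, 1, 300)

        cv = KFold(n_splits=5, shuffle=True, random_state=0)
        for name, model in [("MLP", MLPRegressor((20,), max_iter=2000, random_state=0)),
                            ("SVM", SVR(kernel="rbf"))]:
            pipe = make_pipeline(StandardScaler(), model)
            print(name, cross_val_score(pipe, X, Ta, cv=cv, scoring="r2").mean())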

  1. Probability Density Components Analysis: A New Approach to Treatment and Classification of SAR Images

    Directory of Open Access Journals (Sweden)

    Osmar Abílio de Carvalho Júnior

    2014-04-01

    Full Text Available Speckle noise (salt and pepper) is inherent to synthetic aperture radar (SAR); it causes the usual noise-like granular aspect and complicates image classification. In SAR image analysis, spatial information can be a particular benefit for denoising and for mapping classes characterized by a statistical distribution of pixel intensities from a complex and heterogeneous spectral response. This paper proposes Probability Density Components Analysis (PDCA), a new alternative that combines filtering and frequency histograms to improve the classification procedure for single-channel synthetic aperture radar (SAR) images. This method was tested on L-band SAR data from the Advanced Land Observation System (ALOS) Phased-Array Synthetic-Aperture Radar (PALSAR) sensor. The study area is located in the Brazilian Amazon rainforest, northern Rondônia State (municipality of Candeias do Jamari), containing forest and land-use patterns. The proposed algorithm uses a moving window over the image, estimating the probability density curve in different image components. Therefore, a single input image generates an output with multiple components. Initially the multi-components should be treated by noise-reduction methods, such as maximum noise fraction (MNF) or noise-adjusted principal components (NAPCs). Both methods enable noise reduction as well as the ordering of multi-component data in terms of image quality. In this paper, the NAPC applied to the multi-components provided large reductions in noise levels, and color composites considering the first NAPC enhance the classification of different surface features. For the spectral classification, the Spectral Correlation Mapper and Minimum Distance were used. The results obtained were similar to the visual interpretation of optical images from TM-Landsat and Google Maps.

  2. Exploratory Factor Analysis as a Construct Validation Tool: (Mis)applications in Applied Linguistics Research

    Science.gov (United States)

    Karami, Hossein

    2015-01-01

    Factor analysis has been frequently exploited in applied research to provide evidence about the underlying factors in various measurement instruments. A close inspection of a large number of studies published in leading applied linguistic journals shows that there is a misconception among applied linguists as to the relative merits of exploratory…

  3. Applied Drama and the Higher Education Learning Spaces: A Reflective Analysis

    Science.gov (United States)

    Moyo, Cletus

    2015-01-01

    This paper explores Applied Drama as a teaching approach in Higher Education learning spaces. The exploration takes a reflective analysis approach by first examining the impact that Applied Drama has had on my career as a Lecturer/Educator/Teacher working in Higher Education environments. My engagement with Applied Drama practice and theory is…

  4. A novel bi-level meta-analysis approach: applied to biological pathway analysis.

    Science.gov (United States)

    Nguyen, Tin; Tagett, Rebecca; Donato, Michele; Mitrea, Cristina; Draghici, Sorin

    2016-02-01

    The accumulation of high-throughput data in public repositories creates a pressing need for integrative analysis of multiple datasets from independent experiments. However, study heterogeneity, study bias, outliers and the lack of power of available methods present real challenges in integrating genomic data. One practical drawback of many P-value-based meta-analysis methods, including Fisher's, Stouffer's, minP and maxP, is that they are sensitive to outliers. Another drawback is that, because they perform just one statistical test for each individual experiment, they may not fully exploit the potentially large number of samples within each study. We propose a novel bi-level meta-analysis approach that employs the additive method and the Central Limit Theorem within each individual experiment and also across multiple experiments. We prove that the bi-level framework is robust against bias, less sensitive to outliers than other methods, and more sensitive to small changes in signal. For comparative analysis, we demonstrate that the intra-experiment analysis has more power than the equivalent statistical test performed on a single large experiment. For pathway analysis, we compare the proposed framework versus classical meta-analysis approaches (Fisher's, Stouffer's and the additive method) as well as against a dedicated pathway meta-analysis package (MetaPath), using 1252 samples from 21 datasets related to three human diseases, acute myeloid leukemia (9 datasets), type II diabetes (5 datasets) and Alzheimer's disease (7 datasets). Our framework outperforms its competitors to correctly identify pathways relevant to the phenotypes. The framework is sufficiently general to be applied to any type of statistical meta-analysis. The R scripts are available on demand from the authors (contact: sorin@wayne.edu). Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.
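    The additive combination step the abstract refers to can be sketched in a few lines. Under the null hypothesis each p-value is uniform on (0, 1), so their sum is approximately normal by the CLT; this is the classical Edgington-style additive method, shown here as a minimal illustration rather than the authors' full bi-level pipeline:

```python
import numpy as np
from scipy.stats import norm

def additive_combine(pvalues):
    """Edgington-style additive combination: under H0 each p ~ U(0,1), so
    sum(p) is approximately N(n/2, n/12) by the Central Limit Theorem."""
    p = np.asarray(pvalues, dtype=float)
    n = p.size
    z = (p.sum() - n / 2.0) / np.sqrt(n / 12.0)
    return norm.cdf(z)   # small sums of p-values give small combined p

# Moderate evidence spread over five studies, no single dominant outlier
print(additive_combine([0.06, 0.10, 0.04, 0.09, 0.12]))
```

    Because the statistic is a sum rather than a product, one extreme p-value cannot dominate the result, which is the robustness-to-outliers property the abstract highlights.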

  5. Analysis of line integrated electron density using plasma position data on Korea Superconducting Tokamak Advanced Research

    International Nuclear Information System (INIS)

    Nam, Y. U.; Chung, J.

    2010-01-01

    A 280 GHz single-channel horizontal millimeter-wave interferometer system has been installed for plasma electron density measurements on the Korea Superconducting Tokamak Advanced Research (KSTAR) device. This system has a triangular beam path that does not pass through the plasma axis due to geometrical constraints in the superconducting tokamak. The term line density on KSTAR therefore has a different meaning from the line density of other tokamaks. To estimate the peak density and the mean density from the measured line density, information on the position of the plasma is needed. This information has been calculated from tangentially viewed visible images using the toroidal symmetry of the plasma, and IDL (Interactive Data Language) routines have been developed for this purpose. The calculated plasma position data correspond well to calculation results from magnetic analysis. With the position data and an estimated plasma profile, the peak density and the mean density have been obtained from the line density. From these results, changes in the plasma density itself can be separated from the effects of plasma movement, giving valuable information on the plasma status.

  6. Arbitrary-order Hilbert Spectral Analysis and Intermittency in Solar Wind Density Fluctuations

    Science.gov (United States)

    Carbone, Francesco; Sorriso-Valvo, Luca; Alberti, Tommaso; Lepreti, Fabio; Chen, Christopher H. K.; Němeček, Zdenek; Šafránková, Jana

    2018-05-01

    The properties of inertial- and kinetic-range solar wind turbulence have been investigated with the arbitrary-order Hilbert spectral analysis method, applied to high-resolution density measurements. Due to the small sample size and to the presence of strong nonstationary behavior and large-scale structures, the classical analysis in terms of structure functions may prove to be unsuccessful in detecting the power-law behavior in the inertial range, and may underestimate the scaling exponents. However, the Hilbert spectral method provides an optimal estimation of the scaling exponents, which have been found to be close to those for velocity fluctuations in fully developed hydrodynamic turbulence. At smaller scales, below the proton gyroscale, the system loses its intermittent multiscaling properties and converges to a monofractal process. The resulting scaling exponents, obtained at small scales, are in good agreement with those of classical fractional Brownian motion, indicating a long-term memory in the process, and the absence of correlations around the spectral-break scale. These results provide important constraints on models of kinetic-range turbulence in the solar wind.

  7. Polycystic ovary syndrome: analysis of the global research architecture using density equalizing mapping.

    Science.gov (United States)

    Brüggmann, Dörthe; Berges, Lea; Klingelhöfer, Doris; Bauer, Jan; Bendels, Michael; Louwen, Frank; Jaque, Jenny; Groneberg, David A

    2017-06-01

    Polycystic ovary syndrome (PCOS) is the most common cause of female infertility worldwide. Although the related research output is constantly growing, no detailed global map of the scientific architecture has so far been created encompassing quantitative, qualitative, socioeconomic and gender aspects. We used the NewQIS platform to assess all PCOS-related publications indexed between 1900 and 2014 in the Web of Science, and applied density equalizing mapping projections, scientometric techniques and economic benchmarking procedures. A total of 6261 PCOS-specific publications and 703 international research collaborations were found. The USA was identified as the most active country in total and collaborative research activity. In the socioeconomic analysis, the USA was also ranked first (25.49 PCOS-related publications per gross domestic product [GDP]/capita), followed by the UK, Italy and Greece. When research activity was related to population size, Scandinavian countries and Greece were leading the field. For many highly productive countries, gender analysis revealed a high ratio of female scientists working on PCOS with the exception of Japan. In this study, we have created the first picture of global PCOS research, which largely differs from other gynaecologic conditions and indicates that most related research and collaborations originate from high-income countries. Copyright © 2017 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  8. Beyond Time out and Table Time: Today's Applied Behavior Analysis for Students with Autism

    Science.gov (United States)

    Boutot, E. Amanda; Hume, Kara

    2012-01-01

    Recent mandates related to the implementation of evidence-based practices for individuals with autism spectrum disorder (ASD) require that autism professionals both understand and are able to implement practices based on the science of applied behavior analysis (ABA). The use of the term "applied behavior analysis" and its related concepts…

  9. Analysis of the photophysical properties of zearalenone using density functional theory

    Science.gov (United States)

    The intrinsic photophysical properties of the resorcylic acid moiety of zearalenone offer a convenient label free method to determine zearalenone levels in contaminated agricultural products. Density functional theory and steady-state fluorescence methods were applied to investigate the role of stru...

  10. Unsupervised neural spike sorting for high-density microelectrode arrays with convolutive independent component analysis.

    Science.gov (United States)

    Leibig, Christian; Wachtler, Thomas; Zeck, Günther

    2016-09-15

    Unsupervised identification of action potentials in multi-channel extracellular recordings, in particular from high-density microelectrode arrays with thousands of sensors, is an unresolved problem. While independent component analysis (ICA) achieves rapid unsupervised sorting, it ignores the convolutive structure of extracellular data, thus limiting the unmixing to a subset of neurons. Here we present a spike sorting algorithm based on convolutive ICA (cICA) to retrieve a larger number of accurately sorted neurons than with instantaneous ICA while accounting for signal overlaps. Spike sorting was applied to datasets with varying signal-to-noise ratios (SNR: 3-12) and 27% spike overlaps, sampled at either 11.5 or 23 kHz on 4365 electrodes. We demonstrate how the instantaneity assumption in ICA-based algorithms has to be relaxed in order to improve the spike sorting performance for high-density microelectrode array recordings. Reformulating the convolutive mixture as an instantaneous mixture by modeling several delayed samples jointly is necessary to increase the signal-to-noise ratio. Our results emphasize that different cICA algorithms are not equivalent. Spike sorting performance was assessed with ground-truth data generated from experimentally derived templates. The presented spike sorter was able to extract ≈90% of the true spike trains with an error rate below 2%. It was superior to two alternative (c)ICA methods (≈80% accurately sorted neurons) and comparable to a supervised sorting. Our new algorithm represents a fast solution to overcome the current bottleneck in spike sorting of large datasets generated by simultaneous recording with thousands of electrodes. Copyright © 2016 Elsevier B.V. All rights reserved.
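    A minimal way to see how "modeling several delayed samples jointly" turns a convolutive mixture into an instantaneous one is to delay-embed the recording and run an ordinary ICA on the embedded matrix. The sketch below is an illustration of that principle, not the authors' cICA algorithm; the synthetic sources, filter taps and lag count are assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA

def delay_embed(X, lags=3):
    """Stack each channel with delayed copies so a convolutive mixture can be
    treated as a larger instantaneous one (relaxing instantaneity)."""
    n = X.shape[0]
    return np.hstack([X[lag:n - lags + 1 + lag] for lag in range(lags)])

rng = np.random.default_rng(1)
spikes = (rng.random((5000, 2)) > 0.995).astype(float)   # two sparse sources
taps = rng.normal(size=(3, 2, 4))                        # 3-tap mixing filters
x = sum(np.roll(spikes, k, axis=0) @ taps[k] for k in range(3))
x += 0.05 * rng.normal(size=x.shape)                     # sensor noise

emb = delay_embed(x, lags=3)                 # (4998, 12) embedded recording
ica = FastICA(n_components=6, random_state=0, max_iter=1000)
components = ica.fit_transform(emb)          # candidate unmixed spike trains
print(components.shape)
```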

  11. Invariant density analysis: modeling and analysis of the postural control system using Markov chains.

    Science.gov (United States)

    Hur, Pilwon; Shorter, K Alex; Mehta, Prashant G; Hsiao-Wecksler, Elizabeth T

    2012-04-01

    In this paper, a novel analysis technique, invariant density analysis (IDA), is introduced. IDA quantifies steady-state behavior of the postural control system using center of pressure (COP) data collected during quiet standing. IDA relies on the analysis of a reduced-order finite Markov model to characterize stochastic behavior observed during postural sway. Five IDA parameters characterize the model and offer physiological insight into the long-term dynamical behavior of the postural control system. Two studies were performed to demonstrate the efficacy of IDA. Study 1 showed that multiple short trials can be concatenated to create a dataset suitable for IDA. Study 2 demonstrated that IDA was effective at distinguishing age-related differences in postural control behavior between young, middle-aged, and older adults. These results suggest that the postural control system of young adults converges more quickly to their steady-state behavior while maintaining COP nearer an overall centroid than either the middle-aged or older adults. Additionally, larger entropy values for older adults indicate that their COP follows a more stochastic path, while smaller entropy values for young adults indicate a more deterministic path. These results illustrate the potential of IDA as a quantitative tool for the assessment of the quiet-standing postural control system.
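    The Markov-chain machinery behind IDA can be illustrated compactly: quantize the COP series into a finite set of states, estimate the transition matrix, and read off the invariant density and its entropy. The sketch below is a minimal stand-in, with the bin count and the simulated sway signal as assumptions rather than the authors' reduced-order model:

```python
import numpy as np

def invariant_density(cop, n_states=16):
    """Reduced-order Markov sketch of IDA: quantize a COP time series into
    states, estimate the transition matrix, and return its invariant density."""
    edges = np.quantile(cop, np.linspace(0, 1, n_states + 1))
    states = np.clip(np.searchsorted(edges, cop, side="right") - 1, 0, n_states - 1)
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1.0
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic
    w, v = np.linalg.eig(P.T)
    pi = np.abs(np.real(v[:, np.argmax(np.real(w))]))     # eigenvector at eigenvalue 1
    pi /= pi.sum()
    entropy = -np.sum(pi * np.log(pi + 1e-12))            # stochasticity of steady state
    return pi, entropy

rng = np.random.default_rng(2)
sway = np.abs(0.01 * np.cumsum(rng.normal(size=20000)) + rng.normal(size=20000))
pi, H = invariant_density(sway)
print(pi.round(3), round(H, 3))
```

    In this framing, a larger entropy of the invariant density corresponds to the more stochastic COP paths the study reports for older adults.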

  12. Introduction to applied statistical signal analysis guide to biomedical and electrical engineering applications

    CERN Document Server

    Shiavi, Richard

    2007-01-01

    Introduction to Applied Statistical Signal Analysis is designed for the experienced individual with a basic background in mathematics, science, and computing. With this background, the reader will coast through the practical introduction and move on to signal analysis techniques, commonly used in a broad range of engineering areas such as biomedical engineering, communications, geophysics, and speech. Introduction to Applied Statistical Signal Analysis intertwines theory and implementation with practical examples and exercises. Topics presented in detail include: mathematical

  13. Inclusive Elementary Classroom Teacher Knowledge of and Attitudes toward Applied Behavior Analysis and Autism Spectrum Disorder and Their Use of Applied Behavior Analysis

    Science.gov (United States)

    McCormick, Jennifer A.

    2011-01-01

    The purpose of this study was to examine inclusive elementary teacher knowledge and attitude toward Autism Spectrum Disorder (ASD) and applied behavior analysis (ABA) and their use of ABA. Furthermore, this study examined if knowledge and attitude predicted use of ABA. A survey was developed and administered through a web-based program. Of the…

  14. Spherical electron momentum density distribution and Bayesian analysis of the renormalization parameter in Li metal

    International Nuclear Information System (INIS)

    Dobrzynski, Ludwik

    2000-01-01

    The Bayesian analysis of the spherical part of the electron momentum density was carried out with the goal of finding the best estimate of the spherically averaged renormalization parameter, z, quantifying the discontinuity in the electron momentum density distribution in Li metal. Three models parametrizing the electron momentum density were considered and nuisance parameters were integrated out. The analysis shows that the most likely value of z following from the data of Sakurai et al. is in the range 0.45-0.50, while 0.55 is obtained for the data of Schuelke et al. In the maximum entropy reconstruction of the spherical part of the electron momentum density three different algorithms were used, and it is shown that all of them produce essentially the same results. The paper shows that accurate Compton scattering experiments are capable of bringing information on this very important Fermiological aspect of the electron gas in a metal. (author)

  15. Benchmarking lithium amide versus amine bonding by charge density and energy decomposition analysis arguments.

    Science.gov (United States)

    Engelhardt, Felix; Maaß, Christian; Andrada, Diego M; Herbst-Irmer, Regine; Stalke, Dietmar

    2018-03-28

    Lithium amides are versatile C-H metallation reagents with vast industrial demand because of their high basicity combined with their weak nucleophilicity, and they are applied in kilotons worldwide annually. The nuclearity of lithium amides, however, modifies and steers reactivity, regio- and stereo-selectivity and product diversification in organic syntheses. In this regard, it is vital to understand Li-N bonding, as it causes the aggregation of lithium amides to form cubes or ladders from the polar Li-N covalent metal amide bond along the ring stacking and laddering principle. Deaggregation, however, is governed more by the Li←N donor bond to form amine adducts. The geometry of the solid state structures already suggests that there are σ- and π-contributions to the covalent bond. To quantify the mutual influence, we investigated [{(Me₂NCH₂)₂(C₄H₂N)}Li]₂ (1) by means of experimental charge density calculations based on the quantum theory of atoms in molecules (QTAIM) and DFT calculations using energy decomposition analysis (EDA). This new approach allows for the grading of electrostatic Li⁺N⁻, covalent Li-N and donating Li←N bonding, and provides a way to modify traditional widely-used heuristic concepts such as the -I and +I inductive effects. The electron density ρ(r) and its second derivative, the Laplacian ∇²ρ(r), mirror the various types of bonding. Most remarkably, from the topological descriptors, there is no clear separation of the lithium amide bonds from the lithium amine donor bonds. The computed natural partial charge for lithium is only +0.58, indicating an optimal density supply from the four nitrogen atoms, while the Wiberg bond orders of about 0.14 au suggest very weak bonding. The interaction energy between the two pincer molecules, (C₄H₂N)₂²⁻, with the Li₂²⁺ moiety is very strong (ca. -628 kcal mol⁻¹), followed by the bond dissociation energy (-420.9 kcal mol⁻¹). Partitioning the interaction energy

  16. The Visualization and Analysis of POI Features under Network Space Supported by Kernel Density Estimation

    Directory of Open Access Journals (Sweden)

    YU Wenhao

    2015-01-01

    Full Text Available The distribution pattern and the distribution density of urban facility POIs are of great significance in the fields of infrastructure planning and urban spatial analysis. Kernel density estimation, which has usually been utilized for expressing these spatial characteristics, is superior to other density estimation methods (such as quadrat analysis or Voronoi-based methods) in that it considers the regional impact based on the first law of geography. However, traditional kernel density estimation is based mainly on Euclidean space, ignoring the fact that the service function and interrelation of urban facilities operate over network path distances rather than conventional Euclidean distances. Hence, this research proposes a computational model for network kernel density estimation, together with an extension of the model for the case of added constraints. This work also discusses the impact of the distance attenuation threshold and the kernel height extremum on the representation of kernel density. A large-scale experiment on actual data, analyzing different POI distribution patterns (random, sparse, regional-intensive and linear-intensive types), examines the spatial distribution characteristics, influencing factors and service functions of POI infrastructure in the city.
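    For readers who want to experiment, the planar baseline that the network method generalizes is a standard Gaussian KDE; the network version replaces the Euclidean kernel below with one evaluated over street-network path distances. A minimal planar sketch with SciPy, using toy POI coordinates (assumed, not the paper's data):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Toy POI coordinates in projected metres: one regional-intensive cluster
# plus random background scatter.
rng = np.random.default_rng(3)
poi = np.vstack([rng.normal(0, 200, (300, 2)),
                 rng.uniform(-800, 800, (100, 2))])

kde = gaussian_kde(poi.T)  # bandwidth chosen by Scott's rule by default
xx, yy = np.meshgrid(np.linspace(-800, 800, 100), np.linspace(-800, 800, 100))
dens = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
cell = (1600 / 99) ** 2
print(dens.max(), dens.sum() * cell)  # peak density; total mass is ≈ 1
```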

  17. A joint probability density function of wind speed and direction for wind energy analysis

    International Nuclear Information System (INIS)

    Carta, Jose A.; Ramirez, Penelope; Bueno, Celia

    2008-01-01

    A very flexible joint probability density function of wind speed and direction is presented in this paper for use in wind energy analysis. A method that enables angular-linear distributions to be obtained with specified marginal distributions has been used for this purpose. For the marginal distribution of wind speed we use a Normal-Weibull mixture distribution singly truncated from below. The marginal distribution of wind direction comprises a finite mixture of von Mises distributions. The proposed model is applied in this paper to hourly wind direction and wind speed data recorded at several weather stations located in the Canary Islands (Spain). The suitability of the distributions is judged from the coefficient of determination R². The conclusions reached are that the joint distribution proposed in this paper: (a) can represent unimodal, bimodal and bitangential wind speed frequency distributions, (b) takes into account the frequency of null winds, (c) represents the wind direction regimes in zones with several modes or prevailing wind directions, (d) takes into account the correlation between wind speed and direction. It can therefore be used in several tasks involved in the evaluation process of the wind resources available at a potential site. We also conclude that, in the case of the Canary Islands, the proposed model provides better fits in all the cases analysed than those obtained with the models used in the specialised literature on wind energy.
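    The flavor of the model can be sketched by evaluating the two marginals: a Weibull for speed (standing in for the paper's truncated Normal-Weibull mixture) and a von Mises mixture for direction. The independent product below is a deliberate simplification; the paper instead couples the marginals through an angular-linear construction to capture speed-direction correlation. All parameter values are assumptions:

```python
import numpy as np
from scipy.stats import weibull_min, vonmises

# Marginal of speed: a plain Weibull stands in for the paper's singly
# truncated Normal-Weibull mixture.
def speed_pdf(v):
    return weibull_min.pdf(v, c=2.0, scale=8.0)

# Marginal of direction: two-component von Mises mixture (two prevailing winds)
def direction_pdf(theta):
    return (0.6 * vonmises.pdf(theta, kappa=4.0, loc=np.pi / 4)
            + 0.4 * vonmises.pdf(theta, kappa=2.0, loc=np.pi))

# Independent product only; the full model adds an angular-linear
# dependence term linking speed and direction.
def joint_pdf(v, theta):
    return speed_pdf(v) * direction_pdf(theta)

print(joint_pdf(10.0, np.pi / 4))
```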

  18. Bulk density estimation using a 3-dimensional image acquisition and analysis system

    Directory of Open Access Journals (Sweden)

    Heyduk Adam

    2016-01-01

    Full Text Available The paper presents a concept of dynamic bulk density estimation of a particulate matter stream using a 3-d image analysis system and a conveyor belt scale. A method of image acquisition should be adjusted to the type of scale. The paper presents some laboratory results of static bulk density measurements using the MS Kinect time-of-flight camera and OpenCV/Matlab software. Measurements were made for several different size classes.

  19. Independent principal component analysis for simulation of soil water content and bulk density in a Canadian Watershed

    Directory of Open Access Journals (Sweden)

    Alaba Boluwade

    2016-09-01

    Full Text Available Accurate characterization of soil properties such as soil water content (SWC) and bulk density (BD) is vital for hydrologic processes, and it is therefore important to estimate θ (water content) and ρ (soil bulk density) among the soil surface parameters involved in water retention and infiltration, runoff generation, water erosion, etc. The spatial estimation of these soil properties is important in guiding agricultural management decisions. These soil properties vary both in space and time and are correlated. Therefore, it is important to find an efficient and robust technique to simulate spatially correlated variables. Methods such as principal component analysis (PCA) and independent component analysis (ICA) can be used for the joint simulation of spatially correlated variables, but they are not without their flaws. This study applied a variant of PCA called independent principal component analysis (IPCA), which combines the strengths of both PCA and ICA, to the spatial simulation of SWC and BD using the soil data set from an 11 km² Castor watershed in southern Quebec, Canada. Diagnostic checks using the histograms and cumulative distribution functions (cdf) of both raw and back-transformed simulations show good agreement. The results from this study therefore have potential in the characterization of water content variability and bulk density variation for precision agriculture.

  20. Tapped density optimisation for four agricultural wastes - Part II: Performance analysis and Taguchi-Pareto

    Directory of Open Access Journals (Sweden)

    Ajibade Oluwaseyi Ayodele

    2016-01-01

    Full Text Available In this attempt, which is the second part of a discussion on tapped density optimisation for four agricultural wastes (particles of coconut, periwinkle, palm kernel and egg shells), a performance analysis is made on a comparative basis. This paper pioneers a study direction in which optimisation of process variables is pursued using the Taguchi method integrated with the Pareto 80-20 rule. Negative percentage improvements resulted when the optimal tapped density was compared with the average tapped density. However, the performance analysis between the optimal tapped density and the peak tapped density values yielded positive percentage improvements for the four filler particles. The performance analysis results validate the effectiveness of using the Taguchi method in improving the tapped density properties of the filler particles. Applying the Pareto 80-20 rule to the table of parameters and levels produced revised tables of parameters and levels which helped to identify the factor-level position of each parameter that is economical to optimality. The Pareto 80-20 rule also produced revised S/N response tables which were used to identify the S/N ratios relevant to optimality.
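    The Taguchi step rests on a signal-to-noise ratio; for a property to be maximized, such as tapped density, the larger-is-better form applies. A minimal computation follows (the replicate values are hypothetical, not the study's measurements):

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-is-better S/N ratio: S/N = -10*log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical tapped-density replicates (g/cm^3) for two factor-level trials
print(sn_larger_is_better([0.62, 0.65, 0.63]))
print(sn_larger_is_better([0.70, 0.69, 0.72]))  # denser packing -> larger S/N
```

    Averaging such S/N values by factor level yields the response tables to which the Pareto 80-20 screening described above is applied.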

  1. Non-regularized inversion method from light scattering applied to ferrofluid magnetization curves for magnetic size distribution analysis

    International Nuclear Information System (INIS)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.

    2014-01-01

    A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
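    The inversion can be reproduced in outline with SciPy's non-negative least squares: build a kernel matrix of Langevin magnetization curves for a grid of candidate dipole moments and solve for the non-negative weights. The sketch below uses synthetic bimodal data and assumed units and grids; it is not the MINORIM code:

```python
import numpy as np
from scipy.optimize import nnls

def langevin(x):
    return 1.0 / np.tanh(x) - 1.0 / x

kT = 1.380649e-23 * 295.0              # thermal energy at 295 K (J)
mu0 = 4e-7 * np.pi
H = np.linspace(1e3, 8e5, 60)          # applied field grid (A/m)
m = np.logspace(-20, -18, 40)          # candidate dipole moments (A m^2)

# Kernel column j: magnetization curve of particles with moment m_j
A = langevin(mu0 * np.outer(H, m) / kT) * m

# Synthetic bimodal moment distribution and its noisy magnetization curve
true = (np.exp(-0.5 * ((np.log(m) - np.log(3e-20)) / 0.30) ** 2)
        + 0.5 * np.exp(-0.5 * ((np.log(m) - np.log(4e-19)) / 0.25) ** 2))
b = A @ true + 1e-21 * np.random.default_rng(4).normal(size=H.size)

weights, resid = nnls(A, b)            # non-negative number densities
print(resid, int((weights > 0).sum()))
```

    The non-negativity constraint alone regularizes the problem, which is why no unimodal or fixed-shape assumption on the distribution is needed.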

  2. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    Science.gov (United States)

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

    With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. Then a sequential Gaussian co-simulation algorithm, with and without the fraction images from spectral mixture analysis, was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, yielding a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density to 81.5% from 74.1%, and decreased the RMSE to 5.18 from 7.26; and 3) the mean forest carbon density for the province was 30.06 t · hm(-2), ranging from 0.00 to 67.35 t · hm(-2). This implies that spectral mixture analysis provides great potential for increasing the estimation accuracy of forest carbon density at the regional and global levels.
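    The constrained unmixing step can be emulated with a sum-to-one-augmented non-negative least squares system, a common trick for fully constrained spectral mixture analysis. The endmember spectra below are toy values, not MODIS measurements:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(E, pixel, weight=1e3):
    """Fully constrained linear unmixing: non-negative class fractions that
    (softly) sum to one, via NNLS on a sum-to-one-augmented system."""
    n_classes = E.shape[1]
    A = np.vstack([E, weight * np.ones((1, n_classes))])
    b = np.append(pixel, weight)
    f, _ = nnls(A, b)
    return f

# Toy endmember spectra (rows: bands; columns: forest, cropland, water)
E = np.array([[0.05, 0.10, 0.02],
              [0.30, 0.45, 0.03],
              [0.25, 0.30, 0.02],
              [0.15, 0.35, 0.01]])
pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]   # mixed MODIS-like pixel
print(unmix(E, pixel).round(3))                          # ≈ [0.6, 0.3, 0.1]
```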

  3. Research in progress in applied mathematics, numerical analysis, and computer science

    Science.gov (United States)

    1990-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science is summarized. The Institute conducts unclassified basic research in applied mathematics in order to extend and improve problem solving capabilities in science and engineering, particularly in aeronautics and space.

  4. Model Proposition for the Fiscal Policies Analysis Applied in Economic Field

    Directory of Open Access Journals (Sweden)

    Larisa Preda

    2007-05-01

    Full Text Available This paper presents a study of fiscal policy as applied to economic development. Correlations between macroeconomic and fiscal indicators constitute the first step in our analysis. The next step is the proposal of a new model for fiscal and budgetary choices. This model is applied to the data of the Romanian case.

  5. Advantages and Drawbacks of Applying Periodic Time-Variant Modal Analysis to Spur Gear Dynamics

    DEFF Research Database (Denmark)

    Pedersen, Rune; Santos, Ilmar; Hede, Ivan Arthur

    2010-01-01

    to ensure sufficient accuracy of the results. The method of time-variant modal analysis is applied, and the changes in the fundamental and the parametric resonance frequencies as a function of the rotational speed of the gears, are found. By obtaining the stationary and parametric parts of the time...... of applying the methodology to wind turbine gearboxes are addressed and elucidated....

  6. A Si IV/O IV Electron Density Diagnostic for the Analysis of IRIS Solar Spectra

    Science.gov (United States)

    Young, P. R.; Keenan, F. P.; Milligan, R. O.; Peter, H.

    2018-04-01

    Solar spectra of ultraviolet bursts and flare ribbons from the Interface Region Imaging Spectrograph (IRIS) have suggested high electron densities of >10¹² cm⁻³ at transition region temperatures of 0.1 MK, based on large intensity ratios of Si IV λ1402.77 to O IV λ1401.16. In this work, a rare observation of the weak O IV λ1343.51 line is reported from an X-class flare that peaked at 21:41 UT on 2014 October 24. This line is used to develop a theoretical prediction of the Si IV λ1402.77 to O IV λ1401.16 ratio as a function of density that is recommended for use in the high-density regime. The method makes use of new pressure-dependent ionization fractions that take account of the suppression of dielectronic recombination at high densities. It is applied to two sequences of flare kernel observations from the October 24 flare. The first shows densities that vary between 3×10¹² and 3×10¹³ cm⁻³ over a seven-minute period, while the second location shows stable density values of around 2×10¹² cm⁻³ over a three-minute period.

  7. Applied behavior analysis: understanding and changing behavior in the community-a representative review.

    Science.gov (United States)

    Luyben, Paul D

    2009-01-01

    Applied behavior analysis, a psychological discipline, has been characterized as the science of behavior change (Chance, 2006). Research in applied behavior analysis has been published for approximately 40 years since the initial publication of the Journal of Applied Behavior Analysis in 1968. The field now encompasses a wide range of human behavior. Although much of the published research centers on problem behaviors that occur in schools and among people with disabilities, a substantial body of knowledge has emerged in community settings. This article provides a review of the behavioral community research published in the Journal of Applied Behavior Analysis as representative of this work, including research in the areas of home and family, health, safety, community involvement and the environment, recreation and sports, crime and delinquency, and organizations. In the interest of space, research in schools and with people with disabilities has been excluded from this review.

  8. Electron density analysis of 1-butyl-3-methylimidazolium chloride ionic liquid.

    Science.gov (United States)

    del Olmo, Lourdes; Morera-Boado, Cercis; López, Rafael; García de la Vega, José M

    2014-06-01

    An analysis of the electron density of different conformers of the 1-butyl-3-methylimidazolium chloride (bmimCl) ionic liquid by using DFT through the BVP86 density functional has been obtained within the framework of Bader's atom in molecules (AIM), localized orbital locator (LOL), natural bond orbital (NBO), and deformed atoms in molecules (DAM). We also present an analysis of the reduced density gradients that deliver the non-covalent interaction regions and allow to understand the nature of intermolecular interactions. The most polar conformer can be characterized as ionic by AIM, LOL, and DAM methods while the most stable and the least polar shows shared-type interactions. The NBO method allows to comprehend what causes the stabilization of the most stable conformer based on analysis of the second-order perturbative energy and the charge transferred among the natural orbitals involved in the interaction.

  9. Seismic analysis of a NPP reactor building using spectrum-compatible power spectral density functions

    International Nuclear Information System (INIS)

    Venancio Filho, F.; DeCarvalho Santos, S.H.; Joia, L.A.

    1987-01-01

    A numerical methodology to obtain Power Spectral Density Functions (PSDF) of ground accelerations compatible with a given design response spectrum is presented. The PSDFs are derived from the statistical analysis of the amplitudes of the frequency components in a set of artificially generated time-histories matching the given spectrum. The PSDF so obtained is then used in the stochastic analysis of an NPP reactor building. The main results of this analysis are compared with the ones obtained by deterministic methods.

  10. Seismic analysis of a NPP reactor building using spectrum-compatible power spectral density functions

    International Nuclear Information System (INIS)

    Venancio Filho, F.; Joia, L.A.

    1987-01-01

    A numerical methodology to obtain Power Spectral Density Functions (PSDF) of ground accelerations compatible with a given design response spectrum is presented. The PSDFs are derived from the statistical analysis of the amplitudes of the frequency components in a set of artificially generated time-histories matching the given spectrum. The PSDF so obtained is then used in the stochastic analysis of a reactor building. The main results of this analysis are compared with the ones obtained by deterministic methods. (orig./HP)

  11. Breast Density and Risk of Breast Cancer in Asian Women: A Meta-analysis of Observational Studies.

    Science.gov (United States)

    Bae, Jong-Myon; Kim, Eun Hee

    2016-11-01

    The established theory that breast density is an independent predictor of breast cancer risk is based on studies targeting white women in the West. More Asian women than Western women have dense breasts, but the incidence of breast cancer is lower among Asian women. This meta-analysis investigated the association between breast density in mammography and breast cancer risk in Asian women. PubMed and Scopus were searched, and the final date of publication was set as December 31, 2015. The effect size in each article was calculated using the interval-collapse method. Summary effect sizes (sESs) and 95% confidence intervals (CIs) were calculated by conducting a meta-analysis applying a random effect model. To investigate the dose-response relationship, random effect dose-response meta-regression (RE-DRMR) was conducted. Six analytical epidemiology studies in total were selected, including one cohort study and five case-control studies. A total of 17 datasets were constructed by type of breast density index and menopausal status. In analyzing the subgroups of premenopausal vs. postmenopausal women, the percent density (PD) index was confirmed to be associated with a significantly elevated risk for breast cancer (sES, 2.21; 95% CI, 1.52 to 3.21; I² = 50.0%). The RE-DRMR results showed that the risk of breast cancer increased 1.73 times for each 25% increase in PD in postmenopausal women (95% CI, 1.20 to 2.47). In Asian women, breast cancer risk increased with breast density measured using the PD index, regardless of menopausal status. We propose the further development of a breast cancer risk prediction model based on the application of PD in Asian women.
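    A random-effects pooling of study-level effects, as used here, is commonly computed with the DerSimonian-Laird estimator. The sketch below uses hypothetical log odds ratios and variances, not the reviewed studies' data, and shows the essential steps:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimate."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q
    df = y.size - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                 # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (Q - df) / Q) if Q > 0 else 0.0
    return pooled, se, i2

# Hypothetical log odds ratios and variances (NOT the studies reviewed above)
log_or = np.log([2.0, 1.6, 2.8, 2.4, 1.9])
var = np.array([0.05, 0.08, 0.12, 0.06, 0.09])
est, se, i2 = dersimonian_laird(log_or, var)
print(np.exp([est, est - 1.96 * se, est + 1.96 * se]).round(2), round(i2, 1))
```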

  12. Tracing the Fingerprint of Chemical Bonds within the Electron Densities of Hydrocarbons: A Comparative Analysis of the Optimized and the Promolecule Densities.

    Science.gov (United States)

    Keyvani, Zahra Alimohammadi; Shahbazian, Shant; Zahedi, Mansour

    2016-10-18

    The equivalence of the molecular graphs emerging from the comparative analysis of the optimized and the promolecule electron densities in two hundred and twenty five unsubstituted hydrocarbons was recently demonstrated [Keyvani et al. Chem. Eur. J. 2016, 22, 5003]. Thus, the molecular graph of an optimized molecular electron density is not shaped by the formation of the C-H and C-C bonds. In the present study, to trace the fingerprint of the C-H and C-C bonds in the electron densities of the same set of hydrocarbons, the amount of electron density and its Laplacian at the (3, -1) critical points associated with these bonds are derived from both optimized and promolecule densities, and compared in a newly proposed comparative analysis. The analysis not only conforms to the qualitative picture of the electron density build up between two atoms upon formation of a bond in between, but also quantifies the resulting accumulation of the electron density at the (3, -1) critical points. The comparative analysis also reveals a unified mode of density accumulation in the case of 2318 studied C-H bonds, but various modes of density accumulation are observed in the case of 1509 studied C-C bonds and they are classified into four groups. The four emerging groups do not always conform to the traditional classification based on the bond orders. Furthermore, four C-C bonds described as exotic bonds in previous studies, for example the inverted C-C bond in 1,1,1-propellane, are naturally distinguished from the analysis. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Sensitivity analysis of crustal correction for calculation of lithospheric mantle density from gravity data

    DEFF Research Database (Denmark)

    Herceg, Matija; Artemieva, Irina; Thybo, Hans

    2016-01-01

    We investigate how uncertainties in the seismic and density structure of the crust propagate into uncertainties in mantle density structure. The analysis is based on interpretation of residual upper-mantle gravity anomalies, which are calculated by subtracting (stripping) the gravitational effect of the crust. Uncertainties in the residual upper-mantle gravity anomalies result chiefly from uncertainties in (i) the seismic VP velocity-density conversion for the crust and (ii) the seismic crustal structure (thickness and average VP velocities of individual crustal layers, including the sedimentary cover). We examine the propagation of these uncertainties into determinations of lithospheric mantle density and analyse both sources of possible error; knowledge of the uncertainties associated with incomplete information on crustal structure is of utmost importance for progress in gravity modelling.

  14. Genetic analysis of male reproductive success in relation to density in the zebrafish, Danio rerio

    Directory of Open Access Journals (Sweden)

    Jordan William C

    2006-04-01

    Full Text Available Abstract Background: We used behavioural and genetic data to investigate the effects of density on male reproductive success in the zebrafish, Danio rerio. Based on previous measurements of aggression and courtship behaviour by territorial males, we predicted that they would sire more offspring than non-territorial males. Results: Microsatellite analysis of paternity showed that at low densities territorial males had higher reproductive success than non-territorial males. However, at high density territorial males were no more successful than non-territorials, and the sex difference in the opportunity for sexual selection, based on the parameter I_mates, was low. Conclusion: Male zebrafish exhibit two distinct mating tactics: territoriality and active pursuit of females. Male reproductive success is density dependent and the opportunity for sexual selection appears to be weak in this species.

  15. Error Analysis of a Fractional Time-Stepping Technique for Incompressible Flows with Variable Density

    KAUST Repository

    Guermond, J.-L.; Salgado, Abner J.

    2011-01-01

    In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.

  16. Thermodynamic analysis of energy density in pressure retarded osmosis: The impact of solution volumes and costs

    International Nuclear Information System (INIS)

    Reimund, Kevin K.

    2015-01-01

    A general method was developed for estimating the volumetric energy efficiency of pressure retarded osmosis via pressure-volume analysis of a membrane process. The resulting model requires only the osmotic pressure, π, and mass fraction, w, of water in the concentrated and dilute feed solutions to estimate the maximum achievable specific energy density, u, as a function of operating pressure. The model is independent of any membrane or module properties. This method utilizes equilibrium analysis to specify the volumetric mixing fraction of concentrated and dilute solution as a function of operating pressure, and provides results for the total volumetric energy density of similar order to more complex models for the mixing of seawater and river water. Within the framework of this analysis, the total volumetric energy density is maximized, for an idealized case, when the operating pressure is π/(1+√w⁻¹), which is lower than the maximum power density operating pressure, Δπ/2, derived elsewhere, and is a function of the solute osmotic pressure at a given mass fraction. It was also found that a minimum of 1.45 kmol of ideal solute is required to produce 1 kWh of energy, while a system operating at the "maximum power density operating pressure" requires at least 2.9 kmol. Utilizing this methodology, it is possible to examine the effects of volumetric solution cost, operation of a module at various pressures, and operation of a constant pressure module with various feeds.

  17. Thermodynamic analysis of energy density in pressure retarded osmosis: The impact of solution volumes and costs

    Energy Technology Data Exchange (ETDEWEB)

    Reimund, Kevin K. [Univ. of Connecticut, Storrs, CT (United States). Dept. of Chemical and Biomolecular Engineering; McCutcheon, Jeffrey R. [Univ. of Connecticut, Storrs, CT (United States). Dept. of Chemical and Biomolecular Engineering; Wilson, Aaron D. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-08-01

    A general method was developed for estimating the volumetric energy efficiency of pressure retarded osmosis via pressure-volume analysis of a membrane process. The resulting model requires only the osmotic pressure, π, and mass fraction, w, of water in the concentrated and dilute feed solutions to estimate the maximum achievable specific energy density, u, as a function of operating pressure. The model is independent of any membrane or module properties. This method utilizes equilibrium analysis to specify the volumetric mixing fraction of concentrated and dilute solution as a function of operating pressure, and provides results for the total volumetric energy density of similar order to more complex models for the mixing of seawater and river water. Within the framework of this analysis, the total volumetric energy density is maximized, for an idealized case, when the operating pressure is π/(1+√w⁻¹), which is lower than the maximum power density operating pressure, Δπ/2, derived elsewhere, and is a function of the solute osmotic pressure at a given mass fraction. It was also found that a minimum of 1.45 kmol of ideal solute is required to produce 1 kWh of energy, while a system operating at the “maximum power density operating pressure” requires at least 2.9 kmol. Utilizing this methodology, it is possible to examine the effects of volumetric solution cost, operation of a module at various pressures, and operation of a constant pressure module with various feeds.
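    The 1.45 kmol figure can be sanity-checked from the ideal limit in which each mole of solute delivers at most roughly RT of mixing work at ambient temperature (an assumption of this back-of-envelope check, not a restatement of the authors' derivation):

```python
# Back-of-envelope check of the 1.45 kmol-per-kWh figure, assuming the ideal
# limit in which each mole of solute can deliver at most ~RT of mixing work.
R, T = 8.314, 298.15                    # J/(mol K); ambient temperature in K
kwh_in_joules = 3.6e6
print(kwh_in_joules / (R * T) / 1000)   # ≈ 1.45 kmol of ideal solute per kWh
```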

  18. Critical Analysis of a Website: A Critique based on Critical Applied Linguistics and Critical Discourse Analysis

    Directory of Open Access Journals (Sweden)

    Rina Agustina

    2013-05-01

    Full Text Available E-learning is easily found through browsing the internet, is mostly free of charge, and provides various learning materials. Spellingcity.com is one e-learning website for teaching and learning English spelling, vocabulary and writing, which offers various games and activities for young learners, 6 to 8 year old learners in particular. This paper aimed to analyse the website from two different views: (1) critical applied linguistics (CAL) and (2) critical discourse analysis (CDA). After analysing the website using CAL and CDA, it was found that the website is adequate for beginners, in that it provides fun learning through games and challenges learners to test their vocabulary. Despite these strengths, several issues require further thought in terms of learners' broad knowledge; for example, some of the learning materials focus on states in America, which is quite difficult for EFL learners who lack adequate general knowledge. Thus, the findings imply that the website can be used as supporting learning material, accompanying textbooks and vocabulary exercise books.

  19. Applied Behavior Analysis: Its Impact on the Treatment of Mentally Retarded Emotionally Disturbed People.

    Science.gov (United States)

    Matson, Johnny L.; Coe, David A.

    1992-01-01

    This article reviews applications of the applied behavior analysis ideas of B. F. Skinner and others to persons with both mental retardation and emotional disturbance. The review examines implications of behavior analysis for operant conditioning and radical behaviorism, schedules of reinforcement, and emotion and mental illness. (DB)

  20. An Objective Comparison of Applied Behavior Analysis and Organizational Behavior Management Research

    Science.gov (United States)

    Culig, Kathryn M.; Dickinson, Alyce M.; McGee, Heather M.; Austin, John

    2005-01-01

    This paper presents an objective review, analysis, and comparison of empirical studies targeting the behavior of adults published in Journal of Applied Behavior Analysis (JABA) and Journal of Organizational Behavior Management (JOBM) between 1997 and 2001. The purpose of the comparisons was to identify similarities and differences with respect to…

  1. Sociosexuality Education for Persons with Autism Spectrum Disorders Using Principles of Applied Behavior Analysis

    Science.gov (United States)

    Wolfe, Pamela S.; Condo, Bethany; Hardaway, Emily

    2009-01-01

    Applied behavior analysis (ABA) has emerged as one of the most effective empirically based strategies for instructing individuals with autism spectrum disorders (ASD). Four ABA-based strategies that have been found effective are video modeling, visual strategies, social script fading, and task analysis. Individuals with ASD often struggle with…

  2. Applying Fuzzy and Probabilistic Uncertainty Concepts to the Material Flow Analysis of Palladium in Austria

    DEFF Research Database (Denmark)

    Laner, David; Rechberger, Helmut; Astrup, Thomas Fruergaard

    2015-01-01

    Material flow analysis (MFA) is a widely applied tool to investigate resource and recycling systems of metals and minerals. Owing to data limitations and restricted system understanding, MFA results are inherently uncertain. To demonstrate the systematic implementation of uncertainty analysis in ...

  3. [Particle Size and Number Density Online Analysis for Particle Suspension with Polarization-Differentiation Elastic Light Scattering Spectroscopy].

    Science.gov (United States)

    Chen, Wei-kang; Fang, Hui

    2016-03-01

    The basic principle of polarization-differentiation elastic light scattering spectroscopy-based techniques is that, under linearly polarized incident light, the singly scattered light from the superficial biological tissue and the diffusively scattered light from the deep tissue can be separated according to the difference in polarization characteristics. The novel point of the paper is to apply this method to the detection of particle suspensions and to realize the simultaneous measurement of particle size and number density in their natural state. We design and build a coaxial cage optical system, and measure the backscatter signal at a specified angle from a polystyrene microsphere suspension. By controlling the polarization direction of the incident light with a linear polarizer and adjusting the polarization direction of the collected light with another linear polarizer, we obtain the parallel polarized elastic light scattering spectrum and the cross polarized elastic light scattering spectrum. The difference between the two is the differential polarized elastic light scattering spectrum, which includes only the single scattering information of the particles. We thus compare this spectrum to the Mie scattering calculation and extract the particle size. We then also analyze the cross polarized elastic light scattering spectrum by applying the particle size already extracted. The analysis is based on approximate expressions taking account of light diffusion, from which we are able to obtain the number density of the particle suspension. We compare our experimental outcomes with the manufacturer-provided values and further analyze the influence of the particle diameter standard deviation on the number density extraction, by which we finally verify the experimental method. The potential applications of the method include on-line particle quality monitoring in particle manufacture as well as fat and protein density detection in milk products.

  4. Ethnic density effects for adult mental health: systematic review and meta-analysis of international studies.

    Science.gov (United States)

    Bécares, Laia; Dewey, Michael E; Das-Munshi, Jayati

    2017-12-14

    Despite increased ethnic diversity in more economically developed countries it is unclear whether residential concentration of ethnic minority people (ethnic density) is detrimental or protective for mental health. This is the first systematic review and meta-analysis covering the international literature, assessing ethnic density associations with mental health outcomes. We systematically searched Medline, PsychINFO, Sociological Abstracts, Web of Science from inception to 31 March 2016. We obtained additional data from study authors. We conducted random-effects meta-analysis taking into account clustering of estimates within datasets. Meta-regression assessed heterogeneity in studies due to ethnicity, country, generation, and area-level deprivation. Our main exposure was ethnic density, defined as the residential concentration of own racial/ethnic minority group. Outcomes included depression, anxiety and the common mental disorders (CMD), suicide, suicidality, psychotic experiences, and psychosis. We included 41 studies in the review, with meta-analysis of 12 studies. In the meta-analyses, we found a large reduction in relative odds of psychotic experiences [odds ratio (OR) 0.82 (95% confidence interval (CI) 0.76-0.89)] and suicidal ideation [OR 0.88 (95% CI 0.79-0.98)] for each 10 percentage-point increase in own ethnic density. For CMD, depression, and anxiety, associations were indicative of protective effects of own ethnic density; however, results were not statistically significant. Findings from narrative review were consistent with those of the meta-analysis. The findings support consistent protective ethnic density associations across countries and racial/ethnic minority populations as well as mental health outcomes. This may suggest the importance of the social environment in patterning detrimental mental health outcomes in marginalized and excluded population groups.

  5. A practical guide to propensity score analysis for applied clinical research.

    Science.gov (United States)

    Lee, Jaehoon; Little, Todd D

    2017-11-01

    Observational studies are often the only viable options in many clinical settings, especially when it is unethical or infeasible to randomly assign participants to different treatment régimes. In such cases propensity score (PS) analysis can be applied to account for possible selection bias and thereby address questions of causal inference. Many PS methods exist, yet few guidelines are available to aid applied researchers in their conduct and evaluation of a PS analysis. In this article we give an overview of available techniques for PS estimation and application, balance diagnostics, treatment effect estimation, and sensitivity assessment, as well as recent advances. We also offer a tutorial that can be used to emulate the steps of PS analysis. Our goal is to provide information that will bring PS analysis within the reach of applied clinical researchers and practitioners. Copyright © 2017 Elsevier Ltd. All rights reserved.
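    A minimal end-to-end PS workflow, estimate propensity scores with logistic regression, trim extremes, and form an inverse-probability-weighted treatment effect, can be sketched as follows. The data are simulated; a real analysis would add the balance diagnostics and sensitivity checks the article describes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 3))                                      # confounders
t = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.8, -0.5, 0.3]))))   # selection
y = 2.0 * t + X @ [1.0, 1.0, -0.5] + rng.normal(size=n)          # true effect: 2

# 1) Estimate propensity scores e(x) = P(T = 1 | X) and trim extremes.
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
ps = np.clip(ps, 0.01, 0.99)

# 2) Inverse-probability-weighted estimate of the average treatment effect.
ate_ipw = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
naive = y[t == 1].mean() - y[t == 0].mean()   # confounded, biased upward here
print(round(ate_ipw, 2), round(naive, 2))     # IPW estimate ≈ 2
```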

  6. Thermodynamic, energy efficiency, and power density analysis of reverse electrodialysis power generation with natural salinity gradients

    NARCIS (Netherlands)

    Yip, N.Y.; Vermaas, D.A.; Nijmeijer, K.; Elimelech, M.

    2014-01-01

    Reverse electrodialysis (RED) can harness the Gibbs free energy of mixing when fresh river water flows into the sea for sustainable power generation. In this study, we carry out a thermodynamic and energy efficiency analysis of RED power generation, and assess the membrane power density. First, we

  7. Alternative definitions of the frozen energy in energy decomposition analysis of density functional theory calculations.

    Science.gov (United States)

    Horn, Paul R; Head-Gordon, Martin

    2016-02-28

    In energy decomposition analysis (EDA) of intermolecular interactions calculated via density functional theory, the initial supersystem wavefunction defines the so-called "frozen energy" including contributions such as permanent electrostatics, steric repulsions, and dispersion. This work explores the consequences of the choices that must be made to define the frozen energy. The critical choice is whether the energy should be minimized subject to the constraint of fixed density. Numerical results for Ne2, (H2O)2, BH3-NH3, and ethane dissociation show that there can be a large energy lowering associated with constant density orbital relaxation. By far the most important contribution is constant density inter-fragment relaxation, corresponding to charge transfer (CT). This is unwanted in an EDA that attempts to separate CT effects, but it may be useful in other contexts such as force field development. An algorithm is presented for minimizing single determinant energies at constant density both with and without CT by employing a penalty function that approximately enforces the density constraint.

  8. Path analysis of the energy density of wood in eucalyptus clones.

    Science.gov (United States)

    Couto, A M; Teodoro, P E; Trugilho, P F

    2017-03-16

    Path analysis has been used to establish selection criteria in genetic breeding programs for several crops. However, it has not yet been used in eucalyptus breeding programs. In the present study, we aimed to identify the wood technology traits that could be used as criteria for direct and indirect selection of eucalyptus genotypes with high energy density of wood. Twenty-four eucalyptus clones were evaluated in a completely randomized design with five replications. The following traits were assessed: basic wood density, total extractives, lignin content, ash content, nitrogen content, carbon content, hydrogen content, sulfur content, oxygen content, higher calorific power, holocellulose, and energy density. After verifying the variability of all evaluated traits among the clones, a two-dimensional correlation network was used to determine the phenotypic patterns among them. The coefficient of determination obtained (0.94) was large relative to the effect of the residual variable, and the model explained well the genetic effects related to the variation observed in the energy density of wood across the eucalyptus clones. For future studies, however, we recommend evaluating other traits, especially morphological traits, because of the greater ease of their measurement. Selecting clones with high basic density is the most promising strategy for eucalyptus breeding programs that aim to increase the energy density of wood, because of its high heritability and the magnitude of its cause-and-effect relationship with this trait.
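    In standardized form, path analysis reduces to solving the normal equations: the direct effects p satisfy R_xx p = r_xy, and the indirect effect of trait i transmitted through trait j is r_ij p_j. A compact sketch with hypothetical correlations (not the study's estimates):

```python
import numpy as np

def path_coefficients(R_xx, r_xy):
    """Standardized path analysis: direct effects p solve R_xx @ p = r_xy;
    entry (i, j) of `indirect` is the effect of trait i routed via trait j."""
    p = np.linalg.solve(R_xx, r_xy)
    indirect = R_xx * p               # element (i, j) = r_ij * p_j
    np.fill_diagonal(indirect, 0.0)   # keep only effects through other traits
    return p, indirect

# Hypothetical correlations among basic density, lignin content and higher
# calorific power (X) and their correlations with energy density of wood (y).
R_xx = np.array([[1.00, 0.35, 0.20],
                 [0.35, 1.00, 0.35],
                 [0.20, 0.35, 1.00]])
r_xy = np.array([0.85, 0.45, 0.40])
direct, indirect = path_coefficients(R_xx, r_xy)
print(direct.round(3))    # direct effects of each trait on energy density
print(indirect.round(3))  # indirect effects transmitted through other traits
```

    By construction, each direct effect plus the row sum of its indirect effects reproduces the corresponding total correlation with the target trait.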

  9. Uncertainty propagation analysis applied to volcanic ash dispersal at Mt. Etna by using a Lagrangian model

    Science.gov (United States)

    de'Michieli Vitturi, Mattia; Pardini, Federica; Spanu, Antonio; Neri, Augusto; Vittoria Salvetti, Maria

    2015-04-01

    Volcanic ash clouds represent a major hazard for populations living near volcanic centers, producing a risk for humans and a potential threat to crops, ground infrastructures, and aviation traffic. Lagrangian particle dispersal models are commonly used for tracking ash particles emitted from volcanic plumes and transported under the action of atmospheric wind fields. In this work, we present the results of an uncertainty propagation analysis applied to volcanic ash dispersal from weak plumes with specific focus on the uncertainties related to the grain-size distribution of the mixture. To this aim, the Eulerian fully compressible mesoscale non-hydrostatic model WRF was used to generate the driving wind, representative of the atmospheric conditions occurring during the event of November 24, 2006 at Mt. Etna. Then, the Lagrangian particle model LPAC (de' Michieli Vitturi et al., JGR 2010) was used to simulate the transport of mass particles under the action of atmospheric conditions. The particle motion equations were derived by expressing the Lagrangian particle acceleration as the sum of the forces acting along its trajectory, with drag forces calculated as a function of particle diameter, density, shape and Reynolds number. The simulations were representative of weak plume events of Mt. Etna and aimed to quantify the effect on the dispersal process of the uncertainty in the particle sphericity and in the mean and variance of a log-normal distribution function describing the grain-size of ash particles released from the eruptive column. In order to analyze the sensitivity of particle dispersal to these uncertain parameters with a reasonable number of simulations, and therefore with affordable computational costs, response surfaces in the parameter space were built by using the generalized polynomial chaos technique. The uncertainty analysis allowed us to quantify the most probable values, as well as their pdf, of the number of particles as well as of the mean and
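
    The generalized polynomial chaos step expands the response of the dispersal code in polynomials that are orthogonal under the probability law of the uncertain input, so that the mean and variance fall out of the expansion coefficients. A minimal one-parameter sketch, with a cheap stand-in function in place of the dispersal model (all values hypothetical):

        import math

        import numpy as np
        from numpy.polynomial.hermite_e import hermevander

        # Stand-in for an expensive dispersal run: a scalar response (e.g. mass
        # fraction deposited) as a smooth function of one uncertain parameter
        # (e.g. log-mean grain size), assumed standard normal. Hypothetical model.
        def model(xi):
            return np.exp(-0.5 * xi) + 0.1 * xi ** 2

        order = 4
        rng = np.random.default_rng(0)
        xi = rng.standard_normal(200)              # sample the uncertain input
        V = hermevander(xi, order)                 # probabilists' Hermite basis He_n
        coef, *_ = np.linalg.lstsq(V, model(xi), rcond=None)

        # Orthogonality E[He_n He_m] = n! * delta_nm turns coefficients into moments.
        norms = np.array([math.factorial(n) for n in range(order + 1)])
        mean = coef[0]
        variance = float(np.sum(coef[1:] ** 2 * norms[1:]))
        print(f"surrogate mean = {mean:.4f}, variance = {variance:.4f}")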

  10. An Analysis of Methods Section of Research Reports in Applied Linguistics

    OpenAIRE

    Patrícia Marcuzzo

    2011-01-01

    This work aims at identifying the analytical categories and research procedures adopted in the analysis of research articles in Applied Linguistics/EAP, in order to propose a systematization of the research procedures in Genre Analysis. For that purpose, 12 research reports and interviews with four authors were analyzed. The analysis showed that the studies are concentrated on the investigation of either the macrostructure or the microstructure of research articles in different fields. Studies about th...

  11. Spectroscopic analysis of the density and temperature gradients in the laser-heated gas jet

    International Nuclear Information System (INIS)

    Matthews, D.L.; Lee, R.W.; Auerbach, J.M.

    1981-01-01

    We have performed an analysis of the x-ray spectra produced by a 1.0 TW, λL = 0.53 μm laser-irradiated gas jet. Plasmas produced by ionization of neon, argon and N2 + SF6 gases were included in those measurements. Plasma electron density and temperature gradients were obtained by comparison of measured spectra with those produced by computer modeling. Density gradients were also obtained using laser interferometry. The limitations of this technique for plasma diagnosis will be discussed

  12. Population density and efficiency in energy consumption: An empirical analysis of service establishments

    International Nuclear Information System (INIS)

    Morikawa, Masayuki

    2012-01-01

    This study, using novel establishment-level microdata from the Energy Consumption Statistics, empirically analyzes the effect of urban density on energy intensity in the service sector. According to the analysis, the efficiency of energy consumption in service establishments is higher in densely populated cities. Quantitatively, after controlling for differences among industries, energy efficiency increases by approximately 12% when the population density of a municipality doubles. This result suggests that, given a structural transformation toward the service economy, deregulation of excessive restrictions hindering urban agglomeration, and investment in infrastructure in city centers would contribute to environmentally friendly economic growth.
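
    The reported ~12% figure behaves like an elasticity in a log-log regression of energy intensity on population density: a slope b implies a factor of 2^b per doubling. An illustrative sketch with synthetic data (not the study's microdata), where b ≈ −0.18 reproduces roughly a 12% reduction in intensity per doubling:

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic establishment-level data (not the study's microdata): energy
        # intensity falls with municipal population density with elasticity b.
        density = rng.lognormal(mean=8.0, sigma=1.0, size=500)     # persons/km^2
        b_true = -0.18
        intensity = np.exp(1.0 + b_true * np.log(density) + rng.normal(0, 0.2, 500))

        b_hat = np.polyfit(np.log(density), np.log(intensity), 1)[0]
        print(f"estimated elasticity: {b_hat:.3f}; "
              f"effect of doubling density: {2**b_hat - 1:+.1%}")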

  13. Lagrangian analysis of two-phase hydrodynamic and nuclear-coupled density-wave oscillations

    International Nuclear Information System (INIS)

    Lahey, R.T. Jr.; Yadigaroglu, G.

    1974-01-01

    The mathematical technique known as the ''method of characteristics'' has been used to construct an exact, analytical solution to predict the onset of density-wave oscillations in diabatic two-phase systems, such as Boiling Water Nuclear Reactors (BWR's). Specifically, heater wall dynamics, boiling boundary dynamics and nuclear kinetics have been accounted for in this analysis. Emphasis is placed on giving the reader a clear physical understanding of the phenomena of two-phase density-wave oscillations. Explanations are presented in terms of block diagram logic, and phasor representations of the various pressure drop perturbations are given. (U.S.)

  14. Transport analysis of high radiation and high density plasmas in the ASDEX Upgrade tokamak

    Directory of Open Access Journals (Sweden)

    Casali L.

    2014-01-01

    Full Text Available Future fusion reactors foreseen in the “European road map”, such as DEMO, will operate under more demanding conditions than present devices. They will require high divertor and core radiation by impurity seeding to reduce heat loads on the divertor target plates. In addition, DEMO will have to work at high core densities to reach adequate fusion performance. The performance of fusion reactors depends on three essential parameters: temperature, density and energy confinement time. The latter characterizes the loss rate due to both radiation and transport processes. The foreseen DEMO scenarios described above have not been investigated so far, but are now being addressed at the ASDEX Upgrade tokamak. In this work we present the transport analysis of such scenarios. Plasmas with high radiation by impurity seeding: transport analysis taking the radiation distribution into account shows no change in transport during impurity seeding. The observed confinement improvement is an effect of higher pedestal temperatures which extend to the core via stiffness. A non-coronal radiation model was developed and compared to the bolometric measurements in order to provide a reliable radiation profile for the transport calculations. High density plasmas with pellets: the analysis of kinetic profiles reveals a transient phase at the start of the pellet fuelling due to a slower density build-up compared to the temperature decrease. The low particle diffusion can explain the confinement behaviour.

  15. Towards factor analysis exploration applied to positron emission tomography functional imaging for breast cancer characterization

    International Nuclear Information System (INIS)

    Rekik, W.; Ketata, I.; Sellami, L.; Ben slima, M.; Ben Hamida, A.; Chtourou, K.; Ruan, S.

    2011-01-01

    This paper aims to explore factor analysis as applied to a dynamic sequence of medical images obtained using a nuclear imaging modality, Positron Emission Tomography (PET). The latter modality provides information on physiological phenomena through the examination of radiotracer evolution over time. Factor analysis of dynamic medical image sequences (FADMIS) estimates the underlying fundamental spatial distributions by factor images and the associated so-called fundamental functions (describing the signal variations) by factors. This method is based on an orthogonal analysis followed by an oblique analysis. The results of the FADMIS are physiological curves showing the evolution over time of the radiotracer within homogeneous tissue distributions. This functional analysis of dynamic nuclear medical images is considered to be very efficient for cancer diagnostics. In fact, it could be applied to cancer characterization and vascularization, as well as to the evaluation of response to therapy.

  16. International publication trends in the Journal of Applied Behavior Analysis: 2000-2014.

    Science.gov (United States)

    Martin, Neil T; Nosik, Melissa R; Carr, James E

    2016-06-01

    Dymond, Clarke, Dunlap, and Steiner's (2000) analysis of international publication trends in the Journal of Applied Behavior Analysis (JABA) from 1970 to 1999 revealed low numbers of publications from outside North America, leading the authors to express concern about the lack of international involvement in applied behavior analysis. They suggested that a future review would be necessary to evaluate any changes in international authorship in the journal. As a follow-up, we analyzed non-U.S. publication trends in the most recent 15 years of JABA and found similar results. We discuss potential reasons for the relative paucity of international authors and suggest potential strategies for increasing non-U.S. contributions to the advancement of behavior analysis. © 2015 Society for the Experimental Analysis of Behavior.

  17. Development of Optimized Core Design and Analysis Methods for High Power Density BWRs

    Science.gov (United States)

    Shirvan, Koroush

    Increasing the economic competitiveness of nuclear energy is vital to its future. Improving the economics of BWRs is the main goal of this work, focusing on designing cores with higher power density to reduce the BWR capital cost. Generally, the core power density in BWRs is limited by the thermal Critical Power of its assemblies, below which heat removal can be accomplished with low fuel and cladding temperatures. The present study investigates both increases in the heat transfer area between the fuel and coolant and changes in operating parameters to achieve higher power levels while meeting the appropriate thermal as well as materials and neutronic constraints. A scoping study is conducted under the constraints of using fuel with cylindrical geometry, traditional materials and enrichments below 5% to enhance its licensability. The reactor vessel diameter is limited to the largest proposed thus far. The BWR with High power Density (BWR-HD) is found to have a power level of 5000 MWth, equivalent to a 26% uprated ABWR, resulting in 20% cheaper O&M and capital costs. This is achieved by utilizing the same number of assemblies, but with wider 16x16 assemblies and 50% shorter active fuel than that of the ABWR. The fuel rod diameter and pitch are reduced to just over 45% of the ABWR values. Traditional cruciform control rods are used, which restrict the assembly span to less than 1.2 times the current GE14 design due to the limitation on shutdown margin. Thus, it is possible to increase the power density and specific power by 65%, while maintaining the nominal ABWR Minimum Critical Power Ratio (MCPR) margin. The plant systems outside the vessel are assumed to be the same as in the ABWR-II design, utilizing a combination of active and passive safety systems. Safety analyses applied a void reactivity coefficient calculated by SIMULATE-3 for an equilibrium cycle core that showed a 15% less negative coefficient for the BWR-HD compared to the ABWR. The feedwater

  18. Probability density adjoint for sensitivity analysis of the Mean of Chaos

    Energy Technology Data Exchange (ETDEWEB)

    Blonigan, Patrick J., E-mail: blonigan@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu

    2014-08-01

    Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.
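
    The central object of the density adjoint method is the invariant measure on the attractor. As a toy forward-problem sketch (not the paper's adjoint formulation), one can discretize the transfer operator of a one-dimensional chaotic map, power-iterate to its stationary density, and finite-difference the long-time mean with respect to the map parameter:

        import numpy as np

        def transfer_matrix(s, n=400, n_samp=4000):
            """Column-stochastic discretization of the map x -> s*x*(1-x) on [0, 1]."""
            edges = np.linspace(0.0, 1.0, n + 1)
            T = np.zeros((n, n))
            for j in range(n):
                # Push a deterministic batch of samples from cell j through the map.
                xs = np.random.default_rng(j).uniform(edges[j], edges[j + 1], n_samp)
                idx = np.minimum((s * xs * (1.0 - xs) * n).astype(int), n - 1)
                np.add.at(T[:, j], idx, 1.0 / n_samp)
            centers = 0.5 * (edges[:-1] + edges[1:])
            return T, centers

        def long_time_mean(s):
            T, x = transfer_matrix(s)
            rho = np.full(T.shape[0], 1.0 / T.shape[0])
            for _ in range(500):          # power iteration -> invariant density
                rho = T @ rho
                rho /= rho.sum()
            return float(x @ rho)         # long-time average of x

        # Crude finite-difference sensitivity of the long-time mean at s = 3.9.
        ds = 1e-3
        grad = (long_time_mean(3.9 + ds) - long_time_mean(3.9 - ds)) / (2 * ds)
        print("d<x>/ds ~", round(grad, 3))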

  19. Reconstruction and analysis of temperature and density spatial profiles in inertial confinement fusion implosion cores

    International Nuclear Information System (INIS)

    Mancini, R. C.

    2007-01-01

    We discuss several methods for the extraction of temperature and density spatial profiles in inertial confinement fusion implosion cores based on the analysis of the x-ray emission from spectroscopic tracers added to the deuterium fuel. The ideas rely on (1) detailed spectral models that take into account collisional-radiative atomic kinetics, Stark broadened line shapes, and radiation transport calculations, (2) the availability of narrow-band, gated pinhole and slit x-ray images, and space-resolved line spectra of the core, and (3) several data analysis and reconstruction methods that include a multi-objective search and optimization technique based on a novel application of Pareto genetic algorithms to plasma spectroscopy. The spectroscopic analysis yields the spatial profiles of temperature and density in the core at the collapse of the implosion, and also the extent of shell material mixing into the core. Results are illustrated with data recorded in implosion experiments driven by the OMEGA and Z facilities

  20. Analysis of the Effect of Electron Density Perturbations Generated by Gravity Waves on HF Communication Links

    Science.gov (United States)

    Fagre, M.; Elias, A. G.; Chum, J.; Cabrera, M. A.

    2017-12-01

    In the present work, ray tracing of high frequency (HF) signals in ionospheric disturbed conditions is analyzed, particularly in the presence of electron density perturbations generated by gravity waves (GWs). The three-dimensional numerical ray tracing code by Jones and Stephenson, based on Hamilton's equations, which is commonly used to study radio propagation through the ionosphere, is used. An electron density perturbation model is implemented to this code based upon the consideration of atmospheric GWs generated at a height of 150 km in the thermosphere and propagating up into the ionosphere. The motion of the neutral gas at these altitudes induces disturbances in the background plasma which affects HF signals propagation. To obtain a realistic model of GWs in order to analyze the propagation and dispersion characteristics, a GW ray tracing method with kinematic viscosity and thermal diffusivity was applied. The IRI-2012, HWM14 and NRLMSISE-00 models were incorporated to assess electron density, wind velocities, neutral temperature and total mass density needed for the ray tracing codes. Preliminary results of gravity wave effects on ground range and reflection height are presented for low-mid latitude ionosphere.

  1. Visualization and analysis of pulsed ion beam energy density profile with infrared imaging

    Science.gov (United States)

    Isakova, Y. I.; Pushkarev, A. I.

    2018-03-01

    Infrared imaging technique was used as a surface temperature-mapping tool to characterize the energy density distribution of intense pulsed ion beams on a thin metal target. The technique enables the measuring of the total ion beam energy and the energy density distribution along the cross section and allows one to optimize the operation of an ion diode and control the target irradiation mode. The diagnostics was tested on the TEMP-4M accelerator at TPU, Tomsk, Russia and on the TEMP-6 accelerator at DUT, Dalian, China. The diagnostics was applied in studies of the dynamics of the target cooling in vacuum after irradiation and in the experiments with target ablation. Errors caused by the target ablation and target cooling during measurements have been analyzed. For Fluke Ti10 and Fluke Ti400 infrared cameras, the technique can achieve a surface energy density sensitivity of 0.05 J/cm2 and a spatial resolution of 1-2 mm. The thermal imaging diagnostics does not require expensive consumable materials. The measurement time does not exceed 0.1 s; therefore, this diagnostics can be used for the prompt evaluation of the energy density distribution of a pulsed ion beam and during automation of the irradiation process.
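
    The underlying calorimetry is simple: for a foil thin enough to equilibrate through its thickness before lateral cooling matters, the absorbed energy per unit area is the temperature rise times the areal heat capacity, w = ρ·c_p·d·ΔT. A short sketch under that assumption, with illustrative material constants and a synthetic temperature map standing in for camera data:

        import numpy as np

        # Thin-foil calorimetry: if the target equilibrates through its thickness
        # before lateral cooling matters, the local surface energy density is
        # w = rho * c_p * d * dT. Material constants and the temperature-rise map
        # are illustrative stand-ins, not TEMP-4M data.
        rho = 7.9e3              # target density, kg/m^3 (stainless steel)
        c_p = 500.0              # specific heat, J/(kg K)
        d = 100e-6               # foil thickness, m
        pixel_area_cm2 = 0.01    # 1 mm x 1 mm per camera pixel (assumed optics)

        dT = np.abs(np.random.default_rng(1).normal(40.0, 10.0, (240, 320)))  # K

        w = rho * c_p * d * dT / 1e4     # J/cm^2 (1 m^2 = 1e4 cm^2)
        print(f"peak energy density: {w.max():.2f} J/cm^2")
        print(f"total beam energy:   {(w * pixel_area_cm2).sum():.1f} J")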

  2. Novel analysis technique for measuring edge density fluctuation profiles with reflectometry in the Large Helical Device

    Science.gov (United States)

    Creely, A. J.; Ida, K.; Yoshinuma, M.; Tokuzawa, T.; Tsujimura, T.; Akiyama, T.; Sakamoto, R.; Emoto, M.; Tanaka, K.; Michael, C. A.

    2017-07-01

    A new method for measuring density fluctuation profiles near the edge of plasmas in the Large Helical Device (LHD) has been developed utilizing reflectometry combined with pellet-induced fast density scans. Reflectometer cutoff location was calculated by proportionally scaling the cutoff location calculated with fast far infrared laser interferometer (FIR) density profiles to match the slower time resolution results of the ray-tracing code LHD-GAUSS. Plasma velocity profile peaks generated with this reflectometer mapping were checked against velocity measurements made with charge exchange spectroscopy (CXS) and were found to agree within experimental uncertainty once diagnostic differences were accounted for. Measured density fluctuation profiles were found to peak strongly near the edge of the plasma, as is the case in most tokamaks. These measurements can be used in the future to inform inversion methods of phase contrast imaging (PCI) measurements. This result was confirmed with both a fixed frequency reflectometer and calibrated data from a multi-frequency comb reflectometer, and this method was applied successfully to a series of discharges. The full width at half maximum of the turbulence layer near the edge of the plasma was found to be only 1.5-3 cm on a series of LHD discharges, less than 5% of the normalized minor radius.

  3. Frequency-resolved interferometric measurement of local density fluctuations for turbulent combustion analysis

    International Nuclear Information System (INIS)

    Köberl, S; Giuliani, F; Woisetschläger, J; Fontaneto, F

    2010-01-01

    A validation of a novel interferometric measurement technique for the frequency-resolved detection of local density fluctuation in turbulent combustion analysis was performed in this work. Two laser vibrometer systems together with a signal analyser were used to obtain frequency spectra of density fluctuations across a methane-jet flame. Since laser vibrometry is based on interferometric techniques, the derived signals are path-integrals along the measurement beam. To obtain local frequency spectra of density fluctuations, long-time-averaged measurements from each of the two systems were performed using correlation functions and cross spectra. Results were compared to data recorded by standard interferometric techniques for validation purposes. Additionally, Raman scattering and laser Doppler velocimetry were used for flame characterization
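
    The key step is that the cross-spectrum of two path-integrated signals retains only the fluctuations common to both beams, i.e. those at the crossing volume, while uncorrelated noise along each path averages out. A generic sketch of this localization idea (not the authors' exact processing chain), using scipy's cross-spectral density estimator on synthetic signals:

        import numpy as np
        from scipy.signal import csd

        fs = 50_000.0                          # sampling rate, Hz (assumed)
        t = np.arange(0, 1.0, 1.0 / fs)
        rng = np.random.default_rng(2)

        # The fluctuation at the beam-crossing volume is common to both signals;
        # everything else along each line of sight is modelled as independent noise.
        common = np.sin(2 * np.pi * 440.0 * t)
        sig1 = common + rng.normal(0.0, 1.0, t.size)
        sig2 = common + rng.normal(0.0, 1.0, t.size)

        f, Pxy = csd(sig1, sig2, fs=fs, nperseg=4096)
        print(f"common fluctuation recovered near {f[np.argmax(np.abs(Pxy))]:.0f} Hz")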

  4. Thermodynamic, energy efficiency, and power density analysis of reverse electrodialysis power generation with natural salinity gradients.

    Science.gov (United States)

    Yip, Ngai Yin; Vermaas, David A; Nijmeijer, Kitty; Elimelech, Menachem

    2014-05-06

    Reverse electrodialysis (RED) can harness the Gibbs free energy of mixing when fresh river water flows into the sea for sustainable power generation. In this study, we carry out a thermodynamic and energy efficiency analysis of RED power generation, and assess the membrane power density. First, we present a reversible thermodynamic model for RED and verify that the theoretical maximum extractable work in a reversible RED process is identical to the Gibbs free energy of mixing. Work extraction in an irreversible process with maximized power density using a constant-resistance load is then examined to assess the energy conversion efficiency and power density. With equal volumes of seawater and river water, energy conversion efficiency of ∼ 33-44% can be obtained in RED, while the rest is lost through dissipation in the internal resistance of the ion-exchange membrane stack. We show that imperfections in the selectivity of typical ion exchange membranes (namely, co-ion transport, osmosis, and electro-osmosis) can detrimentally lower efficiency by up to 26%, with co-ion leakage being the dominant effect. Further inspection of the power density profile during RED revealed inherent ineffectiveness toward the end of the process. By judicious early discontinuation of the controlled mixing process, the overall power density performance can be considerably enhanced by up to 7-fold, without significant compromise to the energy efficiency. Additionally, membrane resistance was found to be an important factor in determining the power densities attainable. Lastly, the performance of an RED stack was examined for different membrane conductivities and intermembrane distances simulating high performance membranes and stack design. By thoughtful selection of the operating parameters, an efficiency of ∼ 37% and an overall gross power density of 3.5 W/m2 represent the maximum performance that can potentially be achieved in a seawater-river water RED system with low
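
    For the thermodynamic bound itself, an ideal dilute-solution estimate of the Gibbs free energy of mixing is straightforward to compute. A minimal sketch assuming fully dissociated NaCl (van 't Hoff factor 2) and representative, not measured, concentrations:

        import numpy as np

        R, T = 8.314, 298.15            # gas constant J/(mol K), temperature K
        NU = 2                          # van 't Hoff factor for dissociated NaCl
        c_sea, c_riv = 600.0, 10.0      # salt concentration, mol/m^3 (assumed)
        V_sea = V_riv = 1.0             # equal volumes, m^3

        n_sea, n_riv = c_sea * V_sea, c_riv * V_riv
        c_mix = (n_sea + n_riv) / (V_sea + V_riv)

        # Ideal dilute-solution Gibbs free energy of mixing (the reference
        # concentration cancels because total moles of salt are conserved).
        dG = NU * R * T * ((n_sea + n_riv) * np.log(c_mix)
                           - n_sea * np.log(c_sea) - n_riv * np.log(c_riv))
        print(f"Gibbs free energy of mixing: {dG / 1e6:.2f} MJ per m^3 river water")

    With these assumed concentrations the estimate comes out near −1.8 MJ per cubic meter of river water, the quantity against which the extracted work is compared to obtain the efficiency figures above.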

  5. Critic: a new program for the topological analysis of solid-state electron densities

    Science.gov (United States)

    Otero-de-la-Roza, A.; Blanco, M. A.; Pendás, A. Martín; Luaña, Víctor

    2009-01-01

    In this paper we introduce CRITIC, a new program for the topological analysis of the electron densities of crystalline solids. Two different versions of the code are provided, one adapted to the LAPW (Linear Augmented Plane Wave) density calculated by the WIEN2K package and the other to the ab initio Perturbed Ion (aiPI) density calculated with the PI7 code. Using the converged ground state densities, CRITIC can locate their critical points, determine atomic basins and integrate properties within them, and generate several graphical representations which include topological atomic basins and primary bundles, contour maps of ρ and ∇ρ, vector maps of ∇ρ, chemical graphs, etc.

    Program summary:
    Program title: CRITIC
    Catalogue identifier: AECB_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECB_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GPL, version 3
    No. of lines in distributed program, including test data, etc.: 1 206 843
    No. of bytes in distributed program, including test data, etc.: 12 648 065
    Distribution format: tar.gz
    Programming language: FORTRAN 77 and 90
    Computer: Any computer capable of compiling Fortran
    Operating system: Unix, GNU/Linux
    Classification: 7.3
    Nature of problem: Topological analysis of the electron density in periodic solids.
    Solution method: The automatic localization of the electron density critical points is based on a recursive partitioning of the Wigner-Seitz cell into tetrahedra followed by a Newton search from significant points on each tetrahedron. Plotting of and integration on the atomic basins is currently based on a new implementation of Keith's promega algorithm.
    Running time: Variable, depending on the task. From seconds to a few minutes for the localization of critical points. Hours to days for the determination of the atomic basins' shape and properties. Times correspond to a typical 2007 PC.

  6. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    Directory of Open Access Journals (Sweden)

    Eduardo Garza-Gisholt

    Full Text Available Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to count neuronal distribution, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps but the smoothing parameters used may affect

  7. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    Science.gov (United States)

    Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P

    2014-01-01

    Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to count neuronal distribution, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps but the smoothing parameters used may affect the outcome.
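
    The practical difference between the two families of methods fits in a few lines: interpolation passes through every observed count, while kernel smoothing averages neighbouring samples. A sketch on synthetic retinal counts (hypothetical gradient and noise; the paper's own tool is an R-script), comparing scipy's linear interpolation with a Gaussian kernel smoother:

        import numpy as np
        from scipy.interpolate import griddata
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(3)

        # Synthetic counts at scattered sampling sites on a 10 mm x 10 mm mock
        # retina: a dorsoventral density gradient plus counting noise (cells/mm^2).
        xy = rng.uniform(0, 10, size=(300, 2))
        counts = 2000 + 400 * xy[:, 1] + rng.normal(0, 300, 300)

        gx, gy = np.mgrid[0:10:200j, 0:10:200j]

        # 1) Interpolation honours every observation, outliers included.
        interp = griddata(xy, counts, (gx, gy), method='linear')

        # 2) Gaussian kernel smoothing: bin the weighted samples, then divide
        #    smoothed weights by smoothed sample counts (Nadaraya-Watson style).
        H, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=200,
                                 range=[[0, 10], [0, 10]], weights=counts)
        N, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=200,
                                 range=[[0, 10], [0, 10]])
        smooth = gaussian_filter(H, sigma=8) / np.maximum(gaussian_filter(N, sigma=8), 1e-9)

        print("interpolated peak:", round(float(np.nanmax(interp))),
              "smoothed peak:", round(float(smooth.max())))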

  8. Measurement of reactor parameters of the 'Nora' reactor by noise analysis method - power spectral density

    International Nuclear Information System (INIS)

    Jovanovic, S.; Stormark, E.

    1966-01-01

    Measurements of reactor parameters of the Nora reactor by the Power Spectral Density (PSD) method are described. In the case of the critical reactor, this method was applied for direct measurement of the β/l ratio, where β is the effective yield of delayed neutrons and l is the neutron lifetime. In the case of the subcritical reactor, values of α = (β − ρ)/l were measured, where ρ is the negative reactivity. The output PSD was measured by a filter or by ISAC; the PSD, as well as the auto-correlation function, was registered by ISAC
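
    At criticality, the measured noise spectrum is a Lorentzian whose break frequency is the prompt decay constant α = β/l, so the ratio is read off a curve fit. A hedged sketch with synthetic spectral data (the decay constant below is illustrative, not a Nora measurement):

        import numpy as np
        from scipy.optimize import curve_fit

        # Lorentzian reactor-noise spectrum S(f) = A / (1 + (2*pi*f/alpha)^2);
        # at critical, the prompt decay constant alpha equals beta/l.
        def lorentzian(f, A, alpha):
            return A / (1.0 + (2.0 * np.pi * f / alpha) ** 2)

        alpha_true = 180.0                            # s^-1 (illustrative)
        f = np.logspace(-1, 3, 200)                   # Hz
        rng = np.random.default_rng(4)
        S = lorentzian(f, 1.0, alpha_true) * rng.normal(1.0, 0.05, f.size)

        (A_fit, alpha_fit), _ = curve_fit(lorentzian, f, S, p0=[1.0, 100.0])
        print(f"beta/l estimate: {alpha_fit:.0f} s^-1")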

  9. August Dvorak (1894-1975): Early expressions of applied behavior analysis and precision teaching

    Science.gov (United States)

    Joyce, Bonnie; Moxley, Roy A.

    1988-01-01

    August Dvorak is best known for his development of the Dvorak keyboard. However, Dvorak also adapted and applied many behavioral and scientific management techniques to the field of education. Taken collectively, these techniques are representative of many of the procedures currently used in applied behavior analysis, in general, and especially in precision teaching. The failure to consider Dvorak's instructional methods may explain some of the discrepant findings in studies which compare the efficiency of the Dvorak to the standard keyboard. This article presents a brief background on the development of the standard (QWERTY) and Dvorak keyboards, describes parallels between Dvorak's teaching procedures and those used in precision teaching, reviews some of the comparative research on the Dvorak keyboard, and suggests some implications for further research in applying the principles of behavior analysis. PMID:22477993

  10. Exploring Peer Relationships, Friendships and Group Work Dynamics in Higher Education: Applying Social Network Analysis

    Science.gov (United States)

    Mamas, Christoforos

    2018-01-01

    This study primarily applied social network analysis (SNA) to explore the relationship between friendships, peer social interactions and group work dynamics within a higher education undergraduate programme in England. A critical case study design was adopted so as to allow for an in-depth exploration of the students' voice. In doing so, the views…

  11. Conversation after Right Hemisphere Brain Damage: Motivations for Applying Conversation Analysis

    Science.gov (United States)

    Barnes, Scott; Armstrong, Elizabeth

    2010-01-01

    Despite the well documented pragmatic deficits that can arise subsequent to Right Hemisphere Brain Damage (RHBD), few researchers have directly studied everyday conversations involving people with RHBD. In recent years, researchers have begun applying Conversation Analysis (CA) to the everyday talk of people with aphasia. This research programme…

  12. Evolution of Applied Behavior Analysis in the Treatment of Individuals With Autism

    Science.gov (United States)

    Wolery, Mark; Barton, Erin E.; Hine, Jeffrey F.

    2005-01-01

    Two issues of each volume of the Journal of Applied Behavior Analysis were reviewed to identify research reports focusing on individuals with autism. The identified articles were analyzed to describe the ages of individuals with autism, the settings in which the research occurred, the nature of the behaviors targeted for intervention, and the…

  13. Lovaas Model of Applied Behavior Analysis. What Works Clearinghouse Intervention Report

    Science.gov (United States)

    What Works Clearinghouse, 2010

    2010-01-01

    The "Lovaas Model of Applied Behavior Analysis" is a type of behavioral therapy that initially focuses on discrete trials: brief periods of one-on-one instruction, during which a teacher cues a behavior, prompts the appropriate response, and provides reinforcement to the child. Children in the program receive an average of 35 to 40 hours…

  14. Applied Behavior Analysis Programs for Autism: Sibling Psychosocial Adjustment during and Following Intervention Use

    Science.gov (United States)

    Cebula, Katie R.

    2012-01-01

    Psychosocial adjustment in siblings of children with autism whose families were using a home-based, applied behavior analysis (ABA) program was compared to that of siblings in families who were not using any intensive autism intervention. Data gathered from parents, siblings and teachers indicated that siblings in ABA families experienced neither…

  15. Applied Behavior Analysis in Autism Spectrum Disorders: Recent Developments, Strengths, and Pitfalls

    Science.gov (United States)

    Matson, Johnny L.; Turygin, Nicole C.; Beighley, Jennifer; Rieske, Robert; Tureck, Kimberly; Matson, Michael L.

    2012-01-01

    Autism has become one of the most heavily researched topics in the field of mental health and education. While genetics has been the most studied of all topics, applied behavior analysis (ABA) has also received a great deal of attention, and has arguably yielded the most promising results of any research area to date. The current paper provides a…

  16. A Self-Administered Parent Training Program Based upon the Principles of Applied Behavior Analysis

    Science.gov (United States)

    Maguire, Heather M.

    2012-01-01

    Parents often respond to challenging behavior exhibited by their children in such a way that unintentionally strengthens it. Applied behavior analysis (ABA) is a research-based science that has been proven effective in remediating challenging behavior in children. Although many parents could benefit from using strategies from the field of ABA with…

  17. A National UK Census of Applied Behavior Analysis School Provision for Children with Autism

    Science.gov (United States)

    Griffith, G. M.; Fletcher, R.; Hastings, R. P.

    2012-01-01

    Over more than a decade, specialist Applied Behavior Analysis (ABA) schools or classes for children with autism have developed in the UK and Ireland. However, very little is known internationally about how ABA is defined in practice in school settings, the characteristics of children supported in ABA school settings, and the staffing structures…

  18. A Case Study in the Misrepresentation of Applied Behavior Analysis in Autism: The Gernsbacher Lectures

    Science.gov (United States)

    Morris, Edward K

    2009-01-01

    I know that most men, including those at ease with problems of the greatest complexity, can seldom accept the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have proudly taught to others, and which they have woven, thread by thread, into the fabrics of their life. (Tolstoy, 1894) This article presents a case study in the misrepresentation of applied behavior analysis for autism based on Morton Ann Gernsbacher's presentation of a lecture titled “The Science of Autism: Beyond the Myths and Misconceptions.” Her misrepresentations involve the characterization of applied behavior analysis, descriptions of practice guidelines, reviews of the treatment literature, presentations of the clinical trials research, and conclusions about those trials (e.g., children's improvements are due to development, not applied behavior analysis). The article also reviews applied behavior analysis' professional endorsements and research support, and addresses issues in professional conduct. It ends by noting the deleterious effects that misrepresenting any research on autism (e.g., biological, developmental, behavioral) have on our understanding and treating it in a transdisciplinary context. PMID:22478522

  19. Principal component analysis of the CT density histogram to generate parametric response maps of COPD

    Science.gov (United States)

    Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.

    2015-03-01

    Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach - parametric response mapping (PRM) - utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used, or their clinical meaning. Principal-component-analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were also acquired at full inspiration and full expiration and these images were non-rigidly co-registered. PCA was performed for the CT density histograms, from which the components with the highest eigenvalues greater than one were summed. Since the values of the principal component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: A significant correlation was determined between conventional and PCA-adjusted PRM with 3He MRI apparent diffusion coefficient (p<0.001), with CT RA950 (p<0.0001), as well as with 3He MRI ventilation defect percent, a measurement of both small airways disease (p=0.049 and p=0.06, respectively) and emphysema (p=0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and 3He MRI measurements of emphysema, but not airways disease.
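
    A compact way to see the proposed thresholding: run PCA across subjects' CT density histograms, keep the components with eigenvalues above one, and take the extrema of the summed component curve as the HU thresholds. A sketch on synthetic histograms (not patient data):

        import numpy as np

        rng = np.random.default_rng(5)
        bin_edges = np.arange(-1000, -400, 10)     # HU bins for lung parenchyma
        centers = bin_edges[:-1] + 5

        # Synthetic cohort: each row is one subject's normalized CT density
        # histogram (stand-ins for the registered inspiratory/expiratory data).
        X = np.stack([
            np.histogram(rng.normal(-870 + 40 * rng.standard_normal(), 60, 5000),
                         bins=bin_edges, density=True)[0]
            for _ in range(20)
        ])

        # PCA across subjects on the standardized histograms.
        Xz = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
        _, s, Vt = np.linalg.svd(Xz, full_matrices=False)
        eigvals = s**2 / (X.shape[0] - 1)

        # Keep components with eigenvalues greater than one and sum their curves;
        # the extrema of the summed curve give data-driven HU thresholds.
        curve = Vt[eigvals > 1.0].sum(axis=0)
        print(f"PCA-derived thresholds: {centers[np.argmin(curve)]} HU "
              f"and {centers[np.argmax(curve)]} HU")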

  20. Connectivity-enhanced diffusion analysis reveals white matter density disruptions in first episode and chronic schizophrenia

    Directory of Open Access Journals (Sweden)

    Rachael G. Grazioplene

    Full Text Available Reduced fractional anisotropy (FA) is a well-established correlate of schizophrenia, but it remains unclear whether these tensor-based differences are the result of axon damage and/or organizational changes and whether the changes are progressive in the adult course of illness. Diffusion MRI data were collected in 81 schizophrenia patients (54 first episode and 27 chronic) and 64 controls. Analysis of FA was combined with “fixel-based” analysis, the latter of which leverages connectivity and crossing-fiber information to assess both fiber bundle density and organizational complexity (i.e., presence and magnitude of off-axis diffusion signal). Compared with controls, patients with schizophrenia displayed clusters of significantly lower FA in the bilateral frontal lobes, right dorsal centrum semiovale, and the left anterior limb of the internal capsule. All FA-based group differences overlapped substantially with regions containing complex fiber architecture. FA within these clusters was positively correlated with principal axis fiber density, but inversely correlated with both secondary/tertiary axis fiber density and voxel-wise fiber complexity. Crossing fiber complexity had the strongest (inverse) association with FA (r = −0.82). When crossing fiber structure was modeled in the MRtrix fixel-based analysis pipeline, patients exhibited significantly lower fiber density compared to controls in the dorsal and posterior corpus callosum (central, postcentral, and forceps major). Findings of lower FA in patients with schizophrenia likely reflect two inversely related signals: reduced density of principal axis fiber tracts and increased off-axis diffusion sources. Whereas the former confirms at least some regions where myelin and/or axon count are lower in schizophrenia, the latter indicates that the FA signal from principal axis fiber coherence is broadly contaminated by macrostructural complexity, and therefore does not necessarily reflect

  1. MRSA: a density-equalizing mapping analysis of the global research architecture.

    Science.gov (United States)

    Addicks, Johann P; Uibel, Stefanie; Jensen, Anna-Maria; Bundschuh, Matthias; Klingelhoefer, Doris; Groneberg, David A

    2014-09-30

    Methicillin-resistant Staphylococcus aureus (MRSA) has evolved into an alarming public health threat due to its global spread as a hospital and community pathogen. Despite this role, a scientometric analysis has not been performed yet. Therefore, the NewQIS platform was used to conduct a combined density-equalizing mapping and scientometric study. As the database, the Web of Science was used, and all entries between 1961 and 2007 were analyzed. In total, 7671 entries were identified. Density-equalizing mapping demonstrated a distortion of the world map in favor of the USA as the leading country, with a total output of 2374 publications, followed by the UK (1030) and Japan (862). Citation rate analysis revealed Portugal as the leading country with a rate of 35.47 citations per article, followed by New Zealand and Denmark. Country cooperation network analyses showed 743 collaborations, with US-UK being the most frequent. Network citation analyses indicated the publications that arose from the cooperation of the USA and France as well as the USA and Japan as the most cited (75.36 and 74.55 citations per collaboration article, respectively). The present study provides the first combined density-equalizing mapping and scientometric analysis of MRSA research. It illustrates the global MRSA research architecture. It can be assumed that this highly relevant topic for public health will achieve even greater dimensions in the future.

  2. Growth Analysis of Fenugreek (Trigonella foenum-graecum L.) under Various Levels of Nitrogen and Plant Density

    Directory of Open Access Journals (Sweden)

    L Bazrkar-Khatibani

    2018-02-01

    Full Text Available Introduction: Fenugreek (Trigonella foenum-graecum L.) is a condiment crop mostly grown for its edible parts, and is used as a green fodder and fresh vegetable. The seeds have medicinal value, particularly against digestive disorders, whereas its leaves are a rich source of minerals and nutrients. The growth and yield of fenugreek are particularly affected by the application of nitrogen fertilizer and the planting arrangement. Plant growth is a process of biomass accumulation which in turn derives from the interaction of respiration, photosynthesis, water relations, long-distance transport, and mineral nutrition processes. Growth is the most important process in predicting plant reactions to the environment. Irradiance, temperature, soil-water potential, nutrient supply and enhanced concentrations of atmospheric carbon dioxide are among the external factors influencing crop growth and development. Growth analysis is a useful tool in studying the complex interactions between plant growth and the environment, clarifying and interpreting physiological responses. A plant's total dry matter (TDM) production and accumulation can be appraised via the relative growth rate (RGR) and the crop growth rate (CGR), which are the most important growth indices. The leaf area index (LAI) is a factor of crop growth analysis that accounts for the potential of the crop to assimilate light energy and is a determinant component in understanding the function of many crop management practices. Materials and Methods: A field investigation was conducted in a paddy field at Shaft County (Guilan Province) for eight consecutive months (from November 2009 to June 2010) to study the effect of four levels of nitrogen fertilizer (0, 25, 50 and 75 kg N ha-1) and four levels of planting density (60, 80, 100, and 120 plants m-2) on the growth indices of the fenugreek (Trigonella foenum-graecum L.) crop. The soil of the experiment was loam in texture and strongly acidic in reaction (pH 4.5). Sixteen treatment
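
    The growth indices named here have simple closed forms: RGR = (ln W2 − ln W1)/(t2 − t1), CGR = (W2 − W1)/(t2 − t1) on a ground-area basis, and LAI is leaf area per unit ground area. A worked sketch with illustrative harvest values (not the trial's data):

        import numpy as np

        # Classical growth indices from two sequential harvests.
        t1, t2 = 30.0, 44.0            # days after sowing
        W1, W2 = 12.0, 31.0            # total dry matter, g per m^2 of ground
        leaf_area_index = 0.8          # m^2 leaf per m^2 ground at t2

        RGR = (np.log(W2) - np.log(W1)) / (t2 - t1)    # g g^-1 day^-1
        CGR = (W2 - W1) / (t2 - t1)                    # g m^-2 day^-1
        print(f"RGR = {RGR:.4f} g/g/day, CGR = {CGR:.2f} g/m^2/day, "
              f"LAI = {leaf_area_index:.2f}")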

  3. Bone density as a marker for local response to radiotherapy of spinal bone metastases in women with breast cancer: a retrospective analysis

    International Nuclear Information System (INIS)

    Foerster, Robert; Eisele, Christian; Bruckner, Thomas; Bostel, Tilman; Schlampp, Ingmar; Wolf, Robert; Debus, Juergen; Rief, Harald

    2015-01-01

    We designed this study to quantify the effects of radiotherapy (RT) on bone density as a local response in spinal bone metastases of women with breast cancer and, secondly, to establish bone density as an accurate and reproducible marker for the assessment of local response to RT in spinal bone metastases. We retrospectively assessed 135 osteolytic spinal metastases in 115 women with metastatic breast cancer treated at our department between January 2000 and January 2012. The primary endpoint was to compare bone density in the bone metastases before, 3 months after, and 6 months after RT. Bone density was measured in Hounsfield units (HU) in computed tomography scans. We calculated mean values in HU and the standard deviation (SD) as a measurement of bone density before, 3 months and 6 months after RT. The t-test was used for statistical analysis of the difference in bone density as well as for univariate analysis of prognostic factors for the difference in bone density 3 and 6 months after RT. Mean bone density was 194.8 HU ± SD 123.0 at baseline. Bone density increased significantly by a mean of 145.8 HU ± SD 139.4 after 3 months (p = .0001) and by 250.3 HU ± SD 147.1 after 6 months (p < .0001). Women receiving bisphosphonates showed a tendency towards a higher increase in bone density in the metastases after 3 months (152.6 HU ± SD 141.9 vs. 76.0 HU ± SD 86.1; p = .069), and pathological fractures before RT were associated with a significantly higher increase in bone density after 3 months (202.3 HU ± SD 161.9 vs. 130.3 HU ± SD 129.2; p = .013). Concomitant chemotherapy (ChT) or endocrine therapy (ET), hormone receptor status, performance score, applied overall RT dose and prescription of a surgical corset did not correlate with a difference in bone density after RT. Bone density measurement in HU is a practicable and reproducible method for the assessment of local RT response in osteolytic metastases in breast cancer. Our analysis demonstrated an excellent local response within
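
    The statistical design is a paired comparison of HU values in the same lesions before and after RT. A minimal sketch with synthetic numbers drawn to mimic the reported magnitudes (not patient data):

        import numpy as np
        from scipy.stats import ttest_rel

        rng = np.random.default_rng(9)

        # Synthetic paired HU measurements mimicking the reported magnitudes
        # (baseline ~195 HU, mean 3-month change ~146 HU); not patient data.
        before = rng.normal(194.8, 123.0, 60)
        after_3mo = before + rng.normal(145.8, 139.4, 60)

        t_stat, p_val = ttest_rel(after_3mo, before)
        print(f"mean change: {np.mean(after_3mo - before):.1f} HU, p = {p_val:.2g}")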

  4. Preliminary Hazard Analysis applied to Uranium Hexafluoride - UF6 production plant

    International Nuclear Information System (INIS)

    Tomzhinsky, David; Bichmacher, Ricardo; Braganca Junior, Alvaro; Peixoto, Orpet Jose

    1996-01-01

    The purpose of this paper is to present the results of the Preliminary Hazard Analysis applied to the UF6 Production Process, which is part of the UF6 Conversion Plant. The Conversion Plant has been designed to produce highly purified UF6 in accordance with nuclear grade standards. This Preliminary Hazard Analysis is the first step in the Risk Management Studies, which are currently under development. The analysis evaluated the impact of the production process on the plant operators, members of the public, equipment, systems and installations, as well as the environment. (author)

  5. The Pressure and Magnetic Flux Density Analysis of Helical-Type DC Electromagnetic Pump

    International Nuclear Information System (INIS)

    Lee, Geun Hyeong; Kim, Hee Reyoung

    2016-01-01

    The developed pressure is produced solely by electromagnetic force, eliminating the possibility of contact with impurities; highly reactive materials such as alkali metals are therefore an excellent match for electromagnetic pumps. The heavy-ion accelerator facility of the Rare Isotope Science Project (RISP) in Korea plans to use liquid lithium to achieve high acceleration efficiency by decreasing the charge state. A helical-type DC electromagnetic pump was employed for the charge stripper that decreases the charge state of the heavy ions. The pump specification was a developed pressure of 15 bar at a flow rate of 6 cc/s at 200 °C. The pressure of the DC electromagnetic pump was analyzed with respect to input current and the number of duct turns. The developed pressure was almost proportional to the input current, because the relatively low flow rate made the back electromotive force and the hydraulic pressure drop negligible. The pressure and magnetic flux density of the helical-type DC electromagnetic pump were analyzed. The pressure was proportional to the input current and the number of duct turns, and the magnetic flux density was higher when a ferromagnetic core was applied to the pump. It appears that the number of duct turns could be increased and a ferromagnetic core applied in order to increase the pressure of the DC electromagnetic pump at constant input current
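
    An idealized estimate of the developed pressure follows from the J × B body force: each pass of the current across the channel contributes Δp ≈ B·I/h, and the contributions of the duct turns add in series. A rough sketch with assumed values (chosen to land near the 15 bar specification, not taken from the pump's design data):

        # Idealized developed pressure of a DC conduction pump: the J x B body
        # force over one channel pass gives dp ~ B * I / h, and the N duct turns
        # add in series. All numbers are assumptions, not the pump's design data.
        B = 0.6       # magnetic flux density in the channel, T
        I = 2500.0    # input current through the fluid, A
        h = 0.01      # channel dimension along the current path, m
        N = 10        # number of duct turns

        dp = N * B * I / h            # Pa
        print(f"ideal developed pressure: {dp / 1e5:.1f} bar")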

  6. Analysis of Mid-Latitude Plasma Density Irregularities in the Presence of Finite Larmor Radius Effects

    Science.gov (United States)

    Sotnikov, V. I.; Kim, T. C.; Mishin, E. V.; Kil, H.; Kwak, Y. S.; Paraschiv, I.

    2017-12-01

    Ionospheric irregularities cause scintillations of electromagnetic signals that can severely affect navigation and transionospheric communication, in particular during space storms. At mid-latitudes the source of F-region Field Aligned Irregularities (FAI) is yet to be determined. They can be created in enhanced subauroral flow channels (SAI/SUBS), where strong gradients of electric field, density and plasma temperature are present. Another important source of FAI is connected with Medium-scale travelling ionospheric disturbances (MSTIDs). Related shear flows and plasma density troughs point to interchange and Kelvin-Helmholtz type instabilities as a possible source of plasma irregularities. A model of the nonlinear development of these instabilities based on the two-fluid hydrodynamic description with inclusion of finite Larmor radius effects will be presented. This approach makes it possible to resolve density irregularities on the meter scale. A numerical code in the C language was developed to solve the derived nonlinear equations for the analysis of interchange and flow velocity shear instabilities in the ionosphere. This code will be used to analyze the competition between interchange and Kelvin-Helmholtz instabilities in the mid-latitude region. The high-resolution simulations with continuous density and velocity profiles will be driven by ambient conditions corresponding to the in situ data obtained during the 2016 Daejeon (Korea) and MU (Japan) radar campaign and data collected simultaneously by the Swarm satellites passing over Korea and Japan. PA approved #: 88ABW-2017-3641

  7. Effects of temperature and population density on von Bertalanffy growth parameters in Atlantic herring: a macro-ecological analysis

    NARCIS (Netherlands)

    Brunel, T.P.A.; Dickey-Collas, M.

    2010-01-01

    The effect of temperature and population density on the growth of Atlantic herring Clupea harengus was studied using a comparative approach applied to 15 North Atlantic populations. The von Bertalanffy (VB) equation was applied to describe mean growth of individuals in each population, both averaged
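
    The VB equation referenced here is L(t) = L∞(1 − exp(−K(t − t0))); its three parameters are typically estimated by nonlinear least squares on mean length-at-age. A sketch with illustrative data (not the herring datasets):

        import numpy as np
        from scipy.optimize import curve_fit

        # Von Bertalanffy growth curve L(t) = Linf * (1 - exp(-K * (t - t0))),
        # fitted to illustrative mean length-at-age data for one population.
        def vb(t, Linf, K, t0):
            return Linf * (1.0 - np.exp(-K * (t - t0)))

        age = np.arange(1, 10, dtype=float)                 # years
        length = np.array([12.5, 17.8, 21.6, 24.4, 26.5,
                           28.0, 29.1, 29.9, 30.5])         # cm

        (Linf, K, t0), _ = curve_fit(vb, age, length, p0=[33.0, 0.3, 0.0])
        print(f"Linf = {Linf:.1f} cm, K = {K:.2f} /yr, t0 = {t0:.2f} yr")

    Comparing the fitted Linf and K across populations against local temperature and stock density is then a standard macro-ecological regression exercise.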

  8. Research in progress in applied mathematics, numerical analysis, fluid mechanics, and computer science

    Science.gov (United States)

    1994-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1993 through March 31, 1994. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.

  9. Quantitative Assessment of Mammary Gland Density in Rodents Using Digital Image Analysis

    Directory of Open Access Journals (Sweden)

    Thompson Henry J

    2011-06-01

    Full Text Available Background: Rodent models have been used extensively to study mammary gland development and for studies of toxicology and carcinogenesis. Mammary gland gross morphology can be visualized via the excision of intact mammary gland chains following fixation and staining with carmine, using a tissue preparation referred to as a whole mount. Methods: Methods are described for the automated collection of digital images from an entire mammary gland whole mount and for the interrogation of digital data using a "masking" technique available with Image-Pro® Plus image analysis software (Media Cybernetics, Silver Spring, MD). Results: Parallel to mammographic analysis in humans, measurements of rodent mammary gland density were derived from area-based or volume-based algorithms and included: total circumscribed mammary fat pad mass, mammary epithelial mass, and epithelium-free fat pad mass. These values permitted estimation of the absolute mass of mammary epithelium as well as breast density. The biological plausibility of these measurements was evaluated in mammary whole mounts from rats and mice. During mammary gland development, absolute epithelial mass increased linearly without significant changes in mammographic density. Treatment of rodents with tamoxifen, 9-cis-retinoic acid, or ovariectomy, and the occurrence of diet-induced obesity decreased both absolute epithelial mass and mammographic density. The area and volumetric methods gave similar results. Conclusions: Digital image analysis can be used for screening agents for potential impact on reproductive toxicity or carcinogenesis as well as for mechanistic studies, particularly for cumulative effects on mammary epithelial mass as well as translational studies of mechanisms that explain the relationship between epithelial mass and cancer risk.
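
    The area-based density measurement reduces to a ratio of masked pixel counts: epithelium area over circumscribed fat-pad area. A toy sketch of that masking arithmetic on a synthetic whole-mount image (a real pipeline would segment carmine staining rather than threshold synthetic noise):

        import numpy as np

        rng = np.random.default_rng(10)

        # Toy whole-mount image: the carmine-stained epithelium is darker than
        # the surrounding fat pad (all intensities synthetic).
        img = rng.uniform(0.6, 1.0, (512, 512))                      # bright fat pad
        img[100:300, 150:350] -= rng.uniform(0.3, 0.5, (200, 200))   # dark epithelium

        fat_pad_mask = np.ones_like(img, dtype=bool)   # circumscribed pad (whole frame here)
        epithelium_mask = img < 0.55                   # intensity "masking" threshold

        # Area-based density: epithelial area over total fat pad area.
        density = epithelium_mask.sum() / fat_pad_mask.sum()
        print(f"area-based mammary density: {density:.1%}")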

  10. Comparative analysis of several sediment transport formulations applied to dam-break flows over erodible beds

    Science.gov (United States)

    Cea, Luis; Bladé, Ernest; Corestein, Georgina; Fraga, Ignacio; Espinal, Marc; Puertas, Jerónimo

    2014-05-01

    Transitory flows generated by dam failures have a great sediment transport capacity, which induces important morphological changes in the river topography. Several studies have been published regarding the coupling between the sediment transport and hydrodynamic equations in dam-break applications, in order to correctly model their mutual interaction. Most of these models solve the depth-averaged shallow water equations to compute the water depth and velocity. On the other hand, a wide variety of sediment transport formulations have been arbitrarily used to compute the topography evolution. These are based on semi-empirical equations which have been calibrated under stationary and uniform conditions very different from those achieved in dam-break flows. Soares-Frazao et al. (2012) proposed a benchmark test consisting of a dam-break over a mobile bed, in which several teams of modellers participated using different numerical models, and concluded that the key issue which still needs to be investigated in morphological modelling of dam-break flows is the link between the solid transport and the hydrodynamic variables. This paper presents a comparative analysis of different sediment transport formulations applied to dam-break flows over mobile beds. All the formulations analysed are commonly used in morphological studies in rivers, and include the formulas of Meyer-Peter & Müller (1948), Wong-Parker (2003), Einstein-Brown (1950), van Rijn (1984), Engelund-Hansen (1967), Ackers-White (1973), Yang (1973), and a Meyer-Peter & Müller type formula but with ad-hoc coefficients. The relevance of corrections on the sediment flux direction and magnitude due to the bed slope and the non-equilibrium hypothesis is also analysed. All the formulations have been implemented in the numerical model Iber (Bladé et al., 2014), which solves the depth-averaged shallow water equations coupled to the Exner equation to evaluate the bed evolution. Two different test cases have been
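
    As a concrete instance of the formulations compared, the Meyer-Peter & Müller (1948) relation in dimensionless form is q* = 8(θ − θc)^(3/2) with θc ≈ 0.047, where θ is the Shields stress. A worked sketch with illustrative hydraulic values:

        import numpy as np

        # Meyer-Peter & Mueller (1948) in dimensionless form:
        #   q* = 8 * (theta - theta_c)**1.5, with theta_c ~ 0.047,
        # where theta is the Shields stress and q* the Einstein parameter.
        rho, rho_s, g = 1000.0, 2650.0, 9.81   # water/sediment density, gravity
        d = 2.0e-3                             # grain diameter, m
        tau = 15.0                             # bed shear stress, Pa (illustrative)

        theta = tau / ((rho_s - rho) * g * d)
        qstar = 8.0 * max(theta - 0.047, 0.0) ** 1.5
        qb = qstar * np.sqrt((rho_s / rho - 1.0) * g * d**3)   # m^2/s per unit width
        print(f"theta = {theta:.3f}, qb = {qb:.2e} m^2/s")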

  11. Determination of gas phase protein ion densities via ion mobility analysis with charge reduction.

    Science.gov (United States)

    Maisser, Anne; Premnath, Vinay; Ghosh, Abhimanyu; Nguyen, Tuan Anh; Attoui, Michel; Hogan, Christopher J

    2011-12-28

    We use a charge reduction electrospray (ESI) source and subsequent ion mobility analysis with a differential mobility analyzer (DMA, with detection via both a Faraday cage electrometer and a condensation particle counter) to infer the densities of single and multiprotein ions of cytochrome C, lysozyme, myoglobin, ovalbumin, and bovine serum albumin produced from non-denaturing (20 mM aqueous ammonium acetate) and denaturing (1 : 49.5 : 49.5, formic acid : methanol : water) ESI. Charge reduction is achieved through use of a Po-210 radioactive source, which generates roughly equal concentrations of positive and negative ions. Ions produced by the source collide with and reduce the charge on ESI generated drops, preventing Coulombic fissions, and unlike typical protein ESI, leading to gas-phase protein ions with +1 to +3 excess charges. Therefore, charge reduction serves to effectively mitigate any role that Coulombic stretching may play on the structure of the gas phase ions. Density inference is made via determination of the mobility diameter, and correspondingly the spherical equivalent protein volume. Through this approach it is found that for both non-denaturing and denaturing ESI-generated ions, gas-phase protein ions are relatively compact, with average densities of 0.97 g cm-3 and 0.86 g cm-3, respectively. Ions from non-denaturing ESI are found to be slightly more compact than predicted from the protein crystal structures, suggesting that low charge state protein ions in the gas phase are slightly denser than their solution conformations. While a slight difference is detected between the ions produced with non-denaturing and denaturing ESI, the denatured ions are found to be much more dense than those examined previously by drift tube mobility analysis, in which charge reduction was not employed. This indicates that Coulombic stretching is typically what leads to non-compact ions in the gas-phase, and suggests that for gas phase
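
    The density inference is geometric: treat the ion as a sphere of its mobility diameter and divide the known mass by that volume, ρ = m/(π d³/6). A worked example with a mass roughly matching cytochrome C (the diameter is an assumed figure, not a reported measurement):

        import math

        # Spherical-equivalent density from ion mass and mobility diameter:
        #   rho = m / (pi/6 * d^3).
        DALTON = 1.66054e-27            # kg
        m = 12_400 * DALTON             # ion mass, kg (~cytochrome C)
        d = 3.4e-9                      # mobility (volume-equivalent) diameter, m

        rho = m / (math.pi / 6.0 * d**3)       # kg/m^3
        print(f"apparent density: {rho / 1000.0:.2f} g/cm^3")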

  12. Quantitative analysis of low-density SNP data for parentage assignment and estimation of family contributions to pooled samples.

    Science.gov (United States)

    Henshall, John M; Dierens, Leanne; Sellars, Melony J

    2014-09-02

    While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are
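    The quantitative-genotype idea can be sketched as follows: instead of a called genotype, each individual at each SNP carries a probability vector over the three genotypes, and allele frequencies (e.g., for comparison with pooled-DNA estimates) become expected allele dosages. This is an illustrative fragment, not the authors' matrix likelihood formulation.

        import numpy as np

        def expected_allele_freq(genotype_probs):
            # genotype_probs: (n_individuals, 3) array of P(AA), P(AB), P(BB)
            # at one SNP; B-allele dosages are 0, 1 and 2 copies.
            dosages = genotype_probs @ np.array([0.0, 1.0, 2.0])
            return dosages.mean() / 2.0   # estimated B-allele frequency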

  13. Quantitative volcanic susceptibility analysis of Lanzarote and Chinijo Islands based on kernel density estimation via a linear diffusion process

    Science.gov (United States)

    Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.

    2016-06-01

    Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and Chinijo Islands and their submarine flanks, based on updated chronostratigraphical and volcano-structural data, as well as on the geomorphological analysis of the bathymetric data of the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; main faults have been taken into account only in those cases where they could have modified the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures.
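    For readers unfamiliar with the underlying computation, the sketch below shows the generic form of a vent-based susceptibility surface with a fixed Gaussian kernel; the study itself uses kernel density estimation via a linear diffusion process (a boundary-aware, adaptive variant), so this is only a simplified stand-in with synthetic coordinates.

        import numpy as np
        from scipy.stats import gaussian_kde

        vents = np.random.default_rng(0).normal(size=(2, 100))  # placeholder vent coordinates
        kde = gaussian_kde(vents)                 # smooth vents into a density surface
        gx, gy = np.mgrid[-3:3:100j, -3:3:100j]
        susceptibility = kde(np.vstack([gx.ravel(), gy.ravel()]))
        susceptibility /= susceptibility.sum()    # relative spatial probability of vent opening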

  14. Gait analysis, bone and muscle density assessment for patients undergoing total hip arthroplasty

    Directory of Open Access Journals (Sweden)

    Benedikt Magnússon

    2012-12-01

    Full Text Available Total hip arthroplasty (THA) is performed with or without the use of bone cement. Facing the lack of reliable clinical guidelines on deciding whether a patient should receive THA with or without bone cement, a joint clinical and engineering approach is proposed here, with the objective of assessing patient recovery by developing monitoring techniques based on gait analysis, measurements of bone mineral density, and structural and functional changes of the quadriceps muscles. A clinical trial was conducted with 36 volunteer patients undergoing THA surgery for the first time: 18 receiving a cemented implant and 18 receiving a non-cemented implant. The patients are scanned with Computed Tomography (CT) prior to, immediately after, and 12 months post-surgery. The CT data are further processed to segment muscles and bones for calculating bone mineral density (BMD). A Hounsfield unit (HU)-based quadriceps muscle density value is calculated from the segmented file on the healthy and operated leg before and after THA surgery. Furthermore, clinical assessment is performed using gait analysis technologies such as a sensing carpet, wireless electrodes and video. Patients undergo these measurements prior to, 6 weeks post-, and 52 weeks post-surgery. The preliminary results indicate computational tools and methods that are able to quantitatively analyze a patient's condition pre- and post-surgery: spatial parameters such as step length and stride length increase 6 weeks post-op in the patient group receiving a cemented implant, while the angle in the toe in/out parameter decreases in both patient groups.

  15. A Study in the Founding of Applied Behavior Analysis Through Its Publications

    Science.gov (United States)

    Morris, Edward K.; Altus, Deborah E.; Smith, Nathaniel G.

    2013-01-01

    This article reports a study of the founding of applied behavior analysis through its publications. Our methods included (a) hand searches of sources (e.g., journals, reference lists), (b) search terms (i.e., early, applied, behavioral, research, literature), (c) inclusion criteria (e.g., the field's applied dimension), and (d) challenges to their face and content validity. Our results were 36 articles published between 1959 and 1967 that we organized into 4 groups: 12 in 3 programs of research and 24 others. Our discussion addresses (a) limitations in our method (e.g., the completeness of our search), (b) challenges to the validity of our methods and results (e.g., convergent validity), and (c) priority claims about the field's founding. We conclude that the claims are irresolvable because identification of the founding publications depends significantly on methods and because the field's founding was an evolutionary process. We close with suggestions for future research. PMID:25729133

  16. [Statistical analysis of articles in "Chinese journal of applied physiology" from 1999 to 2008].

    Science.gov (United States)

    Du, Fei; Fang, Tao; Ge, Xue-ming; Jin, Peng; Zhang, Xiao-hong; Sun, Jin-li

    2010-05-01

    To evaluate the academic level and influence of "Chinese Journal of Applied Physiology" through a statistical analysis of the fund-sponsored articles published in the last ten years, the articles of the journal from 1999 to 2008 were investigated. The number and percentage of fund-sponsored articles, the funding organizations and the author regions were quantitatively analyzed using literature metrology methods. The number of fund-sponsored articles increased continuously, and the ratio of funding from local government rose significantly in the latter five years. Most of the articles were from institutes located in Beijing, Zhejiang and Tianjin. "Chinese Journal of Applied Physiology" has a fine academic level and social influence.

  17. A study in the founding of applied behavior analysis through its publications.

    Science.gov (United States)

    Morris, Edward K; Altus, Deborah E; Smith, Nathaniel G

    2013-01-01

    This article reports a study of the founding of applied behavior analysis through its publications. Our methods included (a) hand searches of sources (e.g., journals, reference lists), (b) search terms (i.e., early, applied, behavioral, research, literature), (c) inclusion criteria (e.g., the field's applied dimension), and (d) challenges to their face and content validity. Our results were 36 articles published between 1959 and 1967 that we organized into 4 groups: 12 in 3 programs of research and 24 others. Our discussion addresses (a) limitations in our method (e.g., the completeness of our search), (b) challenges to the validity of our methods and results (e.g., convergent validity), and (c) priority claims about the field's founding. We conclude that the claims are irresolvable because identification of the founding publications depends significantly on methods and because the field's founding was an evolutionary process. We close with suggestions for future research.

  18. Response and reliability analysis of nonlinear uncertain dynamical structures by the probability density evolution method

    DEFF Research Database (Denmark)

    Nielsen, Søren R. K.; Peng, Yongbo; Sichani, Mahdi Teimouri

    2016-01-01

    The paper deals with the response and reliability analysis of hysteretic or geometric nonlinear uncertain dynamical systems of arbitrary dimensionality driven by stochastic processes. The approach is based on the probability density evolution method proposed by Li and Chen (Stochastic dynamics of structures, 1st edn. Wiley, London, 2009; Probab Eng Mech 20(1):33–44, 2005), which circumvents the dimensional curse of traditional methods for the determination of non-stationary probability densities based on Markov process assumptions and the numerical solution of the related Fokker–Planck and Kolmogorov–Feller equations. The main obstacle of the method is that a multi-dimensional convolution integral needs to be carried out over the sample space of a set of basic random variables, for which reason the number of these needs to be relatively low. In order to handle this problem an approach is suggested, which...

  19. Hardening and softening analysis of pure titanium based on the dislocation density during torsion deformation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Han; Li, Fuguo, E-mail: fuguolx@nwpu.edu.cn; Li, Jinghui; Ma, Xinkai; Li, Jiang; Wan, Qiong

    2016-08-01

    The hardening and softening phenomena during torsion deformation of pure titanium are studied based on the Taylor dislocation model. The hardening and softening phenomena are observed through hardness analysis in micro-indentation and micro-hardness tests, and the variations of indentation size also verify their existence during torsion. The variations of geometrically necessary dislocations (GNDs) and statistically stored dislocations (SSDs) show that the positions of high and low dislocation density correspond to the positions of hardening and softening, respectively. The results from the microstructure, grain boundary evolution and twin analysis indicate that twins play an important role in the appearance of the hardening and softening phenomena. Combining Schmid factor (SF) analysis with transmission electron microscopy (TEM), the appearance of hardening and softening is attributed to the combination of different slip systems and twinning systems, and can be explained by the Taylor dislocation theory. - Highlights: • The phenomena can be characterized by the Taylor dislocation model. • The variation of GNDs leads to the phenomena. • The phenomena are proved by micro-hardness and indentation hardness. • The {10-12} twin and {11-24} twin play an important role in the phenomena.
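    The Taylor dislocation model invoked here relates flow stress to the square root of the total (SSD + GND) dislocation density. A minimal sketch with order-of-magnitude constants assumed for titanium (they are not taken from this paper):

        import math

        def taylor_flow_stress(rho_total, alpha=0.3, M=3.0, G=44e9, b=2.95e-10):
            # Taylor relation: sigma = M * alpha * G * b * sqrt(rho).
            # rho_total = rho_SSD + rho_GND in m^-2; constants are assumptions.
            return M * alpha * G * b * math.sqrt(rho_total)

        # raising the local density from 1e13 to 1e14 m^-2 roughly triples the
        # flow stress, which is the hardening/softening contrast described above.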

  20. Reliability analysis of protection systems in NPP applying fault-tree analysis method

    International Nuclear Information System (INIS)

    Bokor, J.; Gaspar, P.; Hetthessy, J.; Szabo, G.

    1998-01-01

    This paper demonstrates the applicability and limits of dependability analysis in nuclear power plants (NPPs), based on the reactor protection refurbishment project (RRP) at NPP Paks, and illustrates case studies from the reliability analysis for NPP Paks. It also investigates the solutions for the connection between the data acquisition and subsystem control units (TSs) and the voter units (VTs), analyzes the influence of the voting at the VT computer level, and studies the effects of the testing procedures on the dependability parameters. (author)

  1. Applying behavior analysis to school violence and discipline problems: Schoolwide positive behavior support

    Science.gov (United States)

    Anderson, Cynthia M.; Kincaid, Donald

    2005-01-01

    School discipline is a growing concern in the United States. Educators frequently are faced with discipline problems ranging from infrequent but extreme problems (e.g., shootings) to less severe problems that occur at high frequency (e.g., bullying, insubordination, tardiness, and fighting). Unfortunately, teachers report feeling ill prepared to deal effectively with discipline problems in schools. Further, research suggests that many commonly used strategies, such as suspension, expulsion, and other reactive strategies, are not effective for ameliorating discipline problems and may, in fact, make the situation worse. The principles and technology of behavior analysis have been demonstrated to be extremely effective for decreasing problem behavior and increasing social skills exhibited by school children. Recently, these principles and techniques have been applied at the level of the entire school, in a movement termed schoolwide positive behavior support. In this paper we review the tenets of schoolwide positive behavior support, demonstrating the relation between this technology and applied behavior analysis. PMID:22478439

  2. Common cause evaluations in applied risk analysis of nuclear power plants

    International Nuclear Information System (INIS)

    Taniguchi, T.; Ligon, D.; Stamatelatos, M.

    1983-04-01

    Qualitative and quantitative approaches were developed for the evaluation of common cause failures (CCFs) in nuclear power plants and were applied to the analysis of the auxiliary feedwater systems of several pressurized water reactors (PWRs). Key CCF variables were identified through a survey of experts in the field and a review of failure experience in operating PWRs. These variables were classified into categories of high, medium, and low defense against a CCF. Based on the results, a checklist was developed for analyzing CCFs of systems. Several known techniques for quantifying CCFs were also reviewed. The information provided valuable insights into the development of a new model for estimating CCF probabilities, which is an extension of and improvement over the Beta Factor method. As applied to the analysis of the PWR auxiliary feedwater systems, the method yielded much more realistic values than the original Beta Factor method for a one-out-of-three system
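    The Beta Factor idea that the new model extends can be sketched in a few lines: a fraction beta of a component's failure rate is assumed to fail all redundant trains at once. The report's extended model is not reproduced here.

        def beta_factor_split(lam_total, beta):
            # Partition a component failure rate (per hour) into an
            # independent part and a common cause part.
            lam_ccf = beta * lam_total
            lam_indep = (1.0 - beta) * lam_total
            return lam_indep, lam_ccf

        # For highly redundant trains the CCF term usually dominates system
        # unavailability, since independent failure probabilities multiply together.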

  3. Residual analysis applied to S-N data of a surface rolled cast iron

    Directory of Open Access Journals (Sweden)

    Omar Maluf

    2005-09-01

    Full Text Available Surface rolling is a process extensively employed in the manufacture of ductile cast iron crankshafts, specifically in regions containing stress concentrators, with the main aim of enhancing fatigue strength. The process hardens the surface and introduces compressive residual stresses as a result of controlled strains, reducing cyclic tensile stresses near the surface of the part. The main purpose of this work was to apply residual analysis to check the suitability of the S-N approach for describing the fatigue properties of a surface rolled cast iron. The analysis procedure proved to be very efficient and easy to implement, and it can be applied in the verification of any other statistical model used to describe fatigue behavior. Results show that the conventional S-N methodology is able to model the high cycle fatigue behavior of surface rolled notched testpieces of a pearlitic ductile cast iron submitted to rotating bending fatigue tests.
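    A hedged sketch of the S-N residual check described above: fit the Basquin-type line log10(N) = a + b*log10(S) by least squares and inspect the residuals; random scatter about zero supports the model, while a systematic trend does not. The data values are illustrative, not the paper's measurements.

        import numpy as np

        S = np.array([260.0, 240.0, 220.0, 200.0, 190.0])   # stress amplitude, MPa
        N = np.array([8.1e4, 2.3e5, 7.9e5, 2.6e6, 6.8e6])   # cycles to failure

        A = np.column_stack([np.ones_like(S), np.log10(S)])
        coef, *_ = np.linalg.lstsq(A, np.log10(N), rcond=None)
        residuals = np.log10(N) - A @ coef   # plot against S to check model adequacy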

  4. Laboratory Performance of Five Selected Soil Moisture Sensors Applying Factory and Own Calibration Equations for Two Soil Media of Different Bulk Density and Salinity Levels

    Science.gov (United States)

    Matula, Svatopluk; Báťková, Kamila; Legese, Wossenu Lemma

    2016-01-01

    Non-destructive soil water content determination is a fundamental component of many agricultural and environmental applications. The accuracy and costs of the sensors define the measurement scheme and the ability to fit natural heterogeneous conditions. The aim of this study was to evaluate five commercially available and relatively cheap sensors, usually grouped with impedance and FDR sensors. ThetaProbe ML2x (impedance) and ECH2O EC-10, ECH2O EC-20, ECH2O EC-5, and ECH2O TE (all FDR) were tested on silica sand and loess of defined characteristics under controlled laboratory conditions. The calibrations were carried out at nine consecutive soil water contents, from dry to saturated conditions, with both pure and saline water. The gravimetric method was used as the reference method for the statistical evaluation (ANOVA with significance level 0.05). Generally, the results showed that our own calibrations led to more accurate soil moisture estimates. Variance component analysis ranked the factors contributing to the total variation as follows: calibration (42%), sensor type (29%), material (18%), and dry bulk density (11%). All the tested sensors performed very well within the whole range of water content, especially the sensors ECH2O EC-5 and ECH2O TE, which also performed surprisingly well in saline conditions. PMID:27854263

  5. Applying circular economy innovation theory in business process modeling and analysis

    Science.gov (United States)

    Popa, V.; Popa, L.

    2017-08-01

    The overall aim of this paper is to develop a new conceptual framework for business process modeling and analysis using circular economy innovation theory as a source for business knowledge management. The last part of the paper presents the authors' proposed basic structure for new business models applying circular economy innovation theories. For people working on new innovative business models in the field of the circular economy, this paper provides new ideas for clustering their concepts.

  6. Applied behavior analysis as intervention for autism: definition, features and philosophical concepts

    Directory of Open Access Journals (Sweden)

    Síglia Pimentel Höher Camargo

    2013-11-01

    Full Text Available Autism spectrum disorder (ASD) is a lifelong pervasive developmental disorder with no known cause or cure. However, educational and behavioral interventions with a foundation in applied behavior analysis (ABA) have been shown to improve a variety of skill areas, such as the communication, social, academic, and adaptive behaviors of individuals with ASD. The goal of this work is to present the definition, features and philosophical concepts that underlie ABA and make this science an effective intervention method for people with autism.

  7. Assimilation of tourism satellite accounts and applied general equilibrium models to inform tourism policy analysis

    OpenAIRE

    Rossouw, Riaan; Saayman, Melville

    2011-01-01

    Historically, tourism policy analysis in South Africa has posed challenges to accurate measurement. The primary reason for this is that tourism is not designated as an 'industry' in standard economic accounts. This paper therefore demonstrates the relevance and need for applied general equilibrium (AGE) models to be completed and extended through an integration with tourism satellite accounts (TSAs) as a tool for policy makers (especially tourism policy makers) in South Africa. The paper sets...

  8. Finite mixture model applied in the analysis of a turbulent bistable flow on two parallel circular cylinders

    Energy Technology Data Exchange (ETDEWEB)

    Paula, A.V. de, E-mail: vagtinski@mecanica.ufrgs.br [PROMEC – Programa de Pós Graduação em Engenharia Mecânica, UFRGS – Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil); Möller, S.V., E-mail: svmoller@ufrgs.br [PROMEC – Programa de Pós Graduação em Engenharia Mecânica, UFRGS – Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil)

    2013-11-15

    This paper presents a study of the bistable phenomenon which occurs in the turbulent flow impinging on circular cylinders placed side-by-side. Time series of axial and transversal velocity obtained with the constant temperature hot wire anemometry technique in an aerodynamic channel are used as input data in a finite mixture model, to classify the observed data according to a family of probability density functions. Wavelet transforms are applied to analyze the unsteady turbulent signals. Results of flow visualization show that the flow is predominantly two-dimensional. A double-well energy model is suggested to describe the behavior of the bistable phenomenon in this case. -- Highlights: ► Bistable flow on two parallel cylinders is studied with hot wire anemometry as a first step toward applying the analysis to tube bank flow. ► The method of maximum likelihood estimation is applied to hot wire experimental series to classify the data according to PDFs in a mixture model approach. ► Results show no evident correlation between the changes of flow modes with time. ► An energy model suggests the presence of more than two flow modes.
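    The classification step described above can be reproduced in outline with a two-component Gaussian mixture fitted by maximum likelihood (EM); each component then corresponds to one candidate flow mode of the bistable wake. Synthetic data stand in for the hot-wire series.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        u = np.concatenate([rng.normal(2.0, 0.3, 5000),     # flow mode 1
                            rng.normal(3.5, 0.4, 5000)])    # flow mode 2

        gmm = GaussianMixture(n_components=2).fit(u.reshape(-1, 1))
        mode_labels = gmm.predict(u.reshape(-1, 1))  # sample-by-sample mode assignment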

  9. Classical linear-control analysis applied to business-cycle dynamics and stability

    Science.gov (United States)

    Wingrove, R. C.

    1983-01-01

    Linear control analysis is applied as an aid in understanding the fluctuations of business cycles in the past, and to examine monetary policies that might improve stabilization. The analysis shows how different policies change the frequency and damping of the economic system dynamics, and how they modify the amplitude of the fluctuations that are caused by random disturbances. Examples are used to show how policy feedbacks and policy lags can be incorporated, and how different monetary strategies for stabilization can be analytically compared. Representative numerical results are used to illustrate the main points.

  10. The Applied Behavior Analysis Research Paradigm and Single-Subject Designs in Adapted Physical Activity Research.

    Science.gov (United States)

    Haegele, Justin A; Hodge, Samuel Russell

    2015-10-01

    There are basic philosophical and paradigmatic assumptions that guide scholarly research endeavors, including the methods used and the types of questions asked. Through this article, kinesiology faculty and students with interests in adapted physical activity are encouraged to understand the basic assumptions of applied behavior analysis (ABA) methodology for conducting, analyzing, and presenting research of high quality in this paradigm. The purposes of this viewpoint paper are to present information fundamental to understanding the assumptions undergirding research methodology in ABA, describe key aspects of single-subject research designs, and discuss common research designs and data-analysis strategies used in single-subject studies.

  11. Improving skill development: an exploratory study comparing a philosophical and an applied ethical analysis technique

    Science.gov (United States)

    Al-Saggaf, Yeslam; Burmeister, Oliver K.

    2012-09-01

    This exploratory study compares and contrasts two types of critical thinking techniques; one is a philosophical and the other an applied ethical analysis technique. The two techniques analyse an ethically challenging situation involving ICT that a recent media article raised to demonstrate their ability to develop the ethical analysis skills of ICT students and professionals. In particular the skill development focused on includes: being able to recognise ethical challenges and formulate coherent responses; distancing oneself from subjective judgements; developing ethical literacy; identifying stakeholders; and communicating ethical decisions made, to name a few.

  12. On the use of the term 'frequency' in applied behavior analysis.

    Science.gov (United States)

    Carr, James E; Nosik, Melissa R; Luke, Molli M

    2018-04-01

    There exists a terminological problem in applied behavior analysis: the term frequency has been used as a synonym for both rate (the number of responses per time) and count (the number of responses). To guide decisions about the use and meaning of frequency, we surveyed the usage of frequency in contemporary behavior-analytic journals and textbooks and found that the predominant usage of frequency was as count, not rate. Thus, we encourage behavior analysts to use frequency as a synonym for count. © 2018 Society for the Experimental Analysis of Behavior.

  13. The x-rays fluorescence applied to the analysis of alloys

    International Nuclear Information System (INIS)

    Gutierrez, D.A.

    1997-01-01

    This work is based on X-ray fluorescence, a non-destructive testing technique, with the purpose of establishing a routine method for controlling the composition of industrial alloy samples. The analysis combines the Rasberry-Heinrich and Claisse-Thinh algorithms, together with the numerical implementation of techniques unusual in this type of analysis, such as linear programming applied to the solution of overdetermined systems of equations and relaxation methods to facilitate convergence to the solutions. (author)
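    The overdetermined-system aspect can be illustrated in one step. With more measured fluorescence intensities than unknown concentrations, an ordinary least-squares solve is one standard approach; the paper itself uses linear programming and relaxation methods, and the matrix values below are purely illustrative.

        import numpy as np

        K = np.array([[1.00, 0.12],     # calibration matrix: 3 measured lines,
                      [0.08, 1.00],     # 2 unknown concentrations
                      [0.95, 0.20]])
        I_meas = np.array([0.45, 0.60, 0.52])   # measured relative intensities

        c, *_ = np.linalg.lstsq(K, I_meas, rcond=None)  # least-squares concentrations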

  14. Finite Element Analysis of Stress Distribution in Three Commonly Used Implant Systems in D2 and D4 Bone Densities

    Directory of Open Access Journals (Sweden)

    C Radha

    2016-01-01

    Materials and Methods: Pro/ENGINEER 3.0 software was used to create the geometric models of the three implant systems (Nobel Biocare, Biohorizon, Adin) and two bone densities (D2 and D4). Six 3D models were created to simulate each of the three implant systems supporting a metal-ceramic crown placed in the two different bone densities, D2 and D4. The Poisson's ratio (μ) and Young's modulus (E) of elasticity were assigned to the different materials used for the models. Vertical and oblique loads of 450 N each were applied on all six models. Von Mises stress analysis was done with ANSYS software. Results: Von Mises stresses were higher within D4 type bone than D2 type for all three implant systems, and lower stresses were seen in the Biohorizon implant, followed by Nobel Biocare and Adin, particularly in D4 bone. Conclusion: The study concluded that the selection of a particular implant system should be based on scientific research rather than on popularity.

  15. Fatigue Analysis of Tubesheet/Shell Juncture Applying the Mitigation Factor for Over-conservatism

    International Nuclear Information System (INIS)

    Kang, Deog Ji; Kim, Kyu Hyoung; Lee, Jae Gon

    2009-01-01

    If the environmental fatigue requirements are applied to the primary components of a nuclear power plant, to which the present ASME Code fatigue curves are applied, some locations with a high CUF (Cumulative Usage Factor) are anticipated not to meet the code criteria. The application of environmental fatigue damage is still particularly controversial for plants with 60-year design lives. Therefore, it is necessary to develop a detailed fatigue analysis procedure to identify the conservatisms in the procedure and to lower the cumulative usage factor. Several factors are being considered to mitigate the conservatism, such as three-dimensional finite element modeling. In the present analysis, actual pressure transient data, instead of conservative maximum and minimum pressure data, were applied as one of the mitigation factors. Unlike in the general method, individual transient events were considered instead of grouped transient events. The tubesheet/shell juncture in the steam generator assembly is one of the weak locations and was therefore selected as a target to evaluate the mitigation factor in the present analysis.
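    The quantity being reduced is the Miner-type cumulative usage factor; a minimal sketch of its definition follows. Replacing bounding pressure ranges with actual transient data lowers the alternating stress of each event pair, raises each allowable cycle count, and hence lowers the CUF.

        def cumulative_usage_factor(cycle_counts, allowable_cycles):
            # CUF = sum over transient pairs of n_i / N_i, where n_i is the
            # expected number of cycles and N_i the allowable cycles from the
            # design fatigue curve; code acceptance requires CUF <= 1.
            return sum(n / N for n, N in zip(cycle_counts, allowable_cycles))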

  16. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    Science.gov (United States)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.

  17. Ongoing Analysis of Rocket Based Combined Cycle Engines by the Applied Fluid Dynamics Analysis Group at Marshall Space Flight Center

    Science.gov (United States)

    Ruf, Joseph; Holt, James B.; Canabal, Francisco

    1999-01-01

    This paper presents the status of analyses on three Rocket Based Combined Cycle configurations underway in the Applied Fluid Dynamics Analysis Group (TD64). TD64 is performing computational fluid dynamics analysis on a Penn State RBCC test rig, the proposed Draco axisymmetric RBCC engine and the Trailblazer engine. The intent of the analysis on the Penn State test rig is to benchmark the Finite Difference Navier Stokes code for ejector mode fluid dynamics. The Draco engine analysis is a trade study to determine the ejector mode performance as a function of three engine design variables. The Trailblazer analysis is to evaluate the nozzle performance in scramjet mode. Results to date of each analysis are presented.

  18. PRO-ELICERE: A Hazard Analysis Automation Process Applied to Space Systems

    Directory of Open Access Journals (Sweden)

    Tharcius Augusto Pivetta

    2016-07-01

    Full Text Available In the last decades, critical systems have increasingly been developed using computers and software, even in the space domain, where the project approach is usually very conservative. In projects for rockets, satellites and their facilities, such as ground support systems, simulators, and other critical operations for the space mission, a hazard analysis must be applied. The ELICERE process was created to perform hazard analysis mainly of critical computer systems, in order to define or evaluate their safety and dependability requirements, strongly based on the Hazards and Operability Study and Failure Mode and Effect Analysis techniques. It aims to improve the project design or to understand the potential hazards of existing systems, improving their functions related to functional or non-functional requirements. The main goal of the ELICERE process is thus to ensure the safety and dependability goals of a space mission. The process was initially designed to be performed manually, in a gradual way. A software tool called PRO-ELICERE has now been developed to facilitate the analysis process and store the results for reuse in other system analyses. To illustrate how ELICERE and its tool work, a small space case study was carried out, based on a hypothetical rocket of the Cruzeiro do Sul family, developed by the Instituto de Aeronáutica e Espaço in Brazil.

  19. Sensitivity and uncertainty analysis applied to a repository in rock salt

    International Nuclear Information System (INIS)

    Polle, A.N.

    1996-12-01

    This document describes the sensitivity and uncertainty analysis with UNCSAM, as applied to a repository in rock salt for the EVEREST project. UNCSAM is a dedicated software package for sensitivity and uncertainty analysis, which was already used within the preceding PROSA project. UNCSAM provides a flexible interface to EMOS ECN by substituting the sampled values into the various input files to be used by EMOS ECN; the model calculations for this repository were performed with the EMOS ECN code. Preceding the sensitivity and uncertainty analysis, a number of preparations were carried out to supply EMOS ECN with the probabilistic input data, and for post-processing the EMOS ECN results the characteristic output signals were processed. For the sensitivity and uncertainty analysis with UNCSAM, the stochastic input, i.e. the sampled values, and the output of the various EMOS ECN runs have been analyzed. (orig.)

  20. The evolution of applied harmonic analysis models of the real world

    CERN Document Server

    Prestini, Elena

    2016-01-01

    A sweeping exploration of the development and far-reaching applications of harmonic analysis such as signal processing, digital music, optics, radio astronomy, crystallography, medical imaging, spectroscopy, and more. Featuring a wealth of illustrations, examples, and material not found in other harmonic analysis books, this unique monograph skillfully blends together historical narrative with scientific exposition to create a comprehensive yet accessible work. While only an understanding of calculus is required to appreciate it, there are more technical sections that will charm even specialists in harmonic analysis. From undergraduates to professional scientists, engineers, and mathematicians, there is something for everyone here. The second edition of The Evolution of Applied Harmonic Analysis contains a new chapter on atmospheric physics and climate change, making it more relevant for today’s audience. Praise for the first edition: "…can be thoroughly recommended to any reader who is curious about the ...

  1. Neutron activation analysis as applied to instrumental analysis of trace elements from seawater

    International Nuclear Information System (INIS)

    Boniforti, R.; Moauro, A.; Madaro, M.

    1983-01-01

    Particulate matter collected from the coastal area delimited by the mouth of the river Volturno and the Sabaudia lake has been analyzed by instrumental neutron activation analysis for its content of twenty-two trace elements. The results for surface water and bottom water are reported separately, thus evidencing the effect of sampling depth on the concentration of many elements. The necessity of accurately 'cleaning' the filters before use is stressed

  2. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    Science.gov (United States)

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected

  3. Analysis of local dislocation densities in cold-rolled alloy 690 using transmission electron microscopy

    International Nuclear Information System (INIS)

    Ahn, Tae-Young; Kim, Sung Woo; Hwang, Seong Sik

    2016-01-01

    Service failure of alloy 690 in NPPs has not been reported. However, some research groups have reported that primary water stress corrosion cracking (PWSCC) occurred in severely cold-rolled alloy 690, and transgranular cracking has also been reported in cold-rolled alloy 690 with a banded structure. In order to understand the effect of cold rolling on the cracking of alloy 690, many research groups have focused on the local strain and the cracked carbides induced by cold rolling, using electron backscatter diffraction (EBSD). Transmission electron microscopy (TEM) has been widely used to characterize structural materials because this technique has superior spatial resolution and allows for the analysis of crystallographic and chemical information. The aim of the present study is to understand the mechanism of the abnormally high crack growth rate (CGR) in cold-rolled alloy 690 with a banded structure. The local dislocation density was directly measured by TEM to confirm the effects of local strain on the stress corrosion cracking (SCC) of alloy 690 with a banded structure, and the effects of intragranular carbides on the SCC were also evaluated. The dislocation densities in the interior of the grains sharply increased in highly cold-rolled specimens due to intragranular carbides, which acted as dislocation sources

  4. A statistical analysis of the elastic distortion and dislocation density fields in deformed crystals

    KAUST Repository

    Mohamed, Mamdouh S.

    2015-05-18

    The statistical properties of the elastic distortion fields of dislocations in deforming crystals are investigated using the method of discrete dislocation dynamics to simulate dislocation structures and dislocation density evolution under tensile loading. Probability distribution functions (PDF) and pair correlation functions (PCF) of the simulated internal elastic strains and lattice rotations are generated for tensile strain levels up to 0.85%. The PDFs of simulated lattice rotation are compared with sub-micrometer resolution three-dimensional X-ray microscopy measurements of rotation magnitudes and deformation length scales in 1.0% and 2.3% compression strained Cu single crystals to explore the linkage between experiment and the theoretical analysis. The statistical properties of the deformation simulations are analyzed through determinations of the Nye and Kröner dislocation density tensors. The significance of the magnitudes and the length scales of the elastic strain and the rotation parts of dislocation density tensors are demonstrated, and their relevance to understanding the fundamental aspects of deformation is discussed.

  5. Seismic and structural analysis of high density/consolidated spent fuel storage racks

    International Nuclear Information System (INIS)

    Shah, S.J.; Biddle, J.R.; Bennett, S.M.; Schechter, C.B.; Harstead, G.A.; Kopecky, B.

    1995-01-01

    In many nuclear power plants, existing storage racks are being replaced with high-density racks to accommodate the increasing inventory of spent fuel. In the hypothetical design considered here, the high-density arrangement of fuel assemblies, or consolidated fuel canisters, is accomplished through the use of borated stainless steel (BSS) plates acting as neutron absorbers. The high-density fuel racks are simply supported by the pool floor with no structural connections to adjacent racks or to the pool walls or floor. Therefore, the racks are free standing and may slide and tip. Several time history, nonlinear, seismic analyses are required to account for variations in the coefficient of friction, rack loading configuration, and the type of the seismic event. This paper presents several of the mathematical models usually used. The models include features to allow sliding and tipping of the racks and to represent the hydrodynamic coupling which can occur between fuel assemblies and rack cells, between adjacent racks, and between the racks and the reinforced concrete walls. A detailed model representing a single rack is used to evaluate the 3-D loading effects. This model is a controlling case for the stress analysis. A 2-D multi-rack model representing a row of racks between the spent fuel pool walls is used to evaluate the change in gaps between racks. The racks are analyzed for the fuel loading conditions of consolidated, full, empty, and half-loaded with fuel assemblies

  6. A statistical analysis of the elastic distortion and dislocation density fields in deformed crystals

    KAUST Repository

    Mohamed, Mamdouh S.; Larson, Ben C.; Tischler, Jon Z.; El-Azab, Anter

    2015-01-01

    The statistical properties of the elastic distortion fields of dislocations in deforming crystals are investigated using the method of discrete dislocation dynamics to simulate dislocation structures and dislocation density evolution under tensile loading. Probability distribution functions (PDF) and pair correlation functions (PCF) of the simulated internal elastic strains and lattice rotations are generated for tensile strain levels up to 0.85%. The PDFs of simulated lattice rotation are compared with sub-micrometer resolution three-dimensional X-ray microscopy measurements of rotation magnitudes and deformation length scales in 1.0% and 2.3% compression strained Cu single crystals to explore the linkage between experiment and the theoretical analysis. The statistical properties of the deformation simulations are analyzed through determinations of the Nye and Kröner dislocation density tensors. The significance of the magnitudes and the length scales of the elastic strain and the rotation parts of dislocation density tensors are demonstrated, and their relevance to understanding the fundamental aspects of deformation is discussed.

  7. Theoretical transport analysis of density limit with radial electric field in helical plasmas

    International Nuclear Information System (INIS)

    Toda, S.; Itoh, K.

    2010-11-01

    The confinement property in helical toroidal plasmas is clarified. The analysis is performed by use of the one-dimensional transport equations with the effect of the radiative loss and the radial profile of the electric field. The analytical results in the edge region show the steep gradient in the electron temperature, which indicates the transport barrier formation. Because of the rapid increase of the radiative loss at the low electron temperature, the anomalous heat diffusivity is reduced near the edge. Next, the efficiency of the heating power input in the presence of the radiative loss is studied. The scaling of the critical density in helical devices is also derived. (author)

  8. Accident Analysis of High Density Storage Rack for Fresh Fuel Assemblies

    International Nuclear Information System (INIS)

    Jang, K. J.; Lee, M. J.; Jin, H. U.; Park, J. H.; Shin, S. Y.

    2009-01-01

    Recently KONES and KNF have developed the so-called suspension-type High Density Storage Rack (HDSR) for fresh fuel assemblies. The USNRC OT position paper specifies that the design of the rack must ensure the functional integrity of the fuel racks under all credible fuel assembly drop events. In this context, functional integrity means criticality safety; that is, the drop events must not endanger the criticality safety of the HDSR. This paper shows the results of the analysis carried out to demonstrate the regulatory compliance of the proposed racks under postulated accidental drop events

  9. Breast cancer research output, 1945-2008: a bibliometric and density-equalizing analysis

    LENUS (Irish Health Repository)

    Glynn, Ronan W

    2010-12-22

    Abstract Introduction Breast cancer is the most common form of cancer among women, with an estimated 194,280 new cases diagnosed in the United States in 2009 alone. The primary aim of this work was to provide an in-depth evaluation of research yield in breast cancer from 1945 to 2008, using large-scale data analysis, the employment of bibliometric indicators of production and quality, and density-equalizing mapping. Methods Data were retrieved from the Web of Science (WOS) Science Citation Expanded database; this was searched using the Boolean operator 'OR'

  10. Criticality analysis of thermal reactors for two energy groups applying Monte Carlo and neutron Albedo method

    International Nuclear Information System (INIS)

    Terra, Andre Miguel Barge Pontes Torres

    2005-01-01

    The Albedo method applied to criticality calculations for nuclear reactors is characterized by following the neutron currents, allowing detailed analyses of the physical phenomena of neutron interaction with the core-reflector set through the determination of the probabilities of reflection, absorption and transmission, and hence detailed assessments of the variation of the effective neutron multiplication factor, keff. In the present work, motivated by the excellent results presented in dissertations applied to thermal reactors and shielding, the Albedo methodology for the criticality analysis of thermal reactors is described using two energy groups, admitting variable core coefficients for each re-entrant current. Using the Monte Carlo KENO IV code, the relation between the total fraction of neutrons absorbed in the reactor core and the fraction of neutrons that never entered the reflector but were absorbed in the core was analyzed. The one-dimensional deterministic code ANISN (ANIsotropic SN transport code) and the diffusion method were used as references for comparison and analysis of the results obtained by the Albedo method. The keff results determined by the Albedo method for the type of reactor analyzed showed excellent agreement, with relative errors in keff smaller than 0.78% with respect to the ANISN code and smaller than 0.35% with respect to the diffusion method, showing the effectiveness of the Albedo method applied to criticality analysis. The easiness of application, simplicity and clarity of the Albedo method constitute a valuable instrument for neutronic calculations applied to nonmultiplying and multiplying media. (author)

  11. New approach to detect and classify stroke in skull CT images via analysis of brain tissue densities.

    Science.gov (United States)

    Rebouças Filho, Pedro P; Sarmento, Róger Moura; Holanda, Gabriel Bandeira; de Alencar Lima, Daniel

    2017-09-01

    Cerebral vascular accident (CVA), also known as stroke, is an important health problem that affects 16 million people worldwide every year. About 30% of those who have a stroke die and 40% remain with serious physical limitations. However, recovery in the damaged region is possible if treatment is performed immediately. In the case of a stroke, Computed Tomography (CT) is the most appropriate technique to confirm the occurrence and to investigate its extent and severity. Stroke is an emergency problem for which early identification and measures are difficult; however, computer-aided diagnosis (CAD) can play an important role in obtaining information imperceptible to the human eye. Thus, this work proposes a new method for extracting features based on radiological density patterns of the brain, called Analysis of Brain Tissue Density (ABTD). The proposed method is a specific approach applied to CT images to identify and classify the occurrence of stroke diseases. The results of the proposed ABTD extractor were compared with extractors already established in the literature, such as features from the Gray-Level Co-Occurrence Matrix (GLCM), Local Binary Patterns (LBP), Central Moments (CM), Statistical Moments (SM), Hu's Moments (HM) and Zernike's Moments (ZM). Using a database of 420 CT images of the skull, each extractor was applied with classifiers such as MLP, SVM, kNN, OPF and Bayesian to classify whether a CT image represented a healthy brain or one with an ischemic or hemorrhagic stroke. ABTD had the shortest extraction time and the highest average accuracy (99.30%) when combined with OPF using the Euclidean distance, and the average accuracy values for all classifiers were higher than 95%. The relevance of the results demonstrated that the ABTD method is a useful algorithm for extracting features that can potentially be integrated with CAD systems to assist in stroke diagnosis.
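    The flavour of a density-pattern feature extractor can be sketched as follows: histogram the brain voxels over Hounsfield-unit bands and use the band fractions as features (ischemia shifts mass toward lower HU, haemorrhage toward higher). The band edges and the brain window below are hypothetical; this is not the published ABTD algorithm.

        import numpy as np

        def density_band_features(ct_hu, bins=(0, 20, 40, 60, 80, 100)):
            # Fraction of brain-range voxels in each HU band of one CT image.
            brain = ct_hu[(ct_hu > 0) & (ct_hu < 100)]   # crude brain-tissue window
            hist, _ = np.histogram(brain, bins=bins)
            return hist / max(brain.size, 1)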

  12. Harmonic analysis of the ionospheric electron densities retrieved from FORMOSAT-3/COSMIC radio occultation measurements

    Science.gov (United States)

    Masoumi, S.; Safari, A.; Sharifi, M.; Sam Khaniani, A.

    2011-12-01

    In order to investigate regular variations of the ionosphere, least-squares harmonic estimation is applied to the time series of ionospheric electron densities in the region of Iran derived from about five years of Global Positioning System Radio Occultation (GPS RO) observations by the FORMOSAT-3/COSMIC satellites. Although the obtained results differ slightly from the expected ones due to the low horizontal resolution of RO measurements, the high vertical resolution of the observations enables us to detect not only the Total Electron Content (TEC) variations, but also periodic patterns of electron densities at different altitudes of the ionosphere. Dominant diurnal and annual signals, together with their Fourier series decompositions, and also periods close to 27 days are obtained, which is consistent with previous analyses of TEC. In the equatorial anomaly band, the annual component is weaker than its Fourier decomposition periods. In particular, the semiannual period dominates the annual component, which is probably due to the effect of the geomagnetic field. Investigation of the frequencies at different local times shows that the semiannual signal is more significant than the annual one in the daytime, while the annual frequency is dominant at night. The phases of the components reveal that the annual signal has its maximum in summer at high altitudes and in winter at lower altitudes, suggesting the effect of neutral compositions in the lower atmosphere. Further, the semiannual component peaks around equinox during the day, while its maximum mostly occurs at solstice at night. Since RO measurements can be used to derive TEC along the signal path between a GPS satellite and a receiver, studying the potential of using these observations for the prediction of electron densities, and their application to the ionospheric correction of single-frequency receivers, is suggested.
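    Least-squares harmonic estimation amounts to regressing the series on cosine/sine pairs at candidate periods and inspecting the recovered amplitudes. A minimal sketch, with candidate periods chosen to echo the diurnal, annual and ~27-day signals discussed above:

        import numpy as np

        def harmonic_amplitudes(t, y, periods=(1.0, 365.25, 27.0)):
            # t in days, y the electron-density series; returns one amplitude
            # per candidate period from an ordinary least-squares fit.
            cols = [np.ones_like(t)]
            for P in periods:
                w = 2.0 * np.pi / P
                cols += [np.cos(w * t), np.sin(w * t)]
            A = np.column_stack(cols)
            x, *_ = np.linalg.lstsq(A, y, rcond=None)
            return [np.hypot(x[1 + 2 * i], x[2 + 2 * i]) for i in range(len(periods))]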

  13. Measurements and analysis of the noise spectral density of YBCO films

    International Nuclear Information System (INIS)

    Taoufik, A.; Bghour, M.; Labrag, A.; Bouaaddi, A.; Abaragh, A.; Ramzi, A.; Senoussi, S.; Tirbiyine, A.

    2006-01-01

    We studied the dependence of the voltage noise spectral density S_V(f, T, H) on frequency f, temperature T and applied magnetic field H in YBa2Cu3O7-δ films. As a function of frequency, S_V(f) exhibits 1/f behavior following a Lorentzian-type shape A/[1 + (πf/f_0)^B]^C, where A, f_0, B and C are constants which are determined and compared for several values of H. The influence of temperature and magnetic field is clearly observed: as a function of temperature, S_V(T) is found to vanish at T_g, the vortex-glass transition temperature, following a (T − T_g)^x law. We interpret these results in terms of the formation of a broad distribution of vortex-glass domains. Approaching T_g, these domains grow in size and lifetime, while at T_g those parameters diverge.

  14. Analysis of Observation Data of Earth-Rockfill Dam Based on Cloud Probability Distribution Density Algorithm

    Directory of Open Access Journals (Sweden)

    Han Liwei

    2014-07-01

    Full Text Available Monitoring data on an earth-rockfill dam constitute a form of spatial data. Such data include much uncertainty owing to limitations in the measurement information, material parameters, load, geometry, initial conditions, boundary conditions and the calculation model, so the cloud probability density of the monitoring data must be addressed. In this paper, the cloud theory model was used to address the uncertainty transition between the qualitative concept and the quantitative description, and an improved algorithm for the cloud probability distribution density based on a backward cloud generator was proposed. This was used to effectively convert parcels of accurate data into concepts which can be described by proper qualitative linguistic values. Such a qualitative description is expressed by the cloud numerical characteristics {Ex, En, He}, which can represent the characteristics of all cloud drops. The algorithm was then applied to analyze the observation data of a piezometric tube in an earth-rockfill dam, and the experimental results proved that the proposed algorithm is feasible and can reveal the changing regularity of the piezometric tube's water level and detect seepage damage in the dam body.
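    The backward cloud generator at the heart of the algorithm has a compact standard form, sketched below for the basic first-order case; the paper's improved variant is not reproduced here.

        import numpy as np

        def backward_cloud(x):
            # Estimate the cloud numerical characteristics {Ex, En, He}
            # from a sample x of quantitative monitoring data.
            Ex = x.mean()                                      # expectation
            En = np.sqrt(np.pi / 2.0) * np.abs(x - Ex).mean()  # entropy
            He = np.sqrt(max(x.var(ddof=1) - En ** 2, 0.0))    # hyper-entropy
            return Ex, En, He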

  15. Density-viscosity product of small-volume ionic liquid samples using quartz crystal impedance analysis.

    Science.gov (United States)

    McHale, Glen; Hardacre, Chris; Ge, Rile; Doy, Nicola; Allen, Ray W K; MacInnes, Jordan M; Bown, Mark R; Newton, Michael I

    2008-08-01

    Quartz crystal impedance analysis has been developed as a technique to assess whether room-temperature ionic liquids are Newtonian fluids and as a small-volume method for determining the values of their viscosity-density product, ρη. Changes in the impedance spectrum of a 5-MHz fundamental frequency quartz crystal induced by a water-miscible room-temperature ionic liquid, 1-butyl-3-methylimidazolium trifluoromethylsulfonate ([C4mim][OTf]), were measured. From coupled frequency shift and bandwidth changes as the concentration was varied from 0 to 100% ionic liquid, it was determined that this liquid gives a Newtonian response. A second, water-immiscible ionic liquid, 1-butyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide [C4mim][NTf2], with concentration varied using methanol, was tested and also found to give a Newtonian response. In both cases, the values of the square root of the viscosity-density product deduced from the small-volume quartz crystal technique were consistent with those measured using a viscometer and density meter. The third harmonic of the crystal was found to provide the closest agreement between the two measurement methods; the pure ionic liquids had the largest difference, of approximately 10%. In addition, 18 pure ionic liquids were tested, and for 11 of these good-quality frequency shift and bandwidth data were obtained; these all had a Newtonian response. The frequency shift of the third harmonic was found to vary linearly with the square root of the viscosity-density product of the pure ionic liquids up to a value of √(ρη) ≈ 18 kg m(-2) s(-1/2), but with a slope 10% smaller than that predicted by the Kanazawa and Gordon equation. It is envisaged that the quartz crystal technique could be used in a high-throughput microfluidic system for characterizing ionic liquids.
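    The inversion behind the reported √(ρη) values can be sketched from the Kanazawa and Gordon relation, which for a Newtonian liquid at the nth harmonic of a crystal with fundamental frequency f0 gives Δf_n = -√n · f0^(3/2) · √(ρη/(π·ρq·μq)). A minimal sketch using standard AT-cut quartz constants:

        import math

        RHO_Q = 2648.0     # quartz density, kg m^-3
        MU_Q = 2.947e10    # quartz shear stiffness, Pa

        def sqrt_rho_eta(delta_f, f0=5e6, n=3):
            # Invert Kanazawa-Gordon for sqrt(rho*eta), in kg m^-2 s^-1/2;
            # delta_f is the (negative) liquid-induced shift at harmonic n.
            return -delta_f * math.sqrt(math.pi * RHO_Q * MU_Q) / (math.sqrt(n) * f0 ** 1.5)

        # sanity check: water gives a shift of about -714 Hz at the 5 MHz
        # fundamental, i.e. sqrt(rho*eta) of about 1 kg m^-2 s^-1/2.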

  16. How to: Using Mode Analysis to Quantify, Analyze, and Interpret the Mechanisms of High-Density Collective Motion

    Directory of Open Access Journals (Sweden)

    Arianna Bottinelli

    2017-12-01

    Full Text Available While methods from statistical mechanics were some of the earliest analytical tools used to understand collective motion, the field has substantially expanded in scope beyond phase transitions and fluctuating order parameters. In part, this expansion is driven by the increasing variety of systems being studied, which in turn, has increased the need for innovative approaches to quantify, analyze, and interpret a growing zoology of collective behaviors. For example, concepts from material science become particularly relevant when considering the collective motion that emerges at high densities. Here, we describe methods originally developed to study inert jammed granular materials that have been borrowed and adapted to study dense aggregates of active particles. This analysis is particularly useful because it projects difficult-to-analyze patterns of collective motion onto an easier-to-interpret set of eigenmodes. Carefully viewed in the context of non-equilibrium systems, mode analysis identifies hidden long-range motions and localized particle rearrangements based solely on the knowledge of particle trajectories. In this work, we take a “how to” approach and outline essential steps, diagnostics, and know-how used to apply this analysis to study densely-packed active systems.
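
    A minimal sketch of the covariance step of such a mode analysis follows, assuming tracked 2D particle trajectories; the random trajectories and the participation-ratio diagnostic are illustrative stand-ins for the pipeline described in the paper.

```python
# Eigenmodes from the displacement covariance of particle trajectories:
# soft (large-eigenvalue) modes carry long-range collective motions,
# and the participation ratio flags localized rearrangements.
import numpy as np

rng = np.random.default_rng(0)
T, N = 2000, 50
traj = rng.normal(size=(T, N, 2))           # stand-in for tracked 2D positions

disp = (traj - traj.mean(axis=0)).reshape(T, 2 * N)  # displacements about mean

C = disp.T @ disp / T                       # 2N x 2N displacement covariance
evals, evecs = np.linalg.eigh(C)            # eigenmodes, ascending eigenvalues

# in the harmonic analogy, stiffness ~ 1/eigenvalue, so the last column
# is the softest mode
soft = evecs[:, -1].reshape(N, 2)
amp2 = (soft ** 2).sum(axis=1)              # per-particle squared amplitude
pr = 1.0 / (N * (amp2 ** 2).sum())          # ~1 extended, ~1/N localized
print(f"softest-mode participation ratio: {pr:.2f}")
```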

  17. Statistical Techniques Applied to Aerial Radiometric Surveys (STAARS): cluster analysis. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Pirkle, F.L.; Stablein, N.K.; Howell, J.A.; Wecksung, G.W.; Duran, B.S.

    1982-11-01

    One objective of the aerial radiometric surveys flown as part of the US Department of Energy's National Uranium Resource Evaluation (NURE) program was to ascertain the regional distribution of near-surface radioelement abundances. Some method for identifying groups of observations with similar radioelement values was therefore required. It is shown in this report that cluster analysis can identify such groups even when no a priori knowledge of the geology of an area exists. A method of convergent k-means cluster analysis coupled with a hierarchical cluster analysis is used to classify 6991 observations (three radiometric variables at each observation location) from the Precambrian rocks of the Copper Mountain, Wyoming, area. Another method, one that combines a principal components analysis with a convergent k-means analysis, is applied to the same data. These two methods are compared with a convergent k-means analysis that utilizes available geologic knowledge. All three methods identify four clusters. Three of the clusters represent background values for the Precambrian rocks of the area, and one represents outliers (anomalously high ²¹⁴Bi). A segmentation of the data corresponding to geologic reality as discovered by other methods has been achieved based solely on analysis of aerial radiometric data. The techniques employed are composites of classical clustering methods designed to handle the special problems presented by large data sets. 20 figures, 7 tables
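
    The two-stage idea, k-means to compress a large data set followed by hierarchical clustering, can be sketched with scikit-learn and SciPy on synthetic three-variable data; the cluster counts and thresholds below are hypothetical, not those of the NURE analysis.

```python
# k-means compression of a large radiometric data set, followed by
# Ward hierarchical clustering of the centroids into final groups.
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# stand-in for ~7000 observations of three radiometric variables,
# three background groups plus a small outlier group
X = np.vstack([rng.normal(m, 0.3, size=(1500, 3)) for m in (0, 1, 2)]
              + [rng.normal(5, 0.5, size=(100, 3))])

km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X)  # compression

Z = linkage(km.cluster_centers_, method="ward")      # cluster the centroids
groups = fcluster(Z, t=4, criterion="maxclust")      # four final clusters
labels = groups[km.labels_]                          # back to observations
print(np.bincount(labels)[1:])
```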

  18. Can whole body bone mineral density measured by dual energy X-ray absorptiometry be applied to evaluate the risk of osteoporosis among Japanese adult females?

    International Nuclear Information System (INIS)

    Sakai, Yumiko; Koike, George; Numata, Makoto; Taneda, Kiyoshi; Jingu, Sumie

    2010-01-01

    To measure whole body fat accurately, dual energy X-ray absorptiometry (DXA) is widely utilized; bone mineral density (BMD) of the whole body can be measured at the same time. BMD is important information for diagnosing osteoporosis, but the use of whole body BMD for this diagnosis is not established: the guideline for prevention and treatment of osteoporosis recommends that lumbar and/or hip BMD be used. Although it is possible to measure whole body BMD and lumbar and/or hip BMD separately at the same visit, this inevitably exposes patients to additional X-rays. The aim of this study was therefore to elucidate the relationship between whole body BMD and lumbar BMD and to find a cutoff point of whole body BMD for osteoporosis screening. Two hundred and thirty-six Japanese adult females were enrolled in this study. Whole body BMD and lumbar BMD of each subject were measured with the Delphi W (Hologic, USA). One hundred and sixty-five subjects were judged as having possible osteoporosis (less than 80% of the young adult mean (YAM) of lumbar BMD and/or definite fracture of lumbar vertebrae). The cutoff point of whole body BMD for screening possible osteoporosis was estimated by receiver operating characteristic (ROC) analysis. The cutoff point of whole body BMD was 84% of YAM, equivalent to 80% of YAM of lumbar BMD, with a sensitivity of 0.84 and a specificity of 0.79, indicating that whole body BMD can be used for screening osteoporosis. (author)
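
    The ROC cutoff estimation described above can be reproduced in outline as follows. The simulated BMD distributions are hypothetical stand-ins for the 236 subjects, and the Youden index is used as one common way to pick the cutoff point.

```python
# ROC-based cutoff selection for a screening test where low values
# indicate disease; group sizes match the abstract, data are synthetic.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
bmd = np.concatenate([rng.normal(90, 6, 71),      # non-cases (% of YAM)
                      rng.normal(78, 6, 165)])    # possible osteoporosis
y = np.concatenate([np.zeros(71), np.ones(165)])  # 1 = case

# cases have *low* BMD, so negate the score for roc_curve
fpr, tpr, thr = roc_curve(y, -bmd)
best = (tpr - fpr).argmax()                       # maximum Youden index
print(f"cutoff: BMD <= {-thr[best]:.1f}% of YAM, "
      f"sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")
```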

  19. Applying computational geometry techniques for advanced feature analysis in atom probe data

    International Nuclear Information System (INIS)

    Felfer, Peter; Ceguerra, Anna; Ringer, Simon; Cairney, Julie

    2013-01-01

    In this paper we present new methods for feature analysis in atom probe tomography data that have useful applications in materials characterisation. The analysis works on the principle of Voronoi subvolumes and piecewise linear approximations, with feature delineation based on the distance to the centre of mass of a subvolume (DCOM). Based on the coordinate systems defined by these approximations, two examples are shown of the new types of analyses that can be performed. The first is the analysis of line-like objects (i.e. dislocations) using both proxigrams and line-excess plots. The second is interfacial excess mapping of an InGaAs quantum dot. - Highlights: • Computational geometry is used to detect and analyse features within atom probe data. • Limitations of conventional feature detection are overcome by using atomic density gradients. • 0D, 1D, 2D and 3D features can be analysed by using Voronoi tessellation for spatial binning. • New, robust analysis methods are demonstrated, including line and interfacial excess mapping

  20. Simplified inelastic analysis methods applied to fast breeder reactor core design

    International Nuclear Information System (INIS)

    Abo-El-Ata, M.M.

    1978-01-01

    The paper starts with a review of some currently available simplified inelastic analysis methods used in elevated temperature design for evaluating plastic and thermal creep strains. The primary purpose of the paper is to investigate how these simplified methods may be applied to fast breeder reactor core design, where neutron irradiation effects are significant. One of the problems discussed is irradiation-induced creep and its effect on shakedown, ratcheting, and plastic cycling. Another problem is the development of swelling-induced stress, which is an additional loading mechanism and must be taken into account. In this respect, an expression for swelling-induced stress in the presence of irradiation creep is derived, and a model for simplifying the stress analysis under these conditions is proposed. As an example, the effects of irradiation creep and swelling-induced stress on the analysis of a thin-walled tube under constant internal pressure and intermittent heat fluxes, simulating a fuel pin, are presented

  1. Uncertainty and sensitivity analysis applied to coupled code calculations for a VVER plant transient

    International Nuclear Information System (INIS)

    Langenbuch, S.; Krzykacz-Hausmann, B.; Schmidt, K. D.

    2004-01-01

    The development of coupled codes, combining thermal-hydraulic system codes and 3D neutron kinetics, is an important step towards best-estimate plant transient calculations. It is generally agreed that the application of best-estimate methods should be supplemented by an uncertainty and sensitivity analysis to quantify the uncertainty of the results. The paper presents results from the application of the GRS uncertainty and sensitivity method to a VVER-440 plant transient, which had already been studied earlier for the validation of coupled codes. For this application, the main steps of the uncertainty method are described. Typical results of the method applied to the analysis of the plant transient by several working groups using different coupled codes are presented and discussed. The results demonstrate the capability of an uncertainty and sensitivity analysis. (authors)
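
    The GRS method propagates input uncertainties by sampling the code inputs and bounding the outputs with Wilks' tolerance limits. The sketch below shows the standard first-order, one-sided sample-size rule (the well-known 59 runs for a 95%/95% statement), assuming that this common variant of the method is the one applied.

```python
# Wilks' formula: smallest number of code runs n such that the maximum
# of the sample bounds the gamma-quantile with confidence beta.
import math

def wilks_one_sided(gamma=0.95, beta=0.95):
    """First-order, one-sided tolerance limit: smallest n with 1 - gamma**n >= beta."""
    return math.ceil(math.log(1 - beta) / math.log(gamma))

print(wilks_one_sided())   # -> 59 runs for a 95%/95% statement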

  2. An evaluation of an operating BWR piping system damping during earthquake by applying auto regressive analysis

    International Nuclear Information System (INIS)

    Kitada, Y.; Makiguchi, M.; Komori, A.; Ichiki, T.

    1985-01-01

    The records of three earthquakes which had induced significant responses in the piping system were obtained with the earthquake observation system. In the present paper, first, the eigenvalue analysis results for the piping system, based on the piping support (boundary) conditions, are described; second, the frequency and damping factor evaluation results for each vibrational mode are described. In the present study, the Auto Regressive (AR) analysis method is used to evaluate natural frequencies and damping factors. The AR analysis applied here is capable of evaluating natural frequencies and damping factors directly from earthquake records observed on a piping system, without any information on the input motions to the system. (orig./HP)
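
    A sketch of how an AR fit yields natural frequencies and damping factors from a response record alone: fit the AR coefficients by least squares, take the roots of the characteristic polynomial, and map the poles to continuous time. The single-mode synthetic signal and the model order are hypothetical choices for illustration.

```python
# AR-based modal identification from a single response record.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                                  # sampling interval [s]
t = np.arange(0, 20, dt)
f_true, zeta_true = 5.0, 0.02              # 5 Hz mode with 2% damping
wn = 2 * np.pi * f_true
y = np.exp(-zeta_true * wn * t) * np.sin(wn * np.sqrt(1 - zeta_true**2) * t)
y += 0.01 * rng.normal(size=t.size)        # measurement noise

p = 4                                      # AR model order
# least-squares fit of y[k] = a1*y[k-1] + ... + ap*y[k-p]
A = np.column_stack([y[p - i - 1:-i - 1] for i in range(p)])
a = np.linalg.lstsq(A, y[p:], rcond=None)[0]

# poles of z^p - a1 z^(p-1) - ... - ap, mapped to continuous time;
# extra poles from noise show up as heavily damped spurious "modes"
poles = np.roots(np.concatenate(([1.0], -a)))
for s in np.log(poles[np.imag(poles) > 0]) / dt:
    print(f"f = {abs(s) / (2 * np.pi):.2f} Hz, zeta = {-s.real / abs(s):.3f}")
```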

  3. Applied Swarm-based medicine: collecting decision trees for patterns of algorithms analysis.

    Science.gov (United States)

    Panje, Cédric M; Glatzer, Markus; von Rappard, Joscha; Rothermundt, Christian; Hundsberger, Thomas; Zumstein, Valentin; Plasswilm, Ludwig; Putora, Paul Martin

    2017-08-16

    The objective consensus methodology has recently been applied to consensus finding in several studies on medical decision-making among clinical experts or guidelines. The main advantages of this method are an automated analysis and comparison of the treatment algorithms of the participating centers, which can be performed anonymously. Based on the experience from completed consensus analyses, the main steps for the successful implementation of the objective consensus methodology were identified and discussed among the main investigators. The following steps for the successful collection and conversion of decision trees were identified and defined in detail: problem definition, population selection, draft input collection, tree conversion, criteria adaptation, problem re-evaluation, results distribution and refinement, tree finalisation, and analysis. This manuscript provides information on the main steps for the successful collection of decision trees and summarizes important aspects at each point of the analysis.

  4. The fundamental parameter method applied to X-ray fluorescence analysis with synchrotron radiation

    Science.gov (United States)

    Pantenburg, F. J.; Beier, T.; Hennrich, F.; Mommsen, H.

    1992-05-01

    Quantitative X-ray fluorescence analysis applying the fundamental parameter method is usually restricted to monochromatic excitation sources. It is shown here that such analyses can be performed as well with a white synchrotron radiation spectrum. To determine absolute elemental concentration values, it is necessary to know the spectral distribution of this spectrum. A newly designed and tested experimental setup, which uses the synchrotron radiation emitted from electrons in a bending magnet of ELSA (the electron stretcher accelerator of the University of Bonn), is presented. The determination of the exciting spectrum, described by the given electron beam parameters, is limited by uncertainties in the vertical electron beam size and divergence. We describe a method which allows us to determine the relative and absolute spectral distributions needed for accurate analysis. First test measurements of different alloys and standards of known composition demonstrate that it is possible to determine exact concentration values in bulk and trace element analysis.

  5. Applied statistical training to strengthen analysis and health research capacity in Rwanda.

    Science.gov (United States)

    Thomson, Dana R; Semakula, Muhammed; Hirschhorn, Lisa R; Murray, Megan; Ndahindwa, Vedaste; Manzi, Anatole; Mukabutera, Assumpta; Karema, Corine; Condo, Jeanine; Hedt-Gauthier, Bethany

    2016-09-29

    To guide efficient investment of limited health resources in sub-Saharan Africa, local researchers need to be involved in, and guide, health system and policy research. While extensive survey and census data are available to health researchers and program officers in resource-limited countries, local involvement and leadership in research is limited due to inadequate experience, lack of dedicated research time and weak interagency connections, among other challenges. Many research-strengthening initiatives host prolonged fellowships out-of-country, yet their approaches have not been evaluated for effectiveness in involvement and development of local leadership in research. We developed, implemented and evaluated a multi-month, deliverable-driven, survey analysis training based in Rwanda to strengthen skills of five local research leaders, 15 statisticians, and a PhD candidate. Research leaders applied with a specific research question relevant to country challenges and committed to leading an analysis to publication. Statisticians with prerequisite statistical training and experience with a statistical software applied to participate in class-based trainings and complete an assigned analysis. Both statisticians and research leaders were provided ongoing in-country mentoring for analysis and manuscript writing. Participants reported a high level of skill, knowledge and collaborator development from class-based trainings and out-of-class mentorship that were sustained 1 year later. Five of six manuscripts were authored by multi-institution teams and submitted to international peer-reviewed scientific journals, and three-quarters of the participants mentored others in survey data analysis or conducted an additional survey analysis in the year following the training. Our model was effective in utilizing existing survey data and strengthening skills among full-time working professionals without disrupting ongoing work commitments and using few resources. Critical to our

  6. Applied Behavior Analysis, Autism, and Occupational Therapy: A Search for Understanding.

    Science.gov (United States)

    Welch, Christie D; Polatajko, H J

    2016-01-01

    Occupational therapists strive to be mindful, competent practitioners and continuously look for ways to improve practice. Applied behavior analysis (ABA) has strong evidence of effectiveness in helping people with autism achieve goals, yet it does not seem to be implemented in occupational therapy practice. To better understand whether ABA could be an evidence-based option to expand occupational therapy practice, the authors conducted an iterative, multiphase investigation of relevant literature. Findings suggest that occupational therapists apply developmental and sensory approaches to autism treatment. The occupational therapy literature does not reflect any use of ABA despite its strong evidence base. Occupational therapists may currently avoid using ABA principles because of a perception that ABA is not client centered. ABA principles and occupational therapy are compatible, and the two could work synergistically. Copyright © 2016 by the American Occupational Therapy Association, Inc.

  7. Sensitivity analysis of crustal correction and its error propagation to upper mantle residual gravity and density anomalies

    DEFF Research Database (Denmark)

    Herceg, Matija; Artemieva, Irina; Thybo, Hans

    2013-01-01

    (i) uncertainties in the velocity-density conversion and (ii) uncertainties in knowledge of the crustal structure (thickness and average Vp velocities of individual crustal layers, including the sedimentary cover). In this study, we address both sources of possible uncertainties by applying different conversions from velocity to density and by introducing variations into the crustal structure which correspond to the uncertainty of its resolution by high-quality and low-quality seismic models. We examine the propagation of these uncertainties into determinations of lithospheric mantle density. The residual ...

  8. Multivariate least-squares methods applied to the quantitative spectral analysis of multicomponent samples

    International Nuclear Information System (INIS)

    Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.

    1985-01-01

    In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure component spectra to determine which vibrations experience nonlinear behavior. In the predictive phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentrations was <1%
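
    A minimal sketch in the spirit of the method described: classical least squares with a per-wavelength intercept (baseline) term, calibrated on known mixtures and then inverted to predict unknown concentrations. The two-component synthetic spectra are stand-ins for the ester mixtures.

```python
# Classical least-squares (K-matrix) calibration with nonzero intercepts.
import numpy as np

rng = np.random.default_rng(0)
n_std, n_wav = 8, 100
K_true = np.abs(rng.normal(size=(2, n_wav)))      # two pure-component spectra
C_cal = rng.uniform(0, 1, size=(n_std, 2))        # known mixture concentrations
A_cal = C_cal @ K_true + 0.05                     # absorbances + baseline offset
A_cal += 0.001 * rng.normal(size=A_cal.shape)     # measurement noise

# calibration: estimate pure spectra plus an intercept at each wavelength
X = np.column_stack([C_cal, np.ones(n_std)])      # [C | 1]
K_hat = np.linalg.lstsq(X, A_cal, rcond=None)[0]  # rows: comp1, comp2, baseline

# prediction: fit the unknown's concentrations from its full spectrum
a_unknown = np.array([0.3, 0.6]) @ K_true + 0.05
c_hat = np.linalg.lstsq(K_hat[:2].T, a_unknown - K_hat[2], rcond=None)[0]
print(c_hat)   # ~ [0.3, 0.6]
```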

  9. Analysis of the Nevada-Applied-Ecology-Group model of transuranic radionuclide transport and dose

    International Nuclear Information System (INIS)

    Kercher, J.R.; Anspaugh, L.R.

    1991-01-01

    The authors analyze the model for estimating the dose from ²³⁹Pu developed for the Nevada Applied Ecology Group (NAEG) by using sensitivity analysis and uncertainty analysis. Sensitivity analysis results suggest that the inhalation pathway is the critical pathway for the organs receiving the highest dose. Soil concentration and the factors controlling air concentration are the most important parameters. The only organ whose dose is sensitive to parameters in the ingestion pathway is the GI tract. The inhalation pathway accounts for 100% of the dose to lung, upper respiratory tract and thoracic lymph nodes, and 95% of the dose to liver, bone, kidney and total body. The GI tract receives 99% of its dose via ingestion. Leafy vegetable ingestion accounts for 70% of the dose from the ingestion pathway regardless of organ; peeled vegetables, 20%; accidental soil ingestion, 5%; ingestion of beef liver, 4%; beef muscle, 1%. Uncertainty analysis indicates that choosing a uniform distribution for the input parameters produces a lognormal distribution of the dose. The ratio of the square root of the variance to the mean is three times greater for the doses than it is for the individual parameters. As found by the sensitivity analysis, the uncertainty analysis suggests that only a few parameters control the dose for each organ. All organs have similar distributions and variance-to-mean ratios except for the lymph nodes. (author)

  10. ELUCID—Exploring the Local Universe with the reConstructed Initial Density Field. II. Reconstruction Diagnostics, Applied to Numerical Halo Catalogs

    Energy Technology Data Exchange (ETDEWEB)

    Tweed, Dylan; Yang, Xiaohu; Li, Shijie; Jing, Y. P. [Center for Astronomy and Astrophysics, Shanghai Jiao Tong University, Shanghai 200240 (China); Wang, Huiyuan [Key Laboratory for Research in Galaxies and Cosmology, Department of Astronomy, University of Science and Technology of China, Hefei, Anhui 230026 (China); Cui, Weiguang [Departamento de Física Teórica, Módulo 15, Facultad de Ciencias, Universidad Autónoma de Madrid, E-28049 Madrid (Spain); Zhang, Youcai [Shanghai Astronomical Observatory, Nandan Road 80, Shanghai 200030 (China); Mo, H. J., E-mail: dtweed@sjtu.edu.cn [Department of Astronomy, University of Massachusetts, Amherst MA, 01003-9305 (United States)

    2017-05-20

    The ELUCID project aims to build a series of realistic cosmological simulations that reproduce the spatial and mass distributions of the galaxies as observed in the Sloan Digital Sky Survey. This requires powerful reconstruction techniques to create constrained initial conditions (ICs). We test the reconstruction method by applying it to several N-body simulations. We use two medium-resolution simulations, each of which produced three additional constrained N-body simulations. We compare the resulting friends-of-friends catalogs by using the particle indexes as tracers, and quantify the quality of the reconstruction by varying the main smoothing parameter. The cross-identification method we use proves to be efficient, and the results suggest that the most massive reconstructed halos are effectively traced from the same Lagrangian regions in the ICs. A preliminary time-dependence analysis indicates that high-mass-end halos converge only at a redshift close to the reconstruction redshift. This suggests that, for earlier snapshots, only collections of progenitors may be effectively cross-identified.

  11. IAEA-ASSET's root cause analysis method applied to sodium leakage incident at Monju

    International Nuclear Information System (INIS)

    Watanabe, Norio; Hirano, Masashi

    1997-08-01

    The present study applied the ASSET (Analysis and Screening of Safety Events Team) methodology (this method identifies occurrences such as component failures and operator errors, identifies their respective direct/root causes and determines corrective actions) to the analysis of the sodium leakage incident at Monju, based mainly on the reports published by the Science and Technology Agency, aiming at systematic identification of direct/root causes and corrective actions, and discussed the effectiveness and problems of the ASSET methodology. The results revealed the following seven occurrences and showed the direct/root causes and contributing factors for the individual occurrences: failure of the thermometer well tube, delayed reactor manual trip, inadequate continuous monitoring of leakage, misjudgment of the leak rate, non-required operator action (turbine trip), retarded emergency sodium drainage, and retarded securing of the ventilation system. Most of the occurrences stemmed from deficiencies in emergency operating procedures (EOPs), which were mainly caused by defects in the EOP preparation process and operator training programs. The corrective actions already proposed in the published reports were reviewed, identifying issues to be further studied, and possible corrective actions were discussed for these issues. The present study also demonstrated the effectiveness of the ASSET methodology and pointed out some problems, for example in delineating causal relations among occurrences, in applying it to the detailed and systematic analysis of event direct/root causes and the determination of concrete measures. (J.P.N.)

  13. Relative density: the key to stocking assessment in regional analysis—a forest survey viewpoint.

    Science.gov (United States)

    Colin D. MacLean

    1979-01-01

    Relative density is a measure of tree crowding compared to a reference level such as normal density. This stand attribute, when compared to management standards, indicates adequacy of stocking. The Pacific Coast Forest Survey Unit assesses the relative density of each stand sampled by summing the individual density contributions of each tree tallied, thus quantifying...

  14. Applying HAZOP analysis in assessing remote handling compatibility of ITER port plugs

    International Nuclear Information System (INIS)

    Duisings, L.P.M.; Til, S. van; Magielsen, A.J.; Ronden, D.M.S.; Elzendoorn, B.S.Q.; Heemskerk, C.J.M.

    2013-01-01

    Highlights: ► We applied HAZOP analysis to assess the criticality of remote handling maintenance activities on port plugs in the ITER Hot Cell facility. ► We identified several weak points in the general upper port plug maintenance concept. ► We made clear recommendations on redesign in port plug design, operational sequence and Hot Cell equipment. ► The use of a HAZOP approach for the ECH UL port can also be applied to ITER port plugs in general. -- Abstract: This paper describes the application of a Hazard and Operability Analysis (HAZOP) methodology in assessing the criticality of remote handling maintenance activities on port plugs in the ITER Hot Cell facility. As part of the ECHUL consortium, the remote handling team at the DIFFER Institute is developing maintenance tools and procedures for critical components of the ECH Upper launcher (UL). Based on NRG's experience with nuclear risk analysis and Hot Cell procedures, early versions of these tool concepts and maintenance procedures were subjected to a HAZOP analysis. The analysis identified several weak points in the general upper port plug maintenance concept and led to clear recommendations on redesigns in port plug design, the operational sequence and ITER Hot Cell equipment. The paper describes the HAZOP methodology and illustrates its application with specific procedures: the Steering Mirror Assembly (SMA) replacement and the exchange of the Mid Shield Optics (MSO) in the ECH UPL. A selection of recommended changes to the launcher design associated with the accessibility, maintainability and manageability of replaceable components are presented

  15. Point Analysis in Java applied to histological images of the perforant pathway: a user's account.

    Science.gov (United States)

    Scorcioni, Ruggero; Wright, Susan N; Patrick Card, J; Ascoli, Giorgio A; Barrionuevo, Germán

    2008-01-01

    The freeware Java tool Point Analysis in Java (PAJ), created to perform 3D point analysis, was tested in an independent laboratory setting. The input data consisted of images of the hippocampal perforant pathway from serial immunocytochemical localizations of the rat brain in multiple views at different resolutions. The low magnification set (×2 objective) comprised the entire perforant pathway, while the high magnification set (×100 objective) allowed the identification of individual fibers. A preliminary stereological study revealed a striking linear relationship between the fiber count at high magnification and the optical density at low magnification. PAJ enabled fast analysis for down-sampled data sets and a friendly interface with automated plot drawings. Noted strengths included the multi-platform support as well as the free availability of the source code, conducive to a broad user base and maximum flexibility for ad hoc requirements. PAJ has great potential to extend its usability by (a) improving its graphical user interface, (b) increasing its input size limit, (c) improving response time for large data sets, and (d) potentially being integrated with other Java graphical tools such as ImageJ.

  16. A comparative analysis of three metaheuristic methods applied to fuzzy cognitive maps learning

    Directory of Open Access Journals (Sweden)

    Bruno A. Angélico

    2013-12-01

    Full Text Available This work analyses the performance of three different population-based metaheuristic approaches applied to Fuzzy Cognitive Map (FCM) learning in the qualitative control of processes. Fuzzy cognitive maps permit the inclusion of prior specialist knowledge in the control rule. In particular, Particle Swarm Optimization (PSO), Genetic Algorithm (GA) and Ant Colony Optimization (ACO) are considered for obtaining appropriate weight matrices when learning the FCM. A statistical convergence analysis over 10,000 simulations of each algorithm is presented. In order to validate the proposed approach, two industrial control process problems previously described in the literature are considered in this work.
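
    A compact sketch of one of the three approaches: global-best PSO searching for an FCM weight matrix whose sigmoid-update steady state matches a desired activation pattern. The map size, target activations and PSO hyper-parameters below are hypothetical.

```python
# PSO learning of FCM weights: minimize the distance between the FCM's
# steady-state activations and a desired target pattern.
import numpy as np

rng = np.random.default_rng(1)
n = 4                                        # concepts in the map
target = np.array([0.8, 0.6, 0.4, 0.7])      # desired steady state (assumed)

def fcm_steady(W, steps=50):
    a = np.full(n, 0.5)
    for _ in range(steps):
        a = 1.0 / (1.0 + np.exp(-W @ a))     # sigmoid FCM update rule
    return a

def cost(w_flat):
    return np.sum((fcm_steady(w_flat.reshape(n, n)) - target) ** 2)

n_part, dim = 30, n * n                      # particles over weight entries
x = rng.uniform(-1, 1, (n_part, dim))
v = np.zeros_like(x)
pbest, pcost = x.copy(), np.array([cost(p) for p in x])
g = pbest[pcost.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((2, n_part, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, -1, 1)                # keep weights in [-1, 1]
    c = np.array([cost(p) for p in x])
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
    g = pbest[pcost.argmin()].copy()
print("best cost:", pcost.min())
```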

  17. A risk analysis approach applied to field surveillance in utility meters in legal metrology

    Science.gov (United States)

    Rodrigues Filho, B. A.; Nonato, N. S.; Carvalho, A. D.

    2018-03-01

    Field surveillance represents the level of control in metrological supervision responsible for checking the conformity of measuring instruments in service. Utility meters represent the majority of measuring instruments produced by bodies notified for self-verification in Brazil. They play a major role in the economy, since electricity, gas and water are the main inputs to industries in their production processes. Therefore, to optimize the resources allocated to controlling these devices, the present study applied a risk analysis in order to identify, among the 11 manufacturers notified for self-verification, the instruments that demand field surveillance.

  18. Development and analysis of hydraulic-material transfer analysis code taking density current into account

    International Nuclear Information System (INIS)

    Saito, Hiroaki; Iriya, Yoshikazu

    1999-03-01

    Selecting a site in a stable environment for the underground disposal of high-level radioactive waste is an important issue, and it requires modelling of the hydraulics at freshwater/seawater boundaries. In this study, the analysis code was modified in order to enable analysis under a wider range of conditions. Input/output functions were modified after the functions of each module and the major parameters were reconsidered; the modification included changing the input mode from dialogue mode to file mode. Specifications of the input/output and the parameters are described. (A. Yamamoto)

  19. Mammographic density and ageing: A collaborative pooled analysis of cross-sectional data from 22 countries worldwide.

    Directory of Open Access Journals (Sweden)

    Anya Burton

    2017-06-01

    Full Text Available Mammographic density (MD) is one of the strongest breast cancer risk factors. Its age-related characteristics have been studied in women in western countries, but whether these associations apply to women worldwide is not known. We examined cross-sectional differences in MD by age and menopausal status in over 11,000 breast-cancer-free women aged 35-85 years, from 40 ethnicity- and location-specific population groups across 22 countries in the International Consortium on Mammographic Density (ICMD). MD was read centrally using a quantitative method (Cumulus) and its square-root metrics were analysed using meta-analysis of group-level estimates and linear regression models of pooled data, adjusted for body mass index, reproductive factors, mammogram view, image type, and reader. In all, 4,534 women were premenopausal, and 6,481 postmenopausal, at the time of mammography. A large age-adjusted difference in √PD (the square root of percent MD) between post- and premenopausal women was apparent (-0.46 cm [95% CI: -0.53, -0.39]) and appeared greater in women with lower breast cancer risk profiles; variation across population groups due to heterogeneity (I²) was 16.5%. Among premenopausal women, the √PD difference per 10-year increase in age was -0.24 cm (95% CI: -0.34, -0.14; I² = 30%), reflecting a compositional change (lower dense area and higher non-dense area, with no difference in breast area). In postmenopausal women, the corresponding difference in √PD (-0.38 cm [95% CI: -0.44, -0.33]; I² = 30%) was additionally driven by increasing breast area. The study is limited by the different mammography systems and its cross-sectional rather than longitudinal nature. Declines in MD with increasing age are present premenopausally, continue postmenopausally, and are most pronounced over the menopausal transition. These effects were highly consistent across diverse groups of women worldwide, suggesting that they result from an intrinsic biological, likely hormonal, mechanism common to

  20. Wear analysis by applying a pin-disc configuration to femoral head and acetabular cup

    Directory of Open Access Journals (Sweden)

    Guillermo Urriolagoitia-Calderón

    2008-01-01

    Full Text Available This work determines a prosthetic hip system's life-span, focusing on a Mexican phenotype. The total sliding equivalent distance for the system was determined, as well as the loading regime to which the femoral component and the acetabular cup were subjected in normal operating conditions. An experimental tribology assay was then performed to simulate the wearing of the components in a pin-over-disc machine. This assay (for which the test specimens were manufactured in medical grade stainless steel AISI-ASTM 316L for the femoral component and high density polyethylene for the acetabular cup) was aimed at simulating the wear conditions involved in 10 years of continuous operation. A numerical simulation of the operational conditions (using the finite element method) was performed to establish the assay loading conditions and accurately determine where the loads should be applied. The tribology assay led to quantifying the volumetric loss of materials for the system being analysed. It can be concluded, by comparing the results with those found in the literature, that the methodology proposed in this work for estimating the life-span of a prosthetic hip system is valid and accurate. A statistical validation of the proposed method is planned for the future. Key words: design life; femoral component; acetabular cup; Mexican phenotype; pin-disc configuration.
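
    For orientation, pin-on-disc wear of this kind is often summarized with Archard's law, in which worn volume is proportional to load times sliding distance. The back-of-envelope sketch below uses assumed constants for illustration; these are not the paper's experimental values.

```python
# Archard's wear law, V = k * F * s; all constants below are assumed.
k = 2.0e-6     # specific wear rate [mm^3 / (N m)], assumed for UHMWPE on 316L
F = 400.0      # mean joint contact load [N], assumed
s = 2.0e4      # sliding distance per year of gait [m], assumed

wear_10y = 10 * k * F * s
print(f"~{wear_10y:.0f} mm^3 of acetabular wear over 10 years")
```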

  1. Feasibility evaluation of two solar cooling systems applied to a cuban hotel. Comparative analysis

    International Nuclear Information System (INIS)

    Díaz Torres, Yamile; Valdivia Nodal, Yarelis; Monteagudo Yanes, José Pedro; Miranda Torres, Yudit

    2016-01-01

    The article presents an analysis of the technical and economic feasibility of using two configurations of solar cooling in a Cuban hotel. The hybrid HVAC schemes are: a vapor-compression water chiller interconnected in parallel with a smaller-capacity chiller, first with a solar-powered absorption cooling system (SACS), and then with a photovoltaic cooling system (PCS). Both were simulated taking into account the weather conditions in the region and the thermodynamic calculation methodologies and principles that govern these technologies. The results show that the use of these alternatives contributes to reducing energy consumption and the environmental impact of heating, ventilation and air conditioning (HVAC) systems. The economic analysis highlights that the PCS is more favorable than the SACS in terms of the cost of cooling generation (CCG), but the energy assessment indicates that the SACS has a higher thermal performance for the case study to which it is applied. (author)

  2. Applying reliability analysis to design electric power systems for More-electric aircraft

    Science.gov (United States)

    Zhang, Baozhu

    The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged the aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use a traditional method of reliability block diagrams to analyze the reliability level on different system topologies. We next propose a new methodology in which system topologies, constrained by a set reliability level, are automatically generated. The path-set method is used for analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
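
    The path-set analysis mentioned above can be illustrated with inclusion-exclusion over the minimal path sets of a small, hypothetical electric power topology; the component names and reliabilities are invented for the example.

```python
# System reliability from minimal path sets via inclusion-exclusion,
# assuming independent components (hypothetical MEA-style topology).
from itertools import combinations

# redundant sources G1/G2 feed a bus B, then redundant contactors C1/C2
paths = [{"G1", "B", "C1"}, {"G1", "B", "C2"},
         {"G2", "B", "C1"}, {"G2", "B", "C2"}]
R = {"G1": 0.99, "G2": 0.99, "B": 0.999, "C1": 0.995, "C2": 0.995}

def system_reliability(paths, R):
    total, n = 0.0, len(paths)
    for k in range(1, n + 1):
        for combo in combinations(range(n), k):
            union = set().union(*(paths[i] for i in combo))
            p = 1.0
            for c in union:           # P(all components in the union work)
                p *= R[c]
            total += (-1) ** (k + 1) * p
    return total

print(f"system reliability: {system_reliability(paths, R):.6f}")
```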

  3. An Appraisal of Social Network Theory and Analysis as Applied to Public Health: Challenges and Opportunities.

    Science.gov (United States)

    Valente, Thomas W; Pitts, Stephanie R

    2017-03-20

    The use of social network theory and analysis methods as applied to public health has expanded greatly in the past decade, yielding a significant academic literature that spans almost every conceivable health issue. This review identifies several important theoretical challenges that confront the field but also provides opportunities for new research. These challenges include (a) measuring network influences, (b) identifying appropriate influence mechanisms, (c) the impact of social media and computerized communications, (d) the role of networks in evaluating public health interventions, and (e) ethics. Next steps for the field are outlined and the need for funding is emphasized. Recently developed network analysis techniques, technological innovations in communication, and changes in theoretical perspectives to include a focus on social and environmental behavioral influences have created opportunities for new theory and ever broader application of social networks to public health topics.

  4. Escalation research: Providing new frontiers for applying behavior analysis to organizational behavior

    Science.gov (United States)

    Goltz, Sonia M.

    2000-01-01

    Decision fiascoes such as escalation of commitment, the tendency of decision makers to “throw good money after bad,” can have serious consequences for organizations and are therefore of great interest in applied research. This paper discusses the use of behavior analysis in organizational behavior research on escalation. Among the most significant aspects of behavior-analytic research on escalation is that it has indicated that both the patterns of outcomes that decision makers have experienced for past decisions and the patterns of responses that they make are critical for understanding escalation. This research has also stimulated the refinement of methods by researchers to better assess decision making and the role reinforcement plays in it. Finally, behavior-analytic escalation research has not only indicated the utility of reinforcement principles for predicting more complex human behavior but has also suggested some additional areas for future exploration of decision making using behavior analysis. PMID:22478347

  5. Digital image analysis applied to industrial nondestructive evaluation and automated parts assembly

    International Nuclear Information System (INIS)

    Janney, D.H.; Kruger, R.P.

    1979-01-01

    Many ideas of image enhancement and analysis are relevant to the needs of the nondestructive testing engineer. These ideas not only aid the engineer in the performance of his current responsibilities, they also open to him new areas of industrial development and automation which are logical extensions of classical testing problems. The paper begins with a tutorial on the fundamentals of computerized image enhancement as applied to nondestructive testing, then progresses through pattern recognition and automated inspection to automated, or robotic, assembly procedures. It is believed that such procedures are cost-effective in many instances, and are but the logical extension of those techniques now commonly used, but often limited to analysis of data from quality-assurance images. Many references are given in order to help the reader who wishes to pursue a given idea further

  6. Selection of Forklift Unit for Warehouse Operation by Applying Multi-Criteria Analysis

    Directory of Open Access Journals (Sweden)

    Predrag Atanasković

    2013-07-01

    Full Text Available This paper presents research related to the choice of the criteria that can be used to perform an optimal selection of the forklift unit for warehouse operation. The analysis has been done with the aim of exploring the requirements and defining the relevant criteria that are important when an investment decision is made for forklift procurement, and, based on the conducted research applying multi-criteria analysis, of determining the appropriate parameters and their relative weights that form the input data and database for selection of the optimal handling unit. This paper presents an example of choosing the optimal forklift based on the selected criteria for the purpose of making the relevant investment decision.
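
    A minimal weighted-sum sketch of such a multi-criteria selection: normalize each criterion, invert cost-type criteria, and rank candidates by weighted score. The criteria, weights and candidate data below are hypothetical, and a real study might instead use a method such as AHP or TOPSIS.

```python
# Weighted-sum multi-criteria ranking with min-max normalisation.
import numpy as np

criteria = ["price", "capacity", "maintenance", "energy use"]
weights = np.array([0.40, 0.25, 0.20, 0.15])       # hypothetical weights
benefit = np.array([False, True, False, False])    # True = higher is better

# rows: candidate forklifts; columns follow `criteria` (hypothetical data)
raw = np.array([[25000., 1.8, 800., 5.2],
                [31000., 2.5, 650., 4.8],
                [28000., 2.0, 700., 5.0]])

lo, hi = raw.min(axis=0), raw.max(axis=0)
norm = (raw - lo) / (hi - lo)                      # min-max to [0, 1]
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]        # invert cost criteria

scores = norm @ weights
print("best candidate:", scores.argmax(), scores.round(3))
```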

  7. Functional analysis and applied optimization in Banach spaces applications to non-convex variational models

    CERN Document Server

    Botelho, Fabio

    2014-01-01

    This book introduces the basic concepts of real and functional analysis. It presents the fundamentals of the calculus of variations, convex analysis, duality, and optimization that are necessary to develop applications to physics and engineering problems. The book includes introductory and advanced concepts in measure and integration, as well as an introduction to Sobolev spaces. The problems presented are nonlinear, with non-convex variational formulation. Notably, the primal global minima may not be attained in some situations, in which cases the solution of the dual problem corresponds to an appropriate weak cluster point of minimizing sequences for the primal one. Indeed, the dual approach more readily facilitates numerical computations for some of the selected models. While intended primarily for applied mathematicians, the text will also be of interest to engineers, physicists, and other researchers in related fields.

  8. Analysis of the concept of nursing educational technology applied to the patient

    Directory of Open Access Journals (Sweden)

    Aline Cruz Esmeraldo Áfio

    2014-04-01

    Full Text Available This study aims to analyze the concept of educational technology, as produced by nursing and applied to the patient. Rodgers' Evolutionary Method of Concept Analysis was used, identifying antecedents, attributes and consequences. Thirteen articles were selected for analysis, in which the following antecedents were identified: knowledge deficiency, shortage of nursing professionals' time, the need to optimize nursing work, and the need to achieve the goals of the patients. Attributes: tool, strategy, innovative approach, pedagogical approach, mediator of knowledge, creative way to encourage the acquisition of skills, health production instrument. Consequences: improved quality of life, encouragement of healthy behavior, empowerment, reflection and bonding. The study emphasizes the importance of educational technologies for nursing care, to boost health education activities.

  9. DATE analysis: A general theory of biological change applied to microarray data.

    Science.gov (United States)

    Rasnick, David

    2009-01-01

    In contrast to conventional data mining, which searches for specific subsets of genes (extensive variables) to correlate with specific phenotypes, DATE analysis correlates intensive state variables calculated from the same datasets. At the heart of DATE analysis are two biological equations of state not dependent on genetic pathways. This result distinguishes DATE analysis from other bioinformatics approaches. The dimensionless state variable F quantifies the relative overall cellular activity of test cells compared to well-chosen reference cells. The variable πᵢ is the fold-change in the expression of the ith gene of test cells relative to reference. It is the fraction φ of the genome undergoing differential expression, not the magnitude π, that controls biological change. The state variable φ is equivalent to the control strength of metabolic control analysis. For tractability, DATE analysis assumes a linear system of enzyme-connected networks and exploits the small average contribution of each cellular component. This approach was validated by reproducible values of the state variables F, RNA index, and φ calculated from random subsets of transcript microarray data. Using published microarray data, F, RNA index, and φ were correlated with: (1) the blood-feeding cycle of the malaria parasite, (2) embryonic development of the fruit fly, (3) temperature adaptation of Killifish, (4) exponential growth of cultured S. pneumoniae, and (5) human cancers. DATE analysis was applied to aCGH data from the great apes. A good example of the power of DATE analysis is its application to genomically unstable cancers, which have been refractory to data mining strategies. 2009 American Institute of Chemical Engineers Biotechnol.

  10. Adding value in oil and gas by applying decision analysis methodologies: case history

    Energy Technology Data Exchange (ETDEWEB)

    Marot, Nicolas [Petro Andina Resources Inc., Alberta (Canada); Francese, Gaston [Tandem Decision Solutions, Buenos Aires (Argentina)

    2008-07-01

    Petro Andina Resources Ltd., together with Tandem Decision Solutions, developed a strategic long-range plan applying decision analysis methodology. The objective was to build a robust and fully integrated strategic plan that accomplishes company growth goals and sets the strategic directions for the long range. The stochastic methodology and the Integrated Decision Management (IDM™) staged approach allowed the company to visualize the value and risk associated with the different strategies while achieving organizational alignment, clarity of action and confidence in the path forward. A decision team jointly involving PAR representatives and Tandem consultants was established to carry out this four-month project. Discovery and framing sessions allowed the team to disrupt the status quo, discuss near- and far-reaching ideas and gather the building blocks from which creative strategic alternatives were developed. A comprehensive stochastic valuation model was developed to assess the potential value of each strategy, applying simulation tools, sensitivity analysis tools and contingency planning techniques. The final insights and results were used to populate the final strategic plan presented to the company board, providing confidence to the team and assuring that the work embodies the best available ideas, data and expertise, and that the proposed strategy was ready to be elaborated into an optimized course of action. (author)

  11. TRICARE Applied Behavior Analysis (ABA) Benefit: Comparison with Medicaid and Commercial Benefits.

    Science.gov (United States)

    Maglione, Margaret; Kadiyala, Srikanth; Kress, Amii; Hastings, Jaime L; O'Hanlon, Claire E

    2017-01-01

    This study compared the Applied Behavior Analysis (ABA) benefit provided by TRICARE as an early intervention for autism spectrum disorder with similar benefits in Medicaid and commercial health insurance plans. The sponsor, the Office of the Under Secretary of Defense for Personnel and Readiness, was particularly interested in how a proposed TRICARE reimbursement rate decrease from $125 per hour to $68 per hour for ABA services performed by a Board Certified Behavior Analyst compared with reimbursement rates (defined as third-party payment to the service provider) in Medicaid and commercial health insurance plans. Information on ABA coverage in state Medicaid programs was collected from Medicaid state waiver databases; subsequently, Medicaid provider reimbursement data were collected from state Medicaid fee schedules. Applied Behavior Analysis provider reimbursement in the commercial health insurance system was estimated using Truven Health MarketScan® data. A weighted mean U.S. reimbursement rate was calculated for several services using cross-state information on the number of children diagnosed with autism spectrum disorder. Locations of potential provider shortages were also identified. Medicaid and commercial insurance reimbursement rates varied considerably across the United States. This project concluded that the proposed $68-per-hour reimbursement rate for services provided by a board certified analyst was more than 25 percent below the U.S. mean.

  12. Topological data analysis (TDA) applied to reveal pedogenetic principles of European topsoil system.

    Science.gov (United States)

    Savic, Aleksandar; Toth, Gergely; Duponchel, Ludovic

    2017-05-15

    Recent developments in applied mathematics are bringing new tools that are capable of synthesizing knowledge in various disciplines and of helping to find hidden relationships between variables. One such technique is topological data analysis (TDA), a fusion of classical exploration techniques such as principal component analysis (PCA) and a topological point of view applied to the clustering of results. Various phenomena have already received new interpretations thanks to TDA, from the proper choice of sport teams to cancer treatments. For the first time, this technique has been applied in soil science, to show the interaction between physical and chemical soil attributes and the main soil-forming factors, such as climate and land use. The topsoil data set of the Land Use/Land Cover Area Frame survey (LUCAS) was used as a comprehensive database that consists of approximately 20,000 samples, each described by 12 physical and chemical parameters. After the application of TDA, the results obtained were cross-checked against known grouping parameters, including five types of land cover, nine types of climate and the organic carbon content of soil. Some of the grouping characteristics observed using standard approaches were confirmed by TDA (e.g., organic carbon content), but novel subtle relationships (e.g., the magnitude of the anthropogenic effect in soil formation) were discovered as well. The importance of this finding is that TDA is a unique mathematical technique capable of extracting complex relations hidden in soil science data sets, giving the opportunity to see the influence of physicochemical, biotic and abiotic factors on topsoil formation through fresh eyes. Copyright © 2017 Elsevier B.V. All rights reserved.
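
    A from-scratch sketch of the Mapper construction that underlies most TDA pipelines of this kind: a PCA lens, an overlapping interval cover, per-bin clustering, and edges between clusters that share samples. The data and parameters are stand-ins for the 12 LUCAS attributes, not the study's actual settings.

```python
# Minimal Mapper-style graph: lens -> overlapping cover -> cluster -> glue.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (250, 12)), rng.normal(4, 1, (250, 12))])

lens = PCA(n_components=1).fit_transform(X).ravel()    # filter function

n_bins, overlap = 10, 0.3
width = (lens.max() - lens.min()) / (n_bins * (1 - overlap) + overlap)
step = width * (1 - overlap)

nodes = []
for b in range(n_bins):
    left = lens.min() + b * step
    idx = np.where((lens >= left) & (lens <= left + width))[0]
    if len(idx) < 5:
        continue
    labels = AgglomerativeClustering(n_clusters=2,
                                     linkage="single").fit_predict(X[idx])
    nodes += [set(idx[labels == lab]) for lab in set(labels)]

edges = {(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
         if nodes[i] & nodes[j]}                       # shared samples -> edge
print(len(nodes), "nodes,", len(edges), "edges")
```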

  13. Retail tobacco exposure: using geographic analysis to identify areas with excessively high retail density.

    Science.gov (United States)

    Rodriguez, Daniel; Carlos, Heather A; Adachi-Mejia, Anna M; Berke, Ethan M; Sargent, James

    2014-02-01

    There is great disparity in tobacco outlet density (TOD), with density highest in low-income areas and areas with greater proportions of minority residents, and this disparity may affect cancer incidence. We sought to better understand the nature of this disparity by assessing how these socio-demographic factors relate to TOD at the national level. Using mixture regression analysis and all of the nearly 65,000 census tracts in the contiguous United States, we aimed to determine the number of latent disparity classes by modeling the relations of proportions of Blacks, Hispanics, and families living in poverty with TOD, controlling for urban/rural status. We identified six disparity classes. There was considerable heterogeneity in relation to TOD for Hispanics in rural settings. For Blacks, there was no relation to TOD in an urban moderate disparity class, and for rural census tracts, the relation was highest in a moderate disparity class. We demonstrated the utility of classifying census tracts on heterogeneity of tobacco risk exposure. This approach provides a better understanding of the complexity of socio-demographic influences of tobacco retailing and creates opportunities for policy makers to more efficiently target areas in greatest need.

  14. Electron-density critical points analysis and catastrophe theory to forecast structure instability in periodic solids.

    Science.gov (United States)

    Merli, Marcello; Pavese, Alessandro

    2018-03-01

    The critical points analysis of electron density, i.e. ρ(x), from ab initio calculations is used in combination with catastrophe theory to show a correlation between the topology of ρ(x) and the appearance of instability that may lead to transformations of crystal structures, as a function of pressure/temperature. In particular, this study focuses on the evolution of coalescing non-degenerate critical points, i.e. such that ∇ρ(x_c) = 0 and λ₁, λ₂, λ₃ ≠ 0 [λ being the eigenvalues of the Hessian of ρ(x) at x_c], towards degenerate critical points, i.e. ∇ρ(x_c) = 0 and at least one λ equal to zero. The catastrophe theory formalism provides a mathematical tool to model ρ(x) in the neighbourhood of x_c and allows one to rationalize the occurrence of instability in terms of electron-density topology and Gibbs energy. The phase/state transitions that TiO₂ (rutile structure), MgO (periclase structure) and Al₂O₃ (corundum structure) undergo because of pressure and/or temperature are discussed here. An agreement of 3-5% is observed between the theoretical model and the experimental pressure/temperature of transformation.
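
    The critical-point bookkeeping used here follows the usual (rank, signature) classification of electron-density topology. A small sketch, assuming the Hessian eigenvalues at a candidate point are available, that also flags the approach to a degenerate point as an eigenvalue goes to zero:

```python
# (rank, signature) classification of a critical point of rho(x).
import numpy as np

def classify_cp(hessian, tol=1e-8):
    """Label a critical point from the eigenvalues of the Hessian of rho."""
    lam = np.linalg.eigvalsh(hessian)
    nonzero = np.abs(lam) > tol
    rank = int(nonzero.sum())
    sig = int(np.sign(lam[nonzero]).sum())
    names = {(3, -3): "nuclear attractor", (3, -1): "bond CP",
             (3, 1): "ring CP", (3, 3): "cage CP"}
    return rank, sig, names.get((rank, sig),
                                "degenerate CP (catastrophe candidate)")

print(classify_cp(np.diag([-2.0, -1.5, 0.8])))   # (3, -1, 'bond CP')
print(classify_cp(np.diag([-2.0, -1.5, 0.0])))   # one eigenvalue -> 0
```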

  15. Clinical usefulness of the clock drawing test applying rasch analysis in predicting of cognitive impairment.

    Science.gov (United States)

    Yoo, Doo Han; Lee, Jae Shin

    2016-07-01

    [Purpose] This study examined the clinical usefulness of the clock drawing test, applying Rasch analysis, for predicting the level of cognitive impairment. [Subjects and Methods] A total of 187 stroke patients with cognitive impairment were enrolled in this study. The 187 patients were evaluated with the clock drawing test developed through Rasch analysis, along with the mini-mental state examination as a cognitive evaluation tool. An analysis of variance was performed to examine the significance of the mini-mental state examination and the clock drawing test according to the general characteristics of the subjects. Receiver operating characteristic analysis was performed to determine the cutoff point for cognitive impairment and to calculate the sensitivity and specificity values. [Results] The comparison of the clock drawing test with the mini-mental state examination showed significant differences according to gender, age, education, and affected side. A total CDT score of 10.5, which was selected as the cutoff point to identify cognitive impairment, showed sensitivity, specificity, Youden index, positive predictive, and negative predictive values of 86.4%, 91.5%, 0.8, 95%, and 88.2%, respectively. [Conclusion] The clock drawing test is believed to be useful in assessments and interventions based on its excellent ability to identify cognitive disorders.

  16. The Investigation of NPP Control and Monitoring Functional Analysis Applied to Functional Displays’ Implementation

    International Nuclear Information System (INIS)

    Yan, J.

    2015-01-01

    The NPP control and monitoring system is recognised as a safety-critical, large-scale product, so performing a full, accurate and operational functional analysis is one of the most important design activities. The results of the functional analysis serve as initial guidance throughout the whole lifecycle of the NPP control and monitoring system. In this paper, several disadvantages of existing functional analysis methods, including FAST, the Subtract and Operate Procedure and the Functional Procedure Method, are identified. The method developed here combines a functional tree with the system structure, together with its decomposition steps. The RCS Inventory Control function, defined as one of the most significant control functions in the Advanced Light Water Reactor Utility Requirement Document, is employed to demonstrate the feasibility of this method. The analysis results for the RCS Inventory Control function were applied to direct the design and implementation of the related displays; the functional display of the RCS Inventory Control function has been implemented on NuCON, developed by SNPAS. Based on these analysis results, the accuracy of the information displayed to operators can be ensured, so that operators remain aware of the condition of the systems and can take proper action, based on the received data, to ensure the safety and productivity of the NPP. (author)

  17. Cloud Computing and Internet of Things Concepts Applied on Buildings Data Analysis

    Directory of Open Access Journals (Sweden)

    Hebean Florin-Adrian

    2017-12-01

    Initially used and developed for the IT industry, the cloud computing and Internet of Things concepts are now found in many sectors of activity, the building industry being one of them. They can be described as a global computing, monitoring and analysis network, composed of hardware and software resources, with the ability to allocate and dynamically relocate shared resources according to user requirements. Data analysis and process optimization techniques based on these concepts are used increasingly in the buildings industry, especially for optimal operation of building installations and for increasing occupant comfort. The multitude of building data taken from HVAC sensors, from automation and control systems and from the other systems connected to the network is optimally managed by these new analysis techniques. Such techniques can identify and manage the issues that arise in the operation of building installations, such as critical alarms, non-functional equipment, issues regarding occupant comfort (for example, the upper and lower temperature deviations from the set point) and other issues related to equipment maintenance. In this study, a new approach to building control is presented and a generalized methodology for applying data analysis to building services data is described. This methodology is then demonstrated using two case studies.

  18. Natural bond orbital analysis, electronic structure and vibrational spectral analysis of N-(4-hydroxyl phenyl) acetamide: A density functional theory

    Science.gov (United States)

    Govindasamy, P.; Gunasekaran, S.; Ramkumaar, G. R.

    2014-09-01

    The Fourier transform infrared (FT-IR) and FT-Raman spectra of N-(4-hydroxy phenyl) acetamide (N4HPA), a painkiller agent, were recorded in the regions 4000-450 cm⁻¹ and 4000-50 cm⁻¹, respectively. Density functional theory (DFT) has been used to calculate the optimized geometrical parameters, atomic charges, and the wavenumbers and intensities of the vibrational bands. The computed vibrational wavenumbers were compared with the FT-IR and FT-Raman experimental data. The computations were carried out at the DFT/B3LYP level with the 6-31G(d,p), 6-31++G(d,p), 6-311G(d,p) and 6-311++G(d,p) basis sets. The complete vibrational assignments were performed on the basis of the potential energy distribution (PED) of the vibrational modes, calculated using the Vibrational Energy Distribution Analysis (VEDA 4) program. The oscillator strengths calculated by TD-DFT complement the experimental findings. The ¹³C and ¹H NMR chemical shifts were recorded and calculated using the gauge-independent atomic orbital (GIAO) method. The molecular electrostatic potential (MESP) and electron density surfaces of the molecule were constructed. The natural charges and intermolecular contacts have been interpreted using natural bond orbital (NBO) analysis, and the HOMO-LUMO energy gap has been calculated. Thermodynamic properties such as entropy, heat capacity and zero-point vibrational energy have been calculated.
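
    As a minimal illustration of one step in such a calculation, the PySCF sketch below computes a B3LYP HOMO-LUMO gap with one of the basis sets listed above; water stands in for N4HPA to keep the geometry short.

        from pyscf import gto, dft

        mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
                    basis="6-31g(d,p)")
        mf = dft.RKS(mol)
        mf.xc = "b3lyp"
        mf.kernel()                          # run the SCF

        occ = mf.mo_occ > 0                  # occupied-orbital mask
        homo = mf.mo_energy[occ].max()
        lumo = mf.mo_energy[~occ].min()
        print("HOMO-LUMO gap: %.3f eV" % ((lumo - homo) * 27.2114))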

  19. Roughness analysis applied to niobium thin films grown on MgO(001) surfaces for superconducting radio frequency cavity applications

    Science.gov (United States)

    Beringer, D. B.; Roach, W. M.; Clavero, C.; Reece, C. E.; Lukaszew, R. A.

    2013-02-01

    This paper describes surface studies to address roughness issues inherent to thin film coatings deposited onto superconducting radio frequency (SRF) cavities. This is particularly relevant for multilayered thin film coatings that are being considered as a possible scheme to overcome technical issues and to surpass the fundamental limit of ~50 MV/m accelerating gradient achievable with bulk niobium. In 2006, a model by Gurevich [Appl. Phys. Lett. 88, 012511 (2006)] was proposed to overcome this limit that involves coating superconducting layers separated by insulating ones onto the inner walls of the cavities. Thus, we have undertaken a systematic effort to understand the dynamic evolution of the Nb surface under specific deposition thin film conditions onto an insulating surface in order to explore the feasibility of the proposed model. We examine and compare the morphology from two distinct Nb/MgO series, each with its own epitaxial registry, at very low growth rates and closely examine the dynamical scaling of the surface features during growth. Further, we apply analysis techniques such as power spectral density to the specific problem of thin film growth and roughness evolution to qualify the set of deposition conditions that lead to successful SRF coatings.
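
    The power-spectral-density analysis lends itself to a compact illustration; the sketch below (Python/SciPy, a synthetic 1D height profile standing in for an AFM line scan) computes a roughness PSD and recovers the RMS roughness from it.

        import numpy as np
        from scipy.signal import periodogram

        rng = np.random.default_rng(1)
        dx = 2.0                                  # nm per sample (assumed)
        x = np.arange(1024) * dx
        # Long-wavelength undulation plus short-scale noise.
        h = 0.8 * np.sin(2 * np.pi * x / 300.0) + 0.2 * rng.standard_normal(x.size)

        freq, psd = periodogram(h, fs=1.0 / dx)   # spatial frequency, 1/nm
        rms = np.sqrt(np.trapz(psd, freq))        # RMS roughness from the PSD
        print("RMS roughness ~ %.3f nm" % rms)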

  20. Roughness analysis applied to niobium thin films grown on MgO(001) surfaces for superconducting radio frequency cavity applications

    Directory of Open Access Journals (Sweden)

    D. B. Beringer

    2013-02-01

    This paper describes surface studies to address roughness issues inherent to thin film coatings deposited onto superconducting radio frequency (SRF) cavities. This is particularly relevant for multilayered thin film coatings that are being considered as a possible scheme to overcome technical issues and to surpass the fundamental limit of ∼50 MV/m accelerating gradient achievable with bulk niobium. In 2006, a model by Gurevich [Appl. Phys. Lett. 88, 012511 (2006)] was proposed to overcome this limit that involves coating superconducting layers separated by insulating ones onto the inner walls of the cavities. Thus, we have undertaken a systematic effort to understand the dynamic evolution of the Nb surface under specific deposition thin film conditions onto an insulating surface in order to explore the feasibility of the proposed model. We examine and compare the morphology from two distinct Nb/MgO series, each with its own epitaxial registry, at very low growth rates and closely examine the dynamical scaling of the surface features during growth. Further, we apply analysis techniques such as power spectral density to the specific problem of thin film growth and roughness evolution to qualify the set of deposition conditions that lead to successful SRF coatings.

  1. Roughness analysis applied to niobium thin films grown on MgO(001) surfaces for superconducting radio frequency cavity applications

    Energy Technology Data Exchange (ETDEWEB)

    Beringer, D. B. [College of William and Mary, Williamsburg, VA (United States). Dept. of Physics; Roach, W. M. [College of William and Mary, Williamsburg, VA (United States). Dept. of Applied Science; Clavero, C. [College of William and Mary, Williamsburg, VA (United States). Dept. of Applied Science; Reece, C. E. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Lukaszew, R. A. [College of William and Mary, Williamsburg, VA (United States). Dept. of Physics; College of William and Mary, Williamsburg, VA (United States). Dept. of Applied Science

    2013-02-05

    This paper describes surface studies to address roughness issues inherent to thin film coatings deposited onto superconducting radio frequency (SRF) cavities. This is particularly relevant for multilayered thin film coatings that are being considered as a possible scheme to overcome technical issues and to surpass the fundamental limit of ~50 MV/m accelerating gradient achievable with bulk niobium. In 2006, a model by Gurevich [Appl. Phys. Lett. 88, 012511 (2006)] was proposed to overcome this limit that involves coating superconducting layers separated by insulating ones onto the inner walls of the cavities. Thus, we have undertaken a systematic effort to understand the dynamic evolution of the Nb surface under specific deposition thin film conditions onto an insulating surface in order to explore the feasibility of the proposed model. We examine and compare the morphology from two distinct Nb/MgO series, each with its own epitaxial registry, at very low growth rates and closely examine the dynamical scaling of the surface features during growth. Further, we apply analysis techniques such as power spectral density to the specific problem of thin film growth and roughness evolution to qualify the set of deposition conditions that lead to successful SRF coatings.

  2. PERFORMANCE OPTIMIZATION OF LINEAR INDUCTION MOTOR BY EDDY CURRENT AND FLUX DENSITY DISTRIBUTION ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. S. MANNA

    2011-12-01

    The development of electromagnetic devices such as machines, transformers and heating devices confronts engineers with several problems. For the design of an optimized geometry and the prediction of the operational behaviour, accurate knowledge of the dependencies of the field quantities inside the magnetic circuits is necessary. This paper provides an eddy current and core flux density distribution analysis for a linear induction motor. Magnetic flux in the air gap of the linear induction motor (LIM) is reduced by various effects such as end effects, fringing effects, skin effects, etc. The finite element based software package COMSOL Multiphysics (COMSOL Inc., USA) is used to obtain reliable and accurate computational results for optimizing the performance of the LIM. The geometrical characteristics of the LIM are varied to find the optimal point of thrust and minimum flux leakage during static and dynamic conditions.

  3. Porous Au-Ag Nanospheres with High-Density and Highly Accessible Hotspots for SERS Analysis.

    Science.gov (United States)

    Liu, Kai; Bai, Yaocai; Zhang, Lei; Yang, Zhongbo; Fan, Qikui; Zheng, Haoquan; Yin, Yadong; Gao, Chuanbo

    2016-06-08

    Colloidal plasmonic metal nanoparticles have enabled surface-enhanced Raman scattering (SERS) for a variety of analytical applications. While great efforts have been made to create hotspots for amplifying Raman signals, it remains a great challenge to ensure their high density and accessibility for improved sensitivity of the analysis. Here we report a dealloying process for the fabrication of porous Au-Ag alloy nanoparticles containing abundant inherent hotspots, which were encased in ultrathin hollow silica shells so that the need for conventional organic capping ligands for stabilization is eliminated, producing colloidal plasmonic nanoparticles with clean surfaces and thus high accessibility of the hotspots. As a result, these novel nanostructures show excellent SERS activity, with an enhancement factor of ∼1.3 × 10⁷ on a single-particle basis (off-resonant condition), promising high applicability in many SERS-based analytical and biomedical applications.

  4. Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study

    Science.gov (United States)

    Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.

    2017-12-01

    Diatom identification and enumeration by high resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise- and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings that correctly segmented 67% of the total possible diatom valves and fragments from broad fields of view (183 light microscope images containing 255 diatom particles were examined; of the 255 diatom particles present, 216 diatom valves and fragments of valves were processed, with 170 properly analyzed and focused upon by the software). Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, an approximately five-fold advantage in per-particle analysis time. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has the potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of analyses. Benefits and limitations of the approach are presented to allow for development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.
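
    The segmentation step can be illustrated with a generic thresholding routine; the sketch below (Python/scikit-image, synthetic image) is a simple stand-in for the software's particle segmentation, not VisualSpreadsheet itself.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import label, regionprops

        rng = np.random.default_rng(2)
        img = rng.normal(0.2, 0.05, (256, 256))     # background
        img[100:140, 60:200] += 0.5                 # one bright elongated "valve"

        mask = img > threshold_otsu(img)            # global threshold
        particles = regionprops(label(mask))        # connected components
        print("particles found:", len(particles),
              "areas:", [int(p.area) for p in particles])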

  5. Analysis of computed tomography density of liver before and after amiodarone administration.

    Science.gov (United States)

    Matsuda, Masazumi; Otaka, Aoi; Tozawa, Tomoki; Asano, Tomoyuki; Ishiyama, Koichi; Hashimoto, Manabu

    2018-05-01

    To evaluate changes in the CT density of the liver before and after amiodarone administration. Twenty-five patients underwent non-enhanced CT including the liver before and after amiodarone administration. We set regions of interest (ROIs) at liver segment 8, the spleen and the paraspinal muscle, calculated the average CT density in these ROIs, and then compared CT density between the liver and the other organs. Statistical differences between the CT density of the liver and the various ratios before and after administration were determined, along with correlations between the cumulative dose of amiodarone and liver density after administration, the density change of the liver, and the various ratios after administration. Liver density, the liver-to-spleen ratio, and the liver-to-paraspinal muscle ratio differed significantly between before and after amiodarone administration. No significant correlations were found between the cumulative dose of amiodarone and any of liver density after administration, the density change of the liver, or the various ratios after administration. The CT density of the liver after amiodarone administration was significantly higher than that before administration. No correlations were identified between the cumulative dose of amiodarone and either liver density after administration or the density change of the liver. Amiodarone usage should be checked when radiologists identify high density of the liver on CT.
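
    The ROI measurement reduces to simple averages and ratios; the sketch below (Python, synthetic Hounsfield-unit samples) illustrates the computation of liver density and the two ratios.

        import numpy as np

        rng = np.random.default_rng(12)
        liver_roi = rng.normal(75, 8, 200)    # synthetic HU samples in each ROI
        spleen_roi = rng.normal(50, 6, 200)
        muscle_roi = rng.normal(55, 7, 200)

        liver = liver_roi.mean()
        print("liver HU: %.1f" % liver)
        print("liver/spleen ratio: %.2f" % (liver / spleen_roi.mean()))
        print("liver/muscle ratio: %.2f" % (liver / muscle_roi.mean()))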

  6. IMPORTANCE OF APPLYING DATA ENVELOPMENT ANALYSIS IN CASE OF HIGHER EDUCATIONAL INSTITUTIONS

    Directory of Open Access Journals (Sweden)

    Labas Istvan

    2015-07-01

    Today, it is increasingly true that, due to scarce resources and the limitlessness of user demands, a strong goal rationalism has to characterize higher educational institutions. In Hungary, the funding of the higher educational system is going through a total transformation, so leadership has to reckon continuously with changes in the environment and, in tune with these, modify existing goals. Nowadays, it becomes more and more important to measure the effectiveness of organizations, or of organizational units pursuing the same or similar activities, relative to each other. Benchmarking helps this procedure. Benchmarking is a tool of analysis and planning which allows organizations to be compared with the best of their competitors. Applying the method to higher educational institutions is a procedure which focuses on comparing the processes and results of the institutions' different functional areas in order to bring to light opportunities for rationalization as well as for quality and performance improvement. Practices already developed and applied by other organizations can be managed and used as breakthrough possibilities on the way to more effective management. The main goal of my study is to show an application of the Data Envelopment Analysis (DEA) method in higher education. DEA is a performance measurement methodology which forms part of benchmarking and uses linear programming as its method. By means of its application, the effectiveness of different decision-making units can be compared numerically. In our forcefully varying environment, managerial decision making can be largely supported by information that numerically identifies which organizational units and activities are effective or less effective. Its advantage is that
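
    To make the DEA step concrete, the sketch below solves the input-oriented CCR multiplier model with scipy.optimize.linprog for a handful of hypothetical departments; the data and names are illustrative only.

        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[20, 1.0], [30, 1.5], [25, 1.2], [40, 2.0], [22, 0.9]])  # inputs
        Y = np.array([[120, 15], [140, 20], [150, 22], [160, 18], [100, 30]])  # outputs
        n, m, s = X.shape[0], X.shape[1], Y.shape[1]

        for o in range(n):
            # Decision variables z = [u (output weights), v (input weights)].
            c = np.concatenate([-Y[o], np.zeros(m)])          # maximise u.y_o
            A_ub = np.hstack([Y, -X])                         # u.y_j - v.x_j <= 0 for all j
            A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # normalisation v.x_o = 1
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                          A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
            print("DMU %d efficiency: %.3f" % (o, -res.fun))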

  7. Analysis of the star formation histories of galaxies in different environments: from low to high density

    Science.gov (United States)

    Ortega-Minakata, René A.

    2015-11-01

    In this thesis, a value-added catalogue of 403,372 SDSS-DR7 galaxies is presented. This catalogue incorporates information on their stellar populations, including their star formation histories, their dominant emission-line activity type, inferred morphology and a measurement of their environmental density. The sample that formed this catalogue was selected from the SDSS-DR7 (Legacy) spectroscopic catalogue of galaxies in the Northern Galactic Cap, selecting only galaxies with high-quality spectra and redshift determinations, and photometric measurements with small errors. Also, galaxies near the edge of the photometric survey footprint were excluded to avoid errors in the determination of their environment. Only galaxies in the 0.03-0.30 redshift range were considered. Starlight fits of the spectra of these galaxies were used to obtain information on their star formation history and stellar mass, velocity dispersion and mean age. From the fit residuals, emission-line fluxes were measured and used to obtain the dominant activity type of these galaxies using the BPT diagnostic diagram. A neighbour search code was written and applied to the catalogue to measure the local environmental density of these galaxies. This code counts the number of neighbours within a fixed search radius and a radial velocity range centered at each galaxy's radial velocity. A projected radius of 1.5 Mpc and a velocity range of ±2,500 km/s, both centered at the redshift of the target galaxy, were used to search for and count all the neighbours of each galaxy in the catalogue. The neighbours were counted from the photometric catalogue of the SDSS-DR7 using photometric redshifts, to avoid the incompleteness of the spectroscopic catalogue. The morphology of the galaxies in the catalogue was inferred by inverting previously found relations between subsamples of galaxies with visual morphology classifications and their optical colours and concentration of light. The galaxies in the catalogue were matched to six
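
    The neighbour-counting step can be sketched with a KD-tree; the code below (Python/SciPy, synthetic catalogue) counts companions within a 1.5 Mpc projected radius and a ±2,500 km/s velocity window, as described above.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(3)
        xy = rng.uniform(0, 200, (5000, 2))          # projected positions, Mpc
        v = rng.uniform(0, 60000, 5000)              # radial velocities, km/s

        tree = cKDTree(xy)
        r_max, dv_max = 1.5, 2500.0
        density = np.empty(len(xy), dtype=int)
        for i, nbrs in enumerate(tree.query_ball_point(xy, r_max)):
            nbrs = np.asarray(nbrs)
            ok = np.abs(v[nbrs] - v[i]) <= dv_max    # velocity cut around the target
            density[i] = ok.sum() - 1                # exclude the galaxy itself
        print("median local density:", np.median(density))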

  8. Uncertainty analysis in comparative NAA applied to geological and biological matrices

    Energy Technology Data Exchange (ETDEWEB)

    Zahn, Guilherme S.; Ticianelli, Regina B.; Lange, Camila N.; Favaro, Deborah I.T.; Figueiredo, Ana M.G., E-mail: gzahn@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Comparative nuclear activation analysis is a multielemental primary analytical technique that may be used in a rather broad spectrum of matrices with minimal to no sample preprocessing. Although the total activation of a chemical element in a sample depends on a rather large set of parameters, when the sample is irradiated together with a well-known comparator most of these parameters cancel out, and the concentration of that element can be determined simply from the activities and masses of the comparator and the sample, the concentration of the element in the comparator, the half-life of the formed radionuclide and the time between counting the sample and the comparator. This simplification greatly reduces not only the calculations required but also the uncertainty associated with the measurement; nevertheless, a cautious analysis must be carried out in order to make sure all relevant uncertainties are properly treated, so that the final result is as representative of the measurement as possible. In this work, this analysis was performed for geological matrices, where the concentrations of the nuclides of interest are rather high, but so are the density and average atomic number of the sample, as well as for a biological matrix, in order to allow for a comparison. The results show that the largest part of the uncertainty comes from the activity measurements and from the concentration of the comparator, and that while the influence of time-related terms on the final uncertainty can be safely neglected, the uncertainty in the masses may be relevant under specific circumstances. (author)
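
    The comparator relation described above reduces to a short formula; the sketch below (Python, illustrative numbers only, with an assumed half-life) shows the concentration calculation, including the decay correction for the delay between counting the sample and the comparator.

        # C_sample = C_comp * (A_sample/m_sample) / (A_comp/m_comp) * decay correction,
        # with every number below hypothetical.
        import numpy as np

        half_life_s = 2.24 * 60            # assumed half-life of the formed radionuclide
        lam = np.log(2) / half_life_s

        A_sample, m_sample = 1.2e4, 0.150  # count rate (1/s) and mass (g) of the sample
        A_comp, m_comp = 2.0e4, 0.100      # same for the comparator
        C_comp = 500.0                     # element concentration in the comparator, mg/kg
        dt = 120.0                         # sample counted 120 s after the comparator

        # The sample decayed for dt longer, so scale its specific activity back up.
        C_sample = C_comp * (A_sample / m_sample) / (A_comp / m_comp) * np.exp(lam * dt)
        print("element concentration in sample: %.1f mg/kg" % C_sample)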

  9. Uncertainty analysis in comparative NAA applied to geological and biological matrices

    International Nuclear Information System (INIS)

    Zahn, Guilherme S.; Ticianelli, Regina B.; Lange, Camila N.; Favaro, Deborah I.T.; Figueiredo, Ana M.G.

    2015-01-01

    Comparative nuclear activation analysis is a multielemental primary analytical technique that may be used in a rather broad spectrum of matrices with minimal to no sample preprocessing. Although the total activation of a chemical element in a sample depends on a rather large set of parameters, when the sample is irradiated together with a well-known comparator most of these parameters cancel out, and the concentration of that element can be determined simply from the activities and masses of the comparator and the sample, the concentration of the element in the comparator, the half-life of the formed radionuclide and the time between counting the sample and the comparator. This simplification greatly reduces not only the calculations required but also the uncertainty associated with the measurement; nevertheless, a cautious analysis must be carried out in order to make sure all relevant uncertainties are properly treated, so that the final result is as representative of the measurement as possible. In this work, this analysis was performed for geological matrices, where the concentrations of the nuclides of interest are rather high, but so are the density and average atomic number of the sample, as well as for a biological matrix, in order to allow for a comparison. The results show that the largest part of the uncertainty comes from the activity measurements and from the concentration of the comparator, and that while the influence of time-related terms on the final uncertainty can be safely neglected, the uncertainty in the masses may be relevant under specific circumstances. (author)

  10. UNCERT: geostatistics, uncertainty analysis and visualization software applied to groundwater flow and contaminant transport modeling

    International Nuclear Information System (INIS)

    Wingle, W.L.; Poeter, E.P.; McKenna, S.A.

    1999-01-01

    UNCERT is a 2D and 3D geostatistics, uncertainty analysis and visualization software package applied to ground water flow and contaminant transport modeling. It is a collection of modules that provides tools for linear regression, univariate statistics, semivariogram analysis, inverse-distance gridding, trend-surface analysis, simple and ordinary kriging and discrete conditional indicator simulation. Graphical user interfaces for MODFLOW and MT3D, ground water flow and contaminant transport models, are provided for streamlined data input and result analysis. Visualization tools are included for displaying data input and output. These include, but are not limited to, 2D and 3D scatter plots, histograms, box and whisker plots, 2D contour maps, surface renderings of 2D gridded data and 3D views of gridded data. By design, UNCERT's graphical user interface and visualization tools facilitate model design and analysis. There are few built-in restrictions on data set sizes, and each module (with two exceptions) can be run in either graphical or batch mode. UNCERT is in the public domain and is available from the World Wide Web with complete on-line and printable (PDF) documentation. UNCERT is written in ANSI-C with a small amount of FORTRAN77, for UNIX workstations running X-Windows and Motif (or Lesstif). This article discusses the features of each module and demonstrates how they can be used individually and in combination. The tools are applicable to a wide range of fields and are currently used by researchers in ground water, mining, mathematics, chemistry and geophysics, to name a few disciplines. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
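
    Of the interpolation tools listed, inverse-distance gridding is the simplest to sketch; the code below is a generic implementation on synthetic groundwater-head data, not UNCERT's own routine.

        import numpy as np

        def idw(xy_obs, z_obs, xy_grid, power=2.0, eps=1e-12):
            # Weight each observation by 1/distance^power; exact at the data points.
            d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
            w = 1.0 / (d ** power + eps)
            return (w * z_obs).sum(axis=1) / w.sum(axis=1)

        rng = np.random.default_rng(4)
        xy = rng.uniform(0, 100, (30, 2))            # scattered head measurements
        z = 50.0 + 0.1 * xy[:, 0] - 0.05 * xy[:, 1]  # synthetic groundwater head
        gx, gy = np.meshgrid(np.linspace(0, 100, 5), np.linspace(0, 100, 5))
        grid = np.column_stack([gx.ravel(), gy.ravel()])
        print(idw(xy, z, grid).reshape(5, 5).round(1))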

  11. LOGICAL CONDITIONS ANALYSIS METHOD FOR DIAGNOSTIC TEST RESULTS DECODING APPLIED TO COMPETENCE ELEMENTS PROFICIENCY

    Directory of Open Access Journals (Sweden)

    V. I. Freyman

    2015-11-01

    Subject of Research. Representation features of education results for competence-based educational programs are analyzed. The importance of decoding and proficiency estimation for the elements and components of the discipline parts of competences is shown. The purpose and objectives of the research are formulated. Methods. The paper uses methods of mathematical logic, Boolean algebra, and parametric analysis of the results of complex diagnostic tests that control the proficiency of discipline competence elements. Results. A method of logical conditions analysis is created. It makes it possible to formulate logical conditions for determining the proficiency of each discipline competence element controlled by a complex diagnostic test. The normalized test result is divided into non-crossing zones, and a logical condition about the proficiency of the controlled elements is formulated for each of them. Summarized characteristics for the test result zones are given. An example of forming logical conditions for a diagnostic test with preset features is provided. Practical Relevance. The proposed method of logical conditions analysis is applied in the decoding algorithm of proficiency test diagnosis for discipline competence elements. It makes it possible to automate the search procedure for elements with insufficient proficiency, and it is also usable for estimating the education results of a discipline or a component of a competence-based educational program.

  12. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    International Nuclear Information System (INIS)

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves

    2017-01-01

    In this study, optimization of procedures and standardization of an Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations at a pneumatic system. 2ᵏ experimental designs were applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. The chosen experimental designs were the 2³ and the 2⁴, depending on the radionuclide half-life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of the statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effects, interaction effects and better irradiation conditions. (author)
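
    The factorial-design step is easy to illustrate; the sketch below builds a 2³ design in coded levels and estimates main effects on a synthetic response (factor names are assumptions drawn from the variables listed above).

        import itertools
        import numpy as np

        factors = ["decay_time", "counting_time", "detector_distance"]
        design = np.array(list(itertools.product([-1, 1], repeat=3)))  # 8 coded runs

        rng = np.random.default_rng(5)
        # Synthetic mass-fraction response; decay time dominates by construction.
        y = (100 + 4.0 * design[:, 0] + 1.0 * design[:, 1]
             - 0.5 * design[:, 2] + rng.normal(0, 0.3, 8))

        for i, name in enumerate(factors):
            # Main effect: mean response at the high level minus at the low level.
            effect = y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean()
            print("%s main effect: %+.2f" % (name, effect))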

  13. On the Formal-Logical Analysis of the Foundations of Mathematics Applied to Problems in Physics

    Science.gov (United States)

    Kalanov, Temur Z.

    2016-03-01

    An analysis of the foundations of mathematics applied to problems in physics is proposed. The unity of formal logic and of rational dialectics is the methodological basis of the analysis. It is shown that critical analysis of the concept of mathematical quantity, the central concept of mathematics, leads to the following conclusions: (1) The concept of "mathematical quantity" is the result of the following mental operations: (a) abstraction of the "quantitative determinacy of the physical quantity" from the "physical quantity", whereby the "quantitative determinacy of the physical quantity" becomes an independent object of thought; (b) abstraction of the "amount (i.e., abstract number)" from the "quantitative determinacy of the physical quantity", whereby the "amount (i.e., abstract number)" becomes an independent object of thought. In this case, unnamed, abstract numbers are the only sign of the "mathematical quantity". This sign is not an essential sign of material objects. (2) The concept of mathematical quantity is a meaningless, erroneous and inadmissible concept in science because it represents the following formal-logical and dialectical-materialistic error: negation of the existence of the essential sign of the concept (i.e., negation of the existence of the essence of the concept) and negation of the existence of a measure of the material object.

  14. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves, E-mail: uandapaula@gmail.com, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    In this study, optimization of procedures and standardization of an Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations at a pneumatic system. 2ᵏ experimental designs were applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. The chosen experimental designs were the 2³ and the 2⁴, depending on the radionuclide half-life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of the statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effects, interaction effects and better irradiation conditions. (author)

  15. Reflections of the social environment in chimpanzee memory: applying rational analysis beyond humans.

    Science.gov (United States)

    Stevens, Jeffrey R; Marewski, Julian N; Schooler, Lael J; Gilby, Ian C

    2016-08-01

    In cognitive science, the rational analysis framework allows modelling of how physical and social environments impose information-processing demands onto cognitive systems. In humans, for example, past social contact among individuals predicts their future contact with linear and power functions. These features of the human environment constrain the optimal way to remember information and probably shape how memory records are retained and retrieved. We offer a primer on how biologists can apply rational analysis to study animal behaviour. Using chimpanzees (Pan troglodytes) as a case study, we modelled 19 years of observational data on their social contact patterns. Much like humans, the frequency of past encounters in chimpanzees linearly predicted future encounters, and the recency of past encounters predicted future encounters with a power function. Consistent with the rational analyses carried out for human memory, these findings suggest that chimpanzee memory performance should reflect those environmental regularities. In re-analysing existing chimpanzee memory data, we found that chimpanzee memory patterns mirrored their social contact patterns. Our findings hint that human and chimpanzee memory systems may have evolved to solve similar information-processing problems. Overall, rational analysis offers novel theoretical and methodological avenues for the comparative study of cognition.
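
    Both regularities can be fitted directly; the sketch below (Python/SciPy, synthetic contact data standing in for the chimpanzee records) fits a linear function to encounter frequency and a power function to encounter recency.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(6)
        freq = np.arange(1, 50)                      # past encounter counts
        p_future_f = 0.02 * freq + rng.normal(0, 0.01, freq.size)
        recency = np.arange(1, 50)                   # days since last encounter
        p_future_r = 0.9 * recency ** -0.5 + rng.normal(0, 0.02, recency.size)

        lin = lambda x, a, b: a * x + b              # linear model for frequency
        power = lambda x, c, d: c * x ** d           # power model for recency
        (a, b), _ = curve_fit(lin, freq, p_future_f)
        (c, d), _ = curve_fit(power, recency, p_future_r, p0=(1.0, -0.5))
        print("linear slope %.3f; power exponent %.2f" % (a, d))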

  16. A Monte Carlo error simulation applied to calibration-free X-ray diffraction phase analysis

    International Nuclear Information System (INIS)

    Braun, G.E.

    1986-01-01

    Quantitative phase analysis of a system of n phases can be effected without the need for calibration standards provided at least n different mixtures of these phases are available. A series of linear equations relating diffracted X-ray intensities, weight fractions and quantitation factors coupled with mass balance relationships can be solved for the unknown weight fractions and factors. Uncertainties associated with the measured X-ray intensities, owing to counting of random X-ray quanta, are used to estimate the errors in the calculated parameters utilizing a Monte Carlo simulation. The Monte Carlo approach can be generalized and applied to any quantitative X-ray diffraction phase analysis method. Two examples utilizing mixtures of CaCO₃, Fe₂O₃ and CaF₂ with an α-SiO₂ (quartz) internal standard illustrate the quantitative method and corresponding error analysis. One example is well conditioned; the other is poorly conditioned and, therefore, very sensitive to errors in the measured intensities. (orig.)
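
    The Monte Carlo idea is compact; the sketch below (Python, with assumed quantitation factors) perturbs the measured intensities with Poisson counting noise, re-solves a generic linear system for the weight fractions each time, and reads the uncertainty off the spread of the results.

        import numpy as np

        rng = np.random.default_rng(7)
        K = np.array([[1.0, 0.4, 0.2],      # assumed quantitation factors;
                      [0.3, 1.1, 0.5],      # rows: measurements, columns: phases
                      [0.6, 0.2, 0.9]])
        w_true = np.array([0.5, 0.3, 0.2])  # true weight fractions
        I_mean = 1e4 * (K @ w_true)         # expected counts for each measurement

        samples = []
        for _ in range(2000):
            I = rng.poisson(I_mean)         # counting statistics on each intensity
            w, *_ = np.linalg.lstsq(1e4 * K, I, rcond=None)
            samples.append(w / w.sum())     # impose the mass balance by renormalising
        samples = np.array(samples)
        print("fractions:", samples.mean(0).round(4), "+/-", samples.std(0).round(4))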

  17. Reflections of the social environment in chimpanzee memory: applying rational analysis beyond humans

    Science.gov (United States)

    Marewski, Julian N.; Schooler, Lael J.; Gilby, Ian C.

    2016-01-01

    In cognitive science, the rational analysis framework allows modelling of how physical and social environments impose information-processing demands onto cognitive systems. In humans, for example, past social contact among individuals predicts their future contact with linear and power functions. These features of the human environment constrain the optimal way to remember information and probably shape how memory records are retained and retrieved. We offer a primer on how biologists can apply rational analysis to study animal behaviour. Using chimpanzees (Pan troglodytes) as a case study, we modelled 19 years of observational data on their social contact patterns. Much like humans, the frequency of past encounters in chimpanzees linearly predicted future encounters, and the recency of past encounters predicted future encounters with a power function. Consistent with the rational analyses carried out for human memory, these findings suggest that chimpanzee memory performance should reflect those environmental regularities. In re-analysing existing chimpanzee memory data, we found that chimpanzee memory patterns mirrored their social contact patterns. Our findings hint that human and chimpanzee memory systems may have evolved to solve similar information-processing problems. Overall, rational analysis offers novel theoretical and methodological avenues for the comparative study of cognition. PMID:27853606

  18. Global Appearance Applied to Visual Map Building and Path Estimation Using Multiscale Analysis

    Directory of Open Access Journals (Sweden)

    Francisco Amorós

    2014-01-01

    In this work we present a topological map building and localization system for mobile robots based on the global appearance of visual information. We include a comparison and analysis of global-appearance techniques applied to wide-angle scenes in retrieval tasks. Next, we define a multiscale analysis, which permits improving the association between images and extracting topological distances. Then, a topological map-building algorithm is proposed. At first, the algorithm has information only about some isolated positions in the navigation area, in the form of nodes. Each node is composed of a collection of images that covers the complete field of view from a certain position. The algorithm solves the node retrieval problem and estimates the nodes' spatial arrangement. With these aims, it uses the visual information captured along some routes that cover the navigation area. As a result, the algorithm builds a graph that reflects the distribution and adjacency relations between nodes (the map). After the map building, we also propose a route path estimation system. This algorithm takes advantage of the multiscale analysis. The accuracy of the pose estimation is not limited to the node locations but extends to intermediate positions between them. The algorithms have been tested using two different databases captured in real indoor environments under dynamic conditions.

  19. Multilayers quantitative X-ray fluorescence analysis applied to easel paintings.

    Science.gov (United States)

    de Viguerie, Laurence; Sole, V Armando; Walter, Philippe

    2009-12-01

    X-ray fluorescence spectrometry (XRF) allows a rapid and simple determination of the elemental composition of a material. As a non-destructive tool, it has been extensively used for analysis in art and archaeology since the early 1970s. Whereas it is commonly used for qualitative analysis, recent efforts have been made to develop quantitative treatment, even with portable systems. However, the interpretation of the results obtained with this technique can turn out to be problematic in the case of layered structures such as easel paintings. The use of differential X-ray attenuation enables modelling of the various layers: indeed, the absorption of X-rays through different layers results in a modification of the intensity ratios between the different characteristic lines. This work focuses on the possibility of using XRF with the fundamental parameters method to reconstruct the composition and thickness of the layers. This method was tested on several multilayer standards and gives a maximum error of 15% for thicknesses and of 10% for concentrations. On a painting test sample that was rather inhomogeneous, the XRF analysis provides an average value. This method was applied in situ to estimate the thickness of the layers of a painting by Marco d'Oggiono, a pupil of Leonardo da Vinci.
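
    The differential-attenuation principle lends itself to a Beer-Lambert sketch; the code below uses assumed, illustrative attenuation values to show how a covering layer changes the Kβ/Kα intensity ratio of an underlying pigment as its thickness grows (a simplified picture, not the paper's full fundamental-parameters treatment).

        import numpy as np

        mu_ka, mu_kb = 52.0, 38.0      # cm^2/g in the cover layer for Ka and Kb (assumed)
        rho = 2.3                      # g/cm^3, cover layer density (assumed)
        ratio0 = 0.20                  # intrinsic Kb/Ka intensity ratio (assumed)

        def kb_ka_ratio(thickness_um):
            t = thickness_um * 1e-4    # micrometres to cm
            # Each line is attenuated as exp(-mu*rho*t); the ratio keeps the difference.
            return ratio0 * np.exp(-(mu_kb - mu_ka) * rho * t)

        for t_um in (0, 10, 30, 60):
            print("cover layer %3d um -> Kb/Ka = %.3f" % (t_um, kb_ka_ratio(t_um)))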

  20. Practical considerations for sensitivity analysis after multiple imputation applied to epidemiological studies with incomplete data

    Science.gov (United States)

    2012-01-01

    Background: Multiple imputation as usually implemented assumes that data are Missing At Random (MAR), meaning that the underlying missing data mechanism, given the observed data, is independent of the unobserved data. To explore the sensitivity of the inferences to departures from the MAR assumption, we applied the method proposed by Carpenter et al. (2007). This approach aims to approximate inferences under a Missing Not At Random (MNAR) mechanism by reweighting estimates obtained after multiple imputation, where the weights depend on the assumed degree of departure from the MAR assumption. Methods: The method is illustrated with epidemiological data from a surveillance system of hepatitis C virus (HCV) infection in France during the 2001–2007 period. The subpopulation studied included 4343 HCV-infected patients who reported drug use. Risk factors for severe liver disease were assessed. After performing complete-case and multiple imputation analyses, we applied the sensitivity analysis to 3 risk factors for severe liver disease: past excessive alcohol consumption, HIV co-infection and infection with HCV genotype 3. Results: In these data, the association between severe liver disease and HIV was underestimated if, given the observed data, the chance of observing HIV status is high when it is positive. Inferences for the two other risk factors were robust to plausible local departures from the MAR assumption. Conclusions: We have demonstrated the practical utility of, and advocate, a pragmatic, widely applicable approach to exploring plausible departures from the MAR assumption post multiple imputation. We have developed guidelines for applying this approach to epidemiological studies. PMID:22681630

  1. Multivariate analysis in the frequency domain applied to the Laguna Verde nuclear power plant

    International Nuclear Information System (INIS)

    Castillo D, R.; Ortiz V, J.; Calleros M, G.

    2006-01-01

    Noise analysis is an auxiliary tool for detecting abnormal operating conditions of equipment, instruments or systems that affect the dynamic behaviour of the reactor. The normalized power spectral density (NPSD) has usually been used to monitor the behaviour of some components of the reactor, for example, the jet pumps, the recirculation pumps, the flow control valves in the recirculation loops, etc. A change in behaviour is determined by individual analysis of the NPSD of the signals of the components under study. An alternative analysis that can provide more information on the component under surveillance is multivariate autoregressive (MAR) analysis, which reveals the relationships that exist among diverse signals of the reactor systems in the time domain. In the frequency domain, the relative power contribution (RPC) quantifies the influence of the system variables on a variable of interest. The RPC therefore makes it possible to determine, for a peak shown in the NPSD of a variable, the influence of other variables at that frequency of interest. This facilitates, in principle, the tracking of the important physical parameters during an event, and the study of their interrelation. In this work, by way of example of the application of the RPC, two events that occurred at the Laguna Verde plant are analyzed: the control rod block alarms due to high readings in the average power monitors, in which a power peak of 12% peak-to-peak amplitude appeared, and the power oscillations event. The main result obtained from the analysis of the control rod block alarm event was that the power peak observed in the signals of the average power monitors was caused by the movement of the recirculation flow control valve of loop B. In the other, oscillation, event the results show the mechanism of the oscillation of
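
    The NPSD computation itself is straightforward; the sketch below (Python/SciPy, a synthetic average-power-monitor-like signal with an assumed sampling rate) estimates the spectrum with Welch's method and normalizes it by the squared mean of the signal.

        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(8)
        fs = 50.0                                  # Hz, assumed sampling rate
        t = np.arange(0, 120, 1 / fs)
        # Mean power level, a 0.5 Hz oscillation and broadband noise.
        x = 100.0 + 1.5 * np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, 0.5, t.size)

        f, pxx = welch(x - x.mean(), fs=fs, nperseg=2048)
        npsd = pxx / x.mean() ** 2                 # normalized power spectral density
        print("dominant frequency: %.2f Hz" % f[np.argmax(npsd)])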

  2. Laws' masks descriptors applied to bone texture analysis: an innovative and discriminant tool in osteoporosis

    International Nuclear Information System (INIS)

    Rachidi, M.; Marchadier, A.; Gadois, C.; Lespessailles, E.; Chappard, C.; Benhamou, C.L.

    2008-01-01

    The objective of this study was to explore Laws' masks analysis as a means of describing structural variations of trabecular bone due to osteoporosis on high-resolution digital radiographs, and to check its dependence on the spatial resolution. Laws' masks are well established as one of the best methods for texture analysis in image processing and are used in various applications, but not in bone tissue characterisation. The method is based on masks that aim to filter the images. From each mask, five classical statistical parameters can be calculated. The study was performed on 182 healthy postmenopausal women with no fractures and 114 age-matched women with fractures [26 hip fractures (HFs), 29 vertebral fractures (VFs), 29 wrist fractures (WFs) and 30 other fractures (OFs)]. For all subjects, radiographs of the calcaneus were obtained with a new high-resolution X-ray device with direct digitisation (BMA, D3A, France). The lumbar spine, femoral neck and total hip bone mineral density (BMD) were assessed by dual-energy X-ray absorptiometry. In terms of reproducibility, the best results were obtained with the TR E5E5 mask, especially for three parameters, "mean", "standard deviation" and "entropy", with in vivo mid-term root mean square average coefficients of variation (RMSCV%) of 1.79, 4.24 and 2.05, respectively. The "mean" and "entropy" parameters had a better reproducibility, but "standard deviation" showed a better discriminant power. Thus, for univariate analysis, the difference between subjects with fractures and controls was significant (P < 10⁻³) and significant for each fracture group independently (P < 10⁻⁴ for HF, P = 0.025 for VF and P < 10⁻³ for OF). After multivariate analysis with adjustment for age and total hip BMD, the difference concerning the "standard deviation" parameter remained statistically significant between the control group and the HF and VF groups (P < 10⁻⁵ and P = 0.04, respectively). No significant correlation between these Laws' masks parameters and
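
    The texture computation is compact; the sketch below (Python/SciPy, a synthetic image standing in for a calcaneus radiograph) builds the E5E5 mask from the standard 1D Laws "edge" kernel and extracts local texture-energy statistics.

        import numpy as np
        from scipy.ndimage import convolve, uniform_filter

        E5 = np.array([-1.0, -2.0, 0.0, 2.0, 1.0])    # standard Laws E5 kernel
        mask_e5e5 = np.outer(E5, E5)                  # 5x5 E5E5 mask

        rng = np.random.default_rng(9)
        img = rng.normal(0, 1, (128, 128))            # stand-in trabecular texture
        img -= uniform_filter(img, 15)                # remove the local mean

        filtered = convolve(img, mask_e5e5)
        energy = uniform_filter(np.abs(filtered), 15) # local texture energy map
        print("mean %.3f, standard deviation %.3f" % (energy.mean(), energy.std()))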

  3. Applying Different Independent Component Analysis Algorithms and Support Vector Regression for IT Chain Store Sales Forecasting

    Science.gov (United States)

    Dai, Wensheng

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method and a prediction tool, such as support vector regression (SVR), is a useful method for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods, including spatial ICA (sICA), temporal ICA (tICA) and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting of an IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting. PMID:25165740
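
    The ICA-plus-SVR pipeline can be sketched with scikit-learn, using FastICA as a generic stand-in for the paper's spatiotemporal variant; the branch sales series below are synthetic.

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.svm import SVR

        rng = np.random.default_rng(10)
        t = np.arange(200)
        trend, season = 0.05 * t, 10 * np.sin(2 * np.pi * t / 12)
        branches = np.column_stack([trend + season + rng.normal(0, 1, 200)
                                    for _ in range(8)])      # 8 branch series
        total = branches.sum(axis=1)                         # chain-level sales target

        ica = FastICA(n_components=3, random_state=0)
        features = ica.fit_transform(branches)               # ICA features per time step

        model = SVR(kernel="rbf", C=10.0).fit(features[:150], total[:150])
        pred = model.predict(features[150:])
        print("test RMSE: %.2f" % np.sqrt(np.mean((pred - total[150:]) ** 2)))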

  4. Applying different independent component analysis algorithms and support vector regression for IT chain store sales forecasting.

    Science.gov (United States)

    Dai, Wensheng; Wu, Jui-Yu; Lu, Chi-Jie

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method and a prediction tool, such as support vector regression (SVR), is a useful method for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods, including spatial ICA (sICA), temporal ICA (tICA) and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting of an IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting.

  5. Applying a social network analysis (SNA) approach to understanding radiologists' performance in reading mammograms

    Science.gov (United States)

    Tavakoli Taba, Seyedamir; Hossain, Liaquat; Heard, Robert; Brennan, Patrick; Lee, Warwick; Lewis, Sarah

    2017-03-01

    Rationale and objectives: Observer performance has been widely studied through examining the characteristics of individuals. Applying a systems perspective to understanding the system's output also requires a study of the interactions between observers. This research describes a mixed-methods approach that applies social network analysis (SNA) together with the more traditional approach of examining personal/individual characteristics in understanding observer performance in mammography. Materials and Methods: Using social network theories and measures to understand observer performance, we designed a social network survey instrument for collecting personal and network data about observers involved in mammography performance studies. We present the results of a study by our group in which 31 Australian breast radiologists reviewed 60 mammographic cases (20 abnormal and 40 normal) and then completed an online questionnaire about their social networks and personal characteristics. A jackknife free-response operating characteristic (JAFROC) method was used to measure the performance of the radiologists. The JAFROC results were tested against various personal and network measures to verify the theoretical model. Results: The results suggest a strong association between social networks and observer performance for Australian radiologists. Network factors accounted for 48% of the variance in observer performance, compared with 15.5% for personal characteristics in this study group. Conclusion: This study suggests a strong new direction for research into improving observer performance. Future studies of observer performance should consider the influence of social networks as part of their research paradigm, with equal or greater vigour than the traditional constructs of personal characteristics.

  6. Raman spectroscopy and capillary electrophoresis applied to forensic colour inkjet printer inks analysis.

    Science.gov (United States)

    Król, Małgorzata; Karoly, Agnes; Kościelniak, Paweł

    2014-09-01

    Forensic laboratories are increasingly engaged in the examination of fraudulent documents and, importantly, in many cases these are inkjet-printed documents. That is why systematic approaches to inkjet printer ink comparison and identification have been carried out by both non-destructive and destructive methods. In this study, micro-Raman spectroscopy and capillary electrophoresis (CE) were applied to the analysis of colour inkjet printer inks. Micro-Raman spectroscopy was used to study the chemical composition of colour inks in situ on a paper surface. It helps to characterize and differentiate inkjet inks, and can be used to create a spectral database of inks taken from different cartridge brands and cartridge numbers. Capillary electrophoresis in micellar electrokinetic capillary chromatography mode was applied to separate the colour and colourless components of the inks, enabling group identification of those components which occur in a sufficient concentration (giving intense peaks). Finally, on the basis of the obtained results, differentiation of the analysed inks was performed. Twenty-three samples of inkjet printer inks were examined and the discriminating power (DP) values for both presented methods were established in the routine work of experts during the result interpretation step. DP was found to be 94.0% (Raman) and 95.6% (CE) when all the analysed ink samples were taken into account, and 96.7% (Raman) and 98.4% (CE) when only cartridges with different index numbers were considered. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
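
    The DP figures follow from a simple pairwise count; the sketch below (Python, with a hypothetical grouping of the 23 samples) computes the fraction of sample pairs the method can tell apart.

        from itertools import combinations

        def discriminating_power(labels):
            # Pairs assigned different group labels count as discriminated.
            pairs = list(combinations(range(len(labels)), 2))
            discriminated = sum(labels[i] != labels[j] for i, j in pairs)
            return discriminated / len(pairs)

        # 23 ink samples; three of them indistinguishable from another sample each.
        labels = list(range(20)) + [0, 1, 2]
        print("DP = %.1f%%" % (100 * discriminating_power(labels)))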

  7. Essays on environmental policy analysis: Computable general equilibrium approaches applied to Sweden

    International Nuclear Information System (INIS)

    Hill, M.

    2001-01-01

    This thesis consists of three essays within the field of applied environmental economics, with the common basic aim of analyzing effects of Swedish environmental policy. Starting out from Swedish environmental goals, the thesis assesses a range of policy-related questions. The objective is to quantify policy outcomes by constructing and applying numerical models especially designed for environmental policy analysis. Static and dynamic multi-sectoral computable general equilibrium models are developed in order to analyze the following issues. The costs and benefits of a domestic carbon dioxide (CO₂) tax reform; special attention is given to how these costs and benefits depend on the structure of the tax system and, furthermore, how they depend on policy-induced changes in 'secondary' pollutants. The effects of allowing for emission permit trading through time when the long-term domestic environmental goal is specified in CO₂ stock terms. The effects on long-term projected economic growth and welfare that are due to damages from emission flows and accumulation of 'local' pollutants (nitrogen oxides and sulfur dioxide), as well as the outcome of environmental policy when costs and benefits are considered in an integrated environmental-economic framework.

  8. Applying Different Independent Component Analysis Algorithms and Support Vector Regression for IT Chain Store Sales Forecasting

    Directory of Open Access Journals (Sweden)

    Wensheng Dai

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method and a prediction tool, such as support vector regression (SVR), is a useful method for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods, including spatial ICA (sICA), temporal ICA (tICA) and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting of an IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting.

  9. Density profile analysis during an ELM event in ASDEX Upgrade H-modes

    International Nuclear Information System (INIS)

    Nunes, I.; Manso, M.; Serra, F.; Horton, L.D.; Conway, G.D.; Loarte, A.

    2005-01-01

    This paper reports results of measurements of electron density profiles. Here we analyse the behaviour of the electron density for a set of type I ELMy H-mode discharges in ASDEX Upgrade in which the plasma current, plasma density, triangularity and input power were varied. Detailed measurements are made of the radial extent of the perturbation of the density profile caused by the edge localized mode (ELM) crash (the ELM-affected depth), the velocity of the radial propagation of the perturbation, and the width and gradient of the density pedestal. A type I ELM event affects the density profile over the outermost 20-40% of the plasma minor radius. In the scrape-off layer (SOL) the density profile broadens, while in the pedestal region the density decreases, resulting in a smaller density gradient; these changes define a pivot point around which the density profile evolves. The average radial velocity in the SOL is in the range 125-150 m/s and is approximately constant for all density layers far from the pivot point. The width of the density pedestal is approximately constant for all the ELMy H-mode discharges analysed, with values between 2 and 3.5 cm. These results are then compared with an analytical model in which the width of the density pedestal is predominantly set by ionization (the neutral penetration model). The width of the density profiles for L-mode discharges is included for comparison, since L- and H-modes have different particle transport. No agreement between the experimental results and the model is found.
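
    Pedestal width and gradient of the kind quantified here are commonly extracted by fitting a modified-tanh shape to the edge density profile. A generic sketch on synthetic data, assuming that approach (not necessarily the authors' exact reflectometry analysis):

      import numpy as np
      from scipy.optimize import curve_fit

      def mtanh(r, n_ped, n_sol, r_sym, width):
          # tanh-shaped edge profile: pedestal top n_ped inside, SOL level n_sol outside
          return 0.5 * (n_ped + n_sol) - 0.5 * (n_ped - n_sol) * np.tanh(2 * (r - r_sym) / width)

      rng = np.random.default_rng(2)
      r = np.linspace(0.90, 1.05, 60)                    # normalised minor radius
      data = mtanh(r, 4.0, 0.4, 1.0, 0.03) * (1 + 0.03 * rng.normal(size=r.size))  # 10^19 m^-3

      p, _ = curve_fit(mtanh, r, data, p0=[3.0, 1.0, 1.0, 0.05])
      n_ped, n_sol, r_sym, width = p
      # the maximum gradient of this shape is (n_ped - n_sol) / width
      print(f"pedestal width ~ {width:.3f}, max gradient ~ {(n_ped - n_sol) / width:.1f}")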

  10. Methods of economic analysis applied to fusion research. Fourth annual report

    International Nuclear Information System (INIS)

    Hazelrigg, G.A. Jr.

    1980-01-01

    The study reported here involved three separate tasks. The first task deals with the development of expected utility analysis techniques for the economic evaluation of fusion research. A decision-analytic model is developed that incorporates market uncertainties, as well as technological uncertainties, in an economic evaluation of long-range energy research; the model is applied to the case of fusion research. The second task deals with the potential effects of long-range energy RD&D on fossil fuel prices. ECON's previous fossil fuel price model is extended to incorporate a dynamic demand function, which supports price fluctuations such as those observed in the marketplace. The third task examines alternative uses of fusion technologies, specifically superconducting technologies and first wall materials, to determine the potential for alternative, non-fusion uses of these technologies. In both cases, numerous alternative uses are found.
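
    As an illustration of the decision-analytic idea only (all probabilities, payoffs and the risk-aversion coefficient below are invented, not taken from the report):

      import math

      # joint technological and market scenarios with illustrative probabilities
      tech = {"success": 0.3, "partial": 0.4, "failure": 0.3}
      market = {"high_fossil_price": 0.5, "low_fossil_price": 0.5}
      payoff = {  # net benefit of the R&D programme in each joint scenario ($B)
          ("success", "high_fossil_price"): 50.0, ("success", "low_fossil_price"): 20.0,
          ("partial", "high_fossil_price"): 10.0, ("partial", "low_fossil_price"):  2.0,
          ("failure", "high_fossil_price"): -5.0, ("failure", "low_fossil_price"): -5.0,
      }

      def utility(x, rho=20.0):
          # constant-absolute-risk-aversion utility of a monetary payoff
          return 1.0 - math.exp(-x / rho)

      eu = sum(pt * pm * utility(payoff[t, m])
               for t, pt in tech.items() for m, pm in market.items())
      print(f"expected utility of the programme: {eu:.3f}")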

  11. [Methodological novelties applied to the anthropology of food: agent-based models and social networks analysis].

    Science.gov (United States)

    Díaz Córdova, Diego

    2016-01-01

    The aim of this article is to introduce two methodological strategies that have not often been utilized in the anthropology of food: agent-based models and social network analysis. In order to illustrate these methods in action, two cases based on materials typical of the anthropology of food are presented. For the first strategy, fieldwork carried out in Quebrada de Humahuaca (province of Jujuy, Argentina) regarding meal recall was used, and for the second, elements of the concept of "domestic consumption strategies" applied by Aguirre were employed. The underlying idea is that, given that eating is recognized as a "total social fact" and, therefore, as a complex phenomenon, the methodological approach must also be characterized by complexity. The greater the number of methods utilized (with the appropriate rigor), the better able we will be to understand the dynamics of feeding in the social environment.
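
    For the social network strategy, a minimal sketch of the kind of analysis involved, using networkx on an invented household food-sharing network (the households and ties are hypothetical):

      import networkx as nx
      from networkx.algorithms import community

      # hypothetical network: households linked when they report sharing food or meals
      G = nx.Graph([
          ("hh1", "hh2"), ("hh1", "hh3"), ("hh2", "hh3"),
          ("hh3", "hh4"), ("hh4", "hh5"), ("hh5", "hh6"), ("hh4", "hh6"),
      ])

      # centrality highlights households that broker food exchange between clusters
      print("degree:", nx.degree_centrality(G))
      print("betweenness:", nx.betweenness_centrality(G))
      print("groups:", [sorted(c) for c in community.greedy_modularity_communities(G)])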

  12. Graph theory applied to noise and vibration control in statistical energy analysis models.

    Science.gov (United States)

    Guasch, Oriol; Cortés, Lluís

    2009-06-01

    A fundamental aspect of noise and vibration control in statistical energy analysis (SEA) models consists in first identifying and then reducing the energy flow paths between subsystems. In this work, it is proposed to make use of some results from graph theory to address both issues. On the one hand, linear and path algebras applied to adjacency matrices of SEA graphs are used to determine the existence of paths of any order between subsystems, to count and label them, to find extremal paths, and to determine the power flow contributions from groups of paths. On the other hand, a strategy is presented that makes use of graph cut algorithms to reduce the energy flow from a source subsystem to a receiver one while modifying as few internal and coupling loss factors as possible.
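
    The path-counting use of adjacency matrices mentioned above rests on a standard result: the (i, j) entry of the k-th power of the adjacency matrix counts the k-step paths from i to j. A minimal sketch on an invented four-subsystem SEA graph:

      import numpy as np

      # adjacency matrix of a small SEA graph: A[i, j] = 1 if energy can flow i -> j
      A = np.array([
          [0, 1, 1, 0],
          [0, 0, 1, 1],
          [0, 0, 0, 1],
          [0, 0, 0, 0],
      ])

      # (A^k)[i, j] counts the k-step energy transmission paths from i to j
      for k in range(1, 4):
          Ak = np.linalg.matrix_power(A, k)
          print(f"paths of order {k} from subsystem 0 to 3: {Ak[0, 3]}")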

  13. Econometric analysis of consumer behaviour: a linear expenditure system applied to energy

    International Nuclear Information System (INIS)

    Giansante, C.; Ferrari, V.

    1996-12-01

    In the economics literature, the specification of expenditure systems is a well-known subject. The problem is to define a coherent representation of consumer behaviour through functional forms that are easy to estimate. In this work, the Stone-Geary Linear Expenditure System and its multi-level decision process version are used. The Linear Expenditure System is characterized by an easy estimation procedure, and its multi-level specification allows for substitution and complementarity relations between goods. Moreover, the separability condition on the utility function, on which the Utility Tree Approach is based, justifies the use of an estimation procedure in two or more steps. This allows a high degree of disaggregation of expenditure categories, impossible to reach with the plain Linear Expenditure System. The analysis is applied to the energy sectors.
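
    A minimal sketch of the Stone-Geary LES demand equations, x_i = gamma_i + (beta_i / p_i)(M - p . gamma) (prices, subsistence quantities and budget shares below are invented, and the multi-level estimation procedure is not shown):

      import numpy as np

      p     = np.array([1.0, 2.0, 0.5])      # prices: food, energy, other (illustrative)
      gamma = np.array([10.0, 5.0, 8.0])     # subsistence quantities
      beta  = np.array([0.3, 0.2, 0.5])      # marginal budget shares, sum to 1
      M     = 100.0                          # total expenditure

      supernumerary = M - p @ gamma          # income left after subsistence spending
      x = gamma + beta * supernumerary / p   # LES demands
      print("demands:", x, "budget check:", p @ x)   # p @ x equals M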

  14. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Daele, Timothy, Van; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

    The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration...... and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring......) identifiability problems. By using the presented approach it is possible to detect potential identifiability problems and avoid pointless calibration (and experimental!) effort....
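
    A hedged sketch of one common numerical route to such an analysis, using finite-difference sensitivities of a Michaelis-Menten forward rate (a stand-in for the Shin and Kim kinetics; this is not necessarily the authors' exact procedure):

      import numpy as np

      def rate(S, Vmax, Km):
          # Michaelis-Menten forward reaction rate
          return Vmax * S / (Km + S)

      S = np.linspace(0.1, 10.0, 25)          # substrate concentrations of the design
      theta = np.array([2.0, 1.5])            # nominal Vmax, Km (invented)

      # finite-difference sensitivities of the output to each parameter
      eps = 1e-4
      J = np.column_stack([
          (rate(S, theta[0] + eps, theta[1]) - rate(S, *theta)) / eps,
          (rate(S, theta[0], theta[1] + eps) - rate(S, *theta)) / eps,
      ])

      # near-zero singular values or |correlation| close to 1 flag practically
      # unidentifiable parameter combinations for this experimental design
      print("singular values:", np.linalg.svd(J, compute_uv=False))
      print("sensitivity correlation:", np.corrcoef(J.T)[0, 1])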

  15. Pretreatment procedures applied to samples to be analysed by neutron activation analysis at CDTN/CNEN

    International Nuclear Information System (INIS)

    Francisco, Dovenir; Menezes, Maria Angela de Barros Correia

    2009-01-01

    The neutron activation technique - using several methods - accounts for 80% of the analytical demand of the Division for Reactor and Analytical Techniques at CDTN/CNEN, Belo Horizonte, Minas Gerais. This scenario emphasizes the Laboratory's responsibility to provide and assure the quality of its measurements, and the first step in assuring the quality of the results is the preparation of the samples. This paper therefore describes the experimental procedures adopted at CDTN/CNEN to standardize the conditions of analysis and to avoid contamination by elements present everywhere. Some of the procedures are based on methods described in the literature; others are based on many years of experience in preparing samples from many kinds of matrices. The procedures described relate to geological materials - soil, sediment, rock, gems, clay, archaeological ceramics and ore - biological materials - hair, fish, plants, food - water, etc. Analytical results for sediment samples are shown as an example, illustrating the efficiency of the experimental procedure. (author)

  16. Common faults in turbines and applying neural networks for fault diagnosis by vibration analysis

    International Nuclear Information System (INIS)

    Masoudifar, M.; AghaAmini, M.

    2001-01-01

    Today, fault diagnosis of rotating machinery based on vibration analysis is an effective ingredient of predictive maintenance programs. In this method, the vibration level of the turbines is monitored, and if it exceeds the allowable limit, the vibration data are analyzed and growing faults can be detected. However, because of the high complexity of the monitored system, interpretation of the measured data is difficult. Therefore, the design of fault diagnosis expert systems that draw on experts' technical experience and knowledge seems to be the best solution. In this paper, several common faults in turbines are first studied, and then it is explained how neural networks can be applied to interpret the vibration data for fault diagnosis.
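
    A minimal sketch of the neural-network idea on invented spectral vibration features (the fault signatures, feature set and network size are assumptions, not from the paper):

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(3)
      # synthetic spectral features: amplitudes at 0.5x, 1x and 2x shaft speed,
      # with stylized signatures for each fault class (values are invented)
      unbalance    = rng.normal([0.2, 5.0, 1.0], 0.3, size=(60, 3))
      misalignment = rng.normal([0.2, 2.0, 4.0], 0.3, size=(60, 3))
      oil_whirl    = rng.normal([3.0, 1.0, 0.5], 0.3, size=(60, 3))

      X = np.vstack([unbalance, misalignment, oil_whirl])
      y = np.repeat(["unbalance", "misalignment", "oil whirl"], 60)

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
      print("held-out accuracy:", clf.fit(Xtr, ytr).score(Xte, yte))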

  17. Applied behavior analysis programs for autism: sibling psychosocial adjustment during and following intervention use.

    Science.gov (United States)

    Cebula, Katie R

    2012-05-01

    Psychosocial adjustment in siblings of children with autism whose families were using a home-based, applied behavior analysis (ABA) program was compared to that of siblings in families who were not using any intensive autism intervention. Data gathered from parents, siblings and teachers indicated that siblings in ABA families experienced neither significant drawbacks nor benefits in terms of their behavioral adjustment, sibling relationship quality and self-concept compared to control group siblings, either during or following intervention use. Parents and siblings perceived improvements in sibling interaction since the outset of ABA, with parents somewhat more positive in their views than were siblings. Social support was associated with better sibling outcomes in all groups. Implications for supporting families using ABA are considered.

  18. [Matrix analysis of the client's voice: QFD applied to healthcare management].

    Science.gov (United States)

    Lorenzo, Susana; Mira, José; Olarte, Mayerly; Guerrero, Johana; Guerrero, Johann; Moyano, Silvia

    2004-01-01

    Objective: To apply quality function deployment (QFD) methodology to identify clients' needs by relating complaints to perceived quality domains. Setting: A hospital within the Public Health Service of Madrid. Methods: Matrix analysis based on the QFD model was performed, using the surveys conducted in the hospital with the Servqhos questionnaire (1998-2002) and a sample of 363 complaints made in 2002; the complaints analyzed were selected using a non-probabilistic sampling method. Results: QFD methodology proved highly useful, allowing complaints to be related to the results of a perceived quality questionnaire and identifying the attributes with the greatest influence on patient satisfaction. It also allowed us to identify areas for improvement according to clients' needs.
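
    A minimal sketch of the matrix step at the core of QFD, prioritizing quality domains by weighting a complaints-to-domains relationship matrix with complaint frequencies (all categories, weights and counts below are invented):

      import numpy as np

      # illustrative "house of quality" core: rows = complaint categories,
      # columns = perceived-quality domains, weights 9/3/1 = strong/medium/weak
      complaints = ["waiting time", "information received", "staff courtesy"]
      domains    = ["responsiveness", "communication", "empathy"]
      frequency  = np.array([120, 60, 25])   # complaint counts (invented)

      R = np.array([
          [9, 3, 1],
          [3, 9, 3],
          [1, 3, 9],
      ])

      priority = frequency @ R               # importance score per domain
      for d, s in sorted(zip(domains, priority), key=lambda t: -t[1]):
          print(f"{d}: {s}")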

  19. Challenges in the implementation of a quality management system applied to radiometric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dias, Danila C.S.; Bonifacio, Rodrigo L.; Nascimento, Marcos R.L.; Silva, Nivaldo C. da; Taddei, Maria Helena T., E-mail: danilacdias@gmail.com [Comissao Nacional de Energia Nuclear (LAPOC/CNEN-MG), Pocos de Caldas, MG (Brazil). Laboratorio de Pocos de Caldas

    2015-07-01

    The concept of quality in laboratories is well established as an essential factor in the search for reliable results. Since its first version was published in 1999, ISO/IEC 17025 has been applied in the industrial and research fields to a wide range of laboratory analyses. However, the implementation of a Quality Management System (QMS) still poses great challenges to institutions and companies. The purpose of this work is to describe the constraints related to the implementation of ISO/IEC 17025 as applied to analytical assays of radionuclides, by studying the case of the Pocos de Caldas Laboratory of the Brazilian Commission for Nuclear Energy, where a project for the accreditation of techniques for the determination of radionuclides in water, soil, sediment and food samples has been conducted since 2011. The challenges presented by this project arise on the administrative side, where the governmental nature of the institution translates into uneven availability of resources, and on the organizational side, where a QMS requires inevitable changes in organizational culture. It is important to point out that when it comes to the accreditation of analyses involving radioactive elements, many aspects must be treated carefully because of their very particular nature. Among these concerns are the determination of analysis uncertainties, access to international proficiency studies, the transportation of international radioactive samples and certified reference materials (CRMs), the choice of parameters for the validation of analytical methods, and the lack of documentation and specialized personnel regarding quality in radiometric measurements. Through an effective management system, the institution is overcoming these challenges and moving toward ISO/IEC 17025 accreditation. (author)
